text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
How to monitor panics with Raygun for GolangPosted Apr 8, 2015 | 4 min. (652 words) information you need to solve them. Get valuable awareness of how well your published/deployed applications are really doing now with Raygun for Golang.
Basic usage
Install
This tutorial assumes you know the basics of Golang. To keep things simple, I’m trying out Raygun4Go in the hello world test program described here. To get started, install the Raygun4Go package using go get in a terminal:
go get github.com/MindscapeHQ/raygun4go
To ensure full panic coverage in your Golang program, we recommend setting up Raygun4Go to be one of the first steps in your main method (or in webservers, your request handling method). Open the .go file containing the main method (in my case, hello.go), and then add an import for Raygun4Go.
import "github.com/mindscapehq/raygun4go"
Set up
Now in the main method, create a raygun client object by calling the New method which requires an application name, and your Raygun API key that you want to use for this app. The application name can be whatever you want, and the API key is shown to you when you create a new Application in your Raygun account, or can be viewed in your application settings.
raygun, err := raygun4go.New("Hello", "YOUR_APP_API_KEY")
In the spirit of good error handling, lets also check that the raygun client creation didn’t cause an error, and print it to the console if it did.
if err != nil { fmt.Println("Unable to create Raygun client: ", err.Error()) }
The last step required to set up Raygun4Go is to defer to the HandleError method.
defer raygun.HandleError()
Try it out
To test this out, call panic any where after the defer, then run the program to see the message and stack trace show up in your Raygun dashboard:
panic("Panic for no reason at all")
Manually sending messages
The raygun client object also has a CreateError method for manually sending an error message to your Raygun dashboard. This is great for reporting that something has gone wrong, but a panic doesn’t need to be invoked – for example you may want to report errors from within an error handler, or send a message when a bad status code is returned from a web request. The error report sent to Raygun will even include the stack trace.
raygun.CreateError("No need to panic")
Features
The raygun client object returned from the New method has several chainable features as follows:
- Silent(bool) If set to true, errors will be printed to the console instead of being sent to Raygun.
- Version(string) Sets the program version number to be sent with each error report to Raygun.
- Request(*http.Request) Provides information about any http request responsible for the issue.
- User(string) Sets the user identifier, e.g. a name, email or database id.
- Tags([]string) Sets a list of tags which can help you filter errors in your Raygun dashboard.
- CustomData(interface{}) Sets additional information that can help you debug the problem. (Must work with json.Marshal).
Here is an example of using a few of these features:
raygun.Version("1.0.0").Tags([]string{"Urgent", "Critical", "Fix it now!"}).User("Robbie Robot")
And here is the full listing of my hello.go program:
package main import "fmt" import "github.com/mindscapehq/raygun4go" func main() { raygun, err := raygun4go.New("Hello", "YOUR_APP_API_KEY") if err != nil { fmt.Println("Unable to create Raygun client: ", err.Error()) } defer raygun.HandleError() raygun.Version("1.0.0").Tags([]string{"Urgent", "Critical", "Fix it now!"}).User("Robbie Robot") raygun.CreateError("No need to panic") // Manually send an error message to Raygun sayHello() } func sayHello() { fmt.Println("Hello, world.") panic("Panic for no reason at all") // panic will be sent to Raygun }
Try it yourself
If you want awareness of the issues occurring in your go programs, try Raygun4Go using the simple steps above. If you don’t have an account yet, start a free Raygun trial here, no credit card required. | https://raygun.com/blog/how-to-monitor-panics-with-raygun-for-golang/ | CC-MAIN-2022-27 | refinedweb | 672 | 55.64 |
- Author:
- mrtron
- Posted:
- April 25, 2008
- Language:
- Python
- Version:
- .96
- django python unicode latin1 character encoding
- Score:
- 1 (after 1 ratings) display issues for the end users even if you are using a UTF-8 database encoding. The topic is well covered at Effbot and contains a list of appropriate conversions for each of hte problem characters.
Correcting this for all of your Django models is another issue. Do you handle the re-encoding during the form validation? The save for each model? Create a base class that all your models need to inherit from?
The simplest solution I have created leverages Signals
Combining the re-encoding method suggested at Effbot and the pre_save signal gives you the ability to convert all the problem characters right before the save occurs for any model.
kill_gremlins method replaced with Gabor's suggestion
def kill_gremlins(text): return text.encode('iso-8859-1').decode('cp1252')</pre> <p>?</p> <p>(assuming that we are dealing with mishandled unicode-strings.</p>
#
#
>>> s = u"\u1234" # random unicode character >>> unicodedata.name(s) 'ETHIOPIC SYLLABLE SEE' >>> s.encode("iso-8859-1").decode("cp1252") Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeEncodeError: 'latin-1' codec can't encode character u'\u1234' in position 0: ordinal not in range(256) >>></pre> <p>Maybe the usecase for this snippet is more limited, but it's not a full replacement for my (rather dated) code.</p>
#
Please login first before commenting. | https://djangosnippets.org/snippets/724/ | CC-MAIN-2016-50 | refinedweb | 242 | 56.25 |
RSS Reader Sample App for Windows Phone 7 ★★★★★★★★★★★★★★★ Benjamin XueMay 25, 20102 0 0 0 This is a simple RSS reader application that shows how easy it is to build applications for Windows Phones 7. In particular, it shows how to work with XML & LINQ to XML. You can download the source code at codeplex.
god work but it does not run vs 2010 .
many errors are there like PhoneApplicationPage could not be found.
no definition FindName?
undefined library or namespace web browser?
Yes.. thats right jimmy. Because, many assmebly moved ord deleted new version …
Instructions here for migrating apps from CTP to Beta.
blogs.msdn.com/…/migrating-apps-from-windows-phone-ctps-to-the-beta-build.aspx | https://blogs.msdn.microsoft.com/innov8showcase/2010/05/25/rss-reader-sample-app-for-windows-phone-7/ | CC-MAIN-2016-30 | refinedweb | 118 | 68.06 |
Re: How to reference custom control from App_Code folder?
- From: "Teemu Keiski" <joteke@xxxxxxxxxxxxxxx>
- Date: Mon, 6 Feb 2006 18:50:08 +0200
If you reference it from App_Code you need to omit the Assembly part of the
Regioster directive, since it's not in an named assembly.
<%@ Register TagPrefix="Fred" Namespace="FredECommerceControls" %>
E.g just have TagPrefix and the namespace declared.
--
Teemu Keiski
ASP.NET MVP, AspInsider
Finland, EU
"Alan Silver" <alan-silver@xxxxxxxxxxxxxxxxxxxx> wrote in message
news:QUeMD+Fv425DFwdT@xxxxxxxxxxxxxxxxxxxxxx
Hello,
I have a custom control that is inside a DLL. Until now, I have had the
source file for this DLL in a development folder on my machine, and I've
been compiling it using csc on the command line. I reference the control
in the DLL on a master page by adding the line to the top of the file...
<%@ Register TagPrefix="Fred" Namespace="FredECommerceControls"
Assembly="ECommControls" %>
...and then something like this where I want the control...
<Fred:SiteLinks
This all worked fine, but didn't let me debug the control in VWD as it
didn't have the source file.
I tried creating an App_Code folder and putting the source file in there
(deleting the DLL from the bin folder), but this gave the following error
when I tried to run the page...
Element 'SiteLinks' is not a known element. This can occur if there is a
compilation error in the Web site.
I presume that it's not compiling the source file, so it can't find the
assembly containing the control. The question is, why? I thought the whole
idea of putting the source file in App_Code was so that it would
automatically be compiled.
Anyone any ideas? TIA
--
Alan Silver
(anything added below this line is nothing to do with me)
.
- Follow-Ups:
- Re: How to reference custom control from App_Code folder?
- From: Alan Silver
- References:
- How to reference custom control from App_Code folder?
- From: Alan Silver
- Prev by Date: flash embedded in C#
- Next by Date: The compiler failed with error code 2000
- Previous by thread: How to reference custom control from App_Code folder?
- Next by thread: Re: How to reference custom control from App_Code folder?
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2006-02/msg00975.html | crawl-002 | refinedweb | 367 | 64.81 |
Docker is a computer program build to allow an easier process of creating and running applications by using containers. Have you often used Docker? If so you may have come across a shell message after logging into a docker container with an intention to edit a text file.
However, don't worry as Senior Software Engineer Maciek Opała is here with possible solutions! Check them out and we hope they help you with your editing needs.
'bash: <EDITOR_NAME>: command not found'— if you’ve ever encountered this shell message after logging into a docker container with an intention to edit a text file, this is a post you should read.
What’s the problem?
Docker is meant to be lightweight (doing one job and doing it well), hence docker containers are trimmed to a bare minimum — they have only necessary packages installed to play the required role in a given project ecosystem. From this point of view, having any editor installed is pointless and introduces needless complication. So if you prepared a 'Dockerfile', built an image and after running a container you need to edit a file you may get surprised:
~/ docker run -it openjdk:11 bash root@d0fb3a0b527c:/# vi Lol.java bash: vi: command not found root@d0fb3a0b527c:/#
What are the possible solutions?
#1 Use volume
Let’s use the following 'Dockerfile':
FROM openjdk:11 WORKDIR "/app"
Now, build an image with:
docker build -t lol .
And finally run the container with a 'volume' attached (a 'volume' can be also created with a 'docker volume create' command):
docker run --rm -it --name=lol -v $PWD/app-vol:/app lol bash
'$PWD/app-vol' folder will be created automatically if it does not exist. Now if you try to list all the files in the '/app' directory you will get an empty result:
~/ docker run --rm -it --name=lol -v $PWD/app-vol:/app lol bash root@4b72fbabb0af:/app# ls
Navigate to the '$PWD/app-vol' directory from another terminal and create a 'Lol.java' file. If you try to list files once again in the container being run you’ll see that newly-created 'Lol.java' file is there:
root@4b72fbabb0af:/app# ls Lol.java root@4b72fbabb0af:/app# cat Lol.java public class Lol { } root@4b72fbabb0af:/app#
As you can see 'cat' command works, so you can at least view the file’s content.
#2 Install the editor
If using a 'volume' is not an option you can install the editor you need to use in a running container. Run the container first (this time mounting a 'volume' is not necessary):
docker run --rm -it --name=lol lol bash
And then install the editor:
root@4b72fbabb0af:/app# apt-get update root@4b72fbabb0af:/app# apt-get -y install vim
Installing a package in a running container is something that should be done incidentally. If you do it repeatedly, it’s a better idea to add the required package to the 'Dockerfile':
FROM openjdk:11 RUN ["apt-get", "update"] RUN ["apt-get", "-y", "install", "vim"] WORKDIR "/app"
It seems that vim-tiny is a light-weight alternative, hence a better choice for an editor in a docker container.
#3 Copy file into a running docker container
Let’s run a container with no editor installed ('Dockerfile' from #1):
docker run --rm -it --name=lol lol bash
(again, no 'volume' needed). If you try to 'ls' files in '/app' folder you’ll get an empty result. This time we will use docker tools to copy the file to the running container. So, on the host machine create the 'Lol.java' file and use the following command to copy the file:
docker cp Lol.java lol:/app
where 'lol' represents the container name. Instead of the name of the container its 'ID' may be also used when copying a file. Files cannot be copied directly between containers. So, if there’s a need to copy a file from one container to an another one, the host machine must be involved.
Another, quite similar option, is to use the 'docker exec' command combined with 'cat'. The following command will also copy the 'Lol.java' file to the running container:
docker exec -i lol sh -c 'cat > /app/Lol.java' < Lol.java
Where '/app/Lol.java' represents a file in a docker container whereas 'Lol.java' is an existing file on the host.
#4 Use linux tools
No favourite (or even any) editor installed on the docker container? No problem! Other linux tools like 'sed', 'awk', 'echo', 'cat', 'cut' are on board and will come to the rescue. With some of them like 'sed' or 'awk' you can edit a file in place. Other, like 'echo', 'cat', 'cut' combined with powerful stream redirection can be used to create and then edit files. As you’ve already seen in the previous examples these tools can be combined with the 'docker exec' command which makes them even more robust.
#5 Use vim (or other editor) remote
IMPORTANT: this idea is bad for many reasons (like running multiple processes in a docker container or enabling others to ssh into a running container via exposed port number 22). I’m showing it rather as a curiosity than something you should use in a day-to-day work. Let’s have a look a the 'Dockerfile', since it has changed a bit:
FROM openjdk:11 RUN ["apt-get", "update"] RUN ["apt-get", "install", "-y", "openssh-server"] RUN mkdir /var/run/sshd RUN echo 'root:lollol0' | chpasswd RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config RUN ["/etc/init.d/ssh", "start"] EXPOSE 22 WORKDIR "/app" CMD ["/usr/sbin/sshd", "-D"]
This time, since 'scp' will be used for remote edit, we need to install 'openssh-server', expose a port, and finally start it. After building the container with the following command:
docker build -t lol .
run it with the following command:
docker run --rm -p 2222:22 -d --name=lol lol
Now, when the container is running you can edit the 'Lol.java' file with the following command:
vim scp://root@localhost:2222//app/Lol.java
After connect confirmation and entering the password 'vi' opens and the file can be edited. Because of this issue, run ':set bt=acwrite' in 'vi' screen and go ahead with file edition. After you finished, save and exit 'vi', confirm with root’s password and you’re done. Now, run:
docker exec -it lol cat /app/Lol.java
to confirm that the file was in fact created and saved.
Why do I need this?
Actually, you do not, Docker containers are meant to be immutable units of work, devoted to running a single, particular process. Images should be built and run without any further intervention. What’s more, when you edit a file in a running docker container you need to ensure that all the processes that depend on the edited file have been notified about the change. If they’re not configured for redeployment on a configuration change, they need to be restarted manually.
Editing files in a docker container might be useful only during development. When you don’t want or even need to build an image, run it and verify it the change introduced has taken the desired effect every single time you add or remove something in 'Dockerfile'. This way you can save some time, but after it’s done, the redundant packages added to an image should be removed.
This article was written by Maciek Opała and posted originally on SoftwareMill Blog. | https://www.signifytechnology.com/blog/2018/12/editing-files-in-a-docker-container-by-maciek-opala | CC-MAIN-2021-04 | refinedweb | 1,256 | 61.06 |
TL;DR: This blog post is aimed to demonstrate how to make a custom Twitter bot in Python using the official Twitter API. The bot will reply to every tweet in which it got mentioned with a specific keyword. The reply will be in the form of an image with a quote written on it.
Source code of this application is available in this GitHub repository
Introduction
In this article, you'll learn how to make your own Twitter bot in Python using Tweepy, a Python library for accessing the official Twitter API.
You will be creating a Reply to mentions bot, which will send a reply to everyone's tweet who has mentioned it with a specific keyword.
The reply will be in the form of an image that you will generate and put some text over it. This text will be a quote that you will fetch from a third-party API.
Here's how it will look like:
Prerequisites
To follow along with this tutorial, make sure you have:
An AWS account
You are going to deploy the final application to AWS Elastic Beanstalk so make sure you are signed up on AWS.
Twitter API authentication credentials
To enable your bot to interact with Twitter, you first have to sign up to a Twitter developer account and create an application for which Twitter will grant you access (There is a detailed explanation of this step in the next section).
Python 3
At the time of writing this article, the latest version is Python 3.9, but it is always recommended to choose a version that is one point revision behind the latest one so that you do not face any compatibility issues with third-party modules. For this tutorial, you can go with Python 3.8.
Installed these external Python libraries on your local environment
- Tweepy — To interact with Twitter API
- Pillow — To create an image and to add texts over it
- Requests — To make HTTP requests to the random quote generator API
- APScheduler — To schedule your job periodically
- Flask — To create a web app for deploying your application on Elastic Beanstalk
Rest all other libraries that you'll see in this project are part of Python's standard library, so you do not need to install them.
Twitter API Authentication Credentials
Any request that is accessing the official Twitter API requires OAuth for authenticating. That's why you need to create those required credentials to be able to use the API. These credentials include:
- A consumer key
- A consumer secret
- An access token
- An access secret
You need to follow the steps below to create your credentials once you have signed up to Twitter:
Step 1: Apply for a Twitter Developer Account
Go to the Twitter developer platform to apply for a developer account.
Twitter will ask for some information about how you're planning to use the developer account. So, you have to specify the use case for your application.
Try to be as specific as possible about your intended use for faster and better chances of getting approval.
Once you submit your application, you'll land on this screen :
Step 2: Create an Application
You'll receive the confirmation back within a week. Once your Twitter developer account access gets approved, create a project on the Twitter developer portal dashboard.
You have to do this process because Twitter permits authentication credentials only for apps. An app can be defined as any tool that uses the Twitter API. You need to provide the following information about your project:
- Project name: a name to identify your project (such as Reply-To-Mention-Bot)
- Category: Select the category to which your project belongs. In this case, choose "Making a bot."
- Description: The purpose of your project or how users will use your app (such as this app is used to automatically respond to tweets)
- Name of app: Finally, enter the name of your app
Step 3: Create the Authentication Credentials
To create your authentication credentials, firstly, go to your Twitter apps section. Here you'll find the "Keys and Tokens" tab; clicking this will take you to another page where you can generate the credentials.
After generating the credentials, save them to your local machine to later use in your code. Inside your project folder, create a new file called
credentials.py and store these four keys in the key-value format as shown below:
access_token="XXXXXXX" access_token_secret="XXXXXXXX" API_key="XXXXXXX" API_secret_key="XXXXXXXX"
You can even test the credentials to check if everything is working as expected using the following code snippet:
import tweepy # Authenticate to Twitter auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET") auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET") api = tweepy.API(auth) try: api.verify_credentials() print("Authentication Successful") except: print("Authentication Error")
If everything is correct, you should be able to see a response saying "Authentication Successful".
Understanding Tweepy
Tweepy is an open-sourced, easy-to-use Python library for accessing the Twitter API. It gives you an interface to access the API from your Python application.
To install the latest version of Tweepy, type the following command in your console:
pip install tweepy
Alternatively, you can also install it from the GitHub repository.
pip install git+
Let's now understand some of its basic functionalities:
OAuth
Tweepy provides an
OAuthHandler class that takes care of the OAuth required by Twitter to authenticate API calls.
The code you just saw above depicts the OAuth functionality by Tweepy.
Twitter API wrapper
Tweepy also provides an API class for accessing the Twitter RESTful API methods, which you can use to access various Twitter functionalities. You can find those methods here and the most commonly used are listed below:
- Methods for tweets
- Methods for users
- Methods for user timelines
- Methods for trends
- Methods for likes
Models
When you call any of the API methods that you just saw above, you'll get a Tweepy model class instance in response. This will contain the response returned from Twitter. For example:
user = api.get_user('apoorv__tyagi')
This returns a User model which contains the data which you can further use in your application. For example:
python
print(user.screen_name) #User Name
print(user.followers_count) #User Follower Count
Fetching the Quote
Now you will begin with the first step towards building your Bot. As stated above, when someone mentions your bot, it will reply to them with an image having a quote written on it.
So to fetch the quote in the first place, you will be calling a Random quote generator API
To do that, create a Python file
tweet_reply.py and make a new method inside it that will make an HTTP request to this API endpoint and get a quote in response.
For this, you can use Python's
requests library.
The requests library is used to make HTTP requests in Python. It abstracts the complexities of making requests behind a simple API so that you can only focus on interacting with services and consuming data in your application.
def get_quote(): URL = "" try: response = requests.get(URL) except: print("Error while calling API...")
The response looks like this:
{ "_id": "FGX8aUpiiS5z", "tags": [ "famous-quotes" ], "content": "I do not believe in a fate that falls on men however they act, but I do believe in a fate that falls on them unless they act.", "author": "Buddha", "authorSlug": "buddha", "length": 125 }
The API returns a JSON response, so to parse it, you can use the
JSON library.
Json is a part of Python's standard library, so you can directly import it using:
import json.
From the response, you will only need the content and the author, so you will make your function return only those values. Here's how the complete function will look like:
def get_quote(): URL = "" try: response = requests.get(URL) except: print("Error while calling API...") res = json.loads(response.text) return res['content'] + "-" + res['author']
Generating Image
You've got your text. Now you need to create an image and put this text over it.
Whenever you need to deal with any image-related tasks in python, always first look for the
Pillow library. Pillow is Python's imaging library which gives powerful image processing capabilities to the Python interpreter along with providing extensive file format support.
Create a seperate file, name it
Wallpaper.py and add one function that will accept the quote as a string in its parameter and will initialize all the required variables to generate an image:
def get_image(quote): image = Image.new('RGB', (800, 500), color=(0, 0, 0)) font = ImageFont.truetype("Arial.ttf", 40) text_color = (200, 200, 200) text_start_height = 100 write_text_on_image(image, quote, font, text_color, text_start_height) image.save('created_image.png')
Let's understand how this function works:
Image.new()method creates a new image using the provided mode & size. The first parameter is the mode to be used for creating the new image. (It could be RGB, RGBA). The second parameter is size. Size is provided in pixels as a tuple of width & height. The last parameter is the color, i.e., what color to use for the image background (Default is black).
ImageFont.truetype()creates a font object. This function loads a font object from the given font file with the specified font size. In this case, you will be using "Arial", you can also use any other font of your choice by downloading it from here. Make sure the font file has .ttf (TrueType font) file extension, and you save it inside the root directory of your project.
text_color&
text_start_heightas the name itself depicts, these are the color and the start height of the text, respectively. RGB(200,200,200) is a "Light Grey" color that can work well over your black color image.
- You call the
write_text_on_image()function, which will put this text over the image using these variables.
image.save()will finally save the image as
created_image.pngfile in your root folder. If there is an already existing file with this name, it will automatically replace it with the new one.
def write
This is the next function in the same file,
Wallpaper.py, where you will be putting text over the image. Let's understand the working of this function as well :
- The
ImageDrawmodule is used to create 2D image objects.
textwrap.wrap()wraps the single paragraph in text (a string), so every line is at most 'width'(=40) characters long. It returns a list of output lines.
draw.text()Draws the string at the given position. The complete syntax and list of parameters it accepts are defined below:
ImageDraw.Draw.text(XY, text, fill, font)
Parameters:
- XY — Top left corner of the text.
- text — Text to be drawn.
- fill — Color to use for the text.
- font — An ImageFont instance.
In the end, here's how your
Wallpaper.py will look like:
from PIL import Image, ImageDraw, ImageFont import textwrap def get_wallpaper(quote): # image_width image = Image.new('RGB', (800, 400), color=(0, 0, 0)) font = ImageFont.truetype("Arial.ttf", 40) text1 = quote text_color = (200, 200, 200) text_start_height = 100 draw_text_on_image(image, text1, font, text_color, text_start_height) image.save('created_image.png') def draw
Replying to the Mentions by Periodically Checking for Tweets
You have got the quote as well as an image that uses it. Now, the only thing left is to check for such tweets in which you are mentioned. Here, apart from only checking for mentions, you will also look for a specific keyword or a hashtag in it.
If a particular keyword/hashtag is found in the tweet, you will like and send a reply to that particular tweet.
In this case, you can go with "#qod" (short for "Quote Of the Day") as your keyword.
Coming back inside the
tweet_reply.py file, here's the function to achieve this:
def respondToTweet(last_id):: print("Already replied to {}".format(mention.id))
respondToTweet()function takes the last_id as its only parameter. This variable stores the last tweet to which you have responded and is used to only fetch mentions created after those already processed. So for the first time when you call the function, you will set its value as 0, and on the subsequent call, you will keep updating this value.
mentions_timeline()function in the Tweepy module is used to get the most recent tweets. The first parameter, i.e., the last_id, is used to fetch only the tweets newer than this specified ID. By default, it returns 20 most recent tweets. tweet_mode='extended' is used to get the string that contains the entire untruncated text of the Tweet. The default value is "compat" which returns text truncated to 140 characters only.
You then loop through all of those tweets in reversed order, i.e., oldest tweet first, and for each tweet mentioning you, the tweet is liked using
create_favorite(), which just takes the tweet_id as its parameter.
The reply to this tweet is sent using
update_status() which takes the Twitter handle for the original tweet author (You pass it using mention.user.screen_name), text content(if any), original tweet id on which you are replying, and finally the list of media which in your case is the single image you previously generated.
Saving Tweet ID to Avoid Repetition
You need to make sure to avoid replying to the same tweet again. For that, you will simply store the tweet Id to which you have last replied inside a text file
tweet_ID.txt, and you will only check for the tweets that are posted after that. This will be automatically handled by the method
mentions_timeline() as tweet Ids are time sortable.
And now, instead of passing the last_id yourself to the
respondToTweet(), you will pass the file containing this last id, and your function will fetch the Id from the file, and at the end, the file will get updated with the latest one.
Here's how the final version of the
respondToTweet() function will look like:
def respondToTweet(file): last_id = get_last_tweet(file): logger.info("Already replied to {}".format(mention.id)) put_last_tweet(file, new_id)
You will observe that two new utility functions are also added here which are
get_last_tweet() and
put_last_tweet().
get_last_tweet() takes a file name as a parameter and will simply fetch the Id stored inside this text file, whereas
put_last_tweet() along with the file name will take the latest tweet_id and update the file with this latest Id.
After putting everything together, here's how your complete
tweet_reply.py will look like:
import tweepy import json import requests import logging import Wallpaper import credentials consumer_key = credentials.API_key consumer_secret_key = credentials.API_secret_key access_token = credentials.access_token access_token_secret = credentials.access_token_secret auth = tweepy.OAuthHandler(consumer_key, consumer_secret_key) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) # For adding logs in application logger = logging.getLogger() logging.basicConfig(level=logging.INFO) logger.setLevel(logging.INFO) def get_quote(): url = "" try: response = requests.get(url) except: logger.info("Error while calling API...") res = json.loads(response.text) print(res) return res['content'] + "-" + res['author'] def get_last_tweet(file): f = open(file, 'r') lastId = int(f.read().strip()) f.close() return lastId def put_last_tweet(file, Id): f = open(file, 'w') f.write(str(Id)) f.close() logger.info("Updated the file with the latest tweet Id") return def respondToTweet(file='tweet_ID.txt'): last_id = get_last_tweet(file) mentions = api.mentions_timeline(last_id, tweet_mode='extended') if len(mentions) == 0: return new_id = 0 logger.info("someone mentioned me...") for mention in reversed(mentions): logger.info(str(mention.id) + '-' + mention.full_text) new_id = mention.id if '#qod' in mention.full_text.lower(): logger.info("Responding back with QOD to -{}".format(mention.id)) try: tweet = get_quote() Wallpaper.get_wallpaper(tweet) media = api.media_upload("created_image.png") logger.info("liking and replying to tweet") api.create_favorite(mention.id) api.update_status('@' + mention.user.screen_name + " Here's your Quote", mention.id, media_ids=[media.media_id]) except: logger.info("Already replied to {}".format(mention.id)) put_last_tweet(file, new_id) if __name__=="__main__": respondToTweet()
Deploying Bot to the Server
The final step would be to deploy your code to a server. In this section, you'll learn how you can deploy a Python application to AWS Elastic Beanstalk.
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. It reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Here's how you will proceed:
- Create an AWS Elastic Beanstalk environment for a Python application
- Create a simple Flask Application for your Twitter bot
- Upload your Flask App on AWS Elastic Beanstalk
- Debug errors via logs
Creating Elastic Beanstalk environment
Once you are signed in to your AWS account, go to the search panel at the top, type & select "Elastic Beanstalk," and click create a new application on the top right.
It will ask for your:
- Application name
- Application tags(not mandatory)
- Platform
- Application code
For tags, you can add up to 50 tags for the resources of your AWS Elastic Beanstalk applications. Tags can help you categorize resources. If you're managing multiple AWS application resources, then these tags can come quite useful.
For the platform, select "Python" from the dropdown, and it will fill the "platform branch" and "version" on its own.
For the application code, you will be uploading your code on elastic beanstalk later. So, for now, keep the "sample application" selected and hit the create application button. It should take a few minutes before it gets ready.
Creating a Flask app
While AWS is creating an environment for you, in the meantime, create a new file called
application.py and put the following code in it:
from flask import Flask import tweet_reply import atexit from apscheduler.schedulers.background import BackgroundScheduler application = Flask(__name__) @application.route("/") def index(): return "Follow @zeal_quote!" def job(): tweet_reply.respondToTweet('tweet_ID.txt') print("Success") scheduler = BackgroundScheduler() scheduler.add_job(func=job, trigger="interval", seconds=60) scheduler.start() atexit.register(lambda: scheduler.shutdown()) if __name__ == "__main__": application.run(port=5000, debug=True)
This is a simple flask app where you can create one function called
job(), which will run every minute using
apscheduler, which will eventually call the main function of your
tweet_reply.py file.
Please note that Elastic Beanstalk expects the name of your flask application object instance to be
application. If you will use any other name, then Elastic Beanstalk would fail to load your application.
Upload and configure the application to AWS
To configure the AWS resources and your environment, you can add Elastic Beanstalk configuration files (.ebextensions) to your web application's source code.
Configuration files are YAML formatted documents(JSON is also supported) with a .config file extension that is placed inside the folder named
.ebextensions and deploy along with your application source code.
For this project, create a new folder
.ebextensions in the source directory of your code and create a new file as
python.config in that folder. Copy the code below code in it:
files: "/etc/httpd/conf.d/wsgi_custom.conf": mode: "000644" owner: root group: root content: WSGIApplicationGroup %{GLOBAL}
You will also need to create a
requirements.txt file, which will contain all the external Python libraries that are required in your project for Elastic Beanstalk to configure the environment according to your application's need.
To create this file, simply run the following command:
bash
pip freeze > requirements.txt
Now you will need to zip all the files together to upload the flask app on Elastic Beanstalk. You should now have the following structure inside your project folder:
├── /.ebextensions │ └── app.config ├── application.py ├── tweet_reply.py ├── Wallpaper.py ├── requirements.txt ├── Arial.ttf └── tweet_ID.txt
Select all these files and folders mentioned and Zip all of them together.
Go back to your AWS application & click on Upload Code.
Choose your zip file and click deploy. Then wait until your application gets deployed and the health symbol turns green. If you completed all these steps successfully, your website link should take you to a page saying, "Follow @zeal_quote!"
How to view error logs in your environment
For debugging your application in case any error comes, just follow the below steps to view the logs for your app:
- In your Elastic Beanstalk dashboard, click on "Logs" from your environment section.
- It will take you to another page where you will get a dropdown after clicking "Request Logs". Select the "Last 100 Lines" for the most recent errors, or you also have an option to download "full logs" in case you're debugging the error that occurred a long time back.
- Click "Download" and it will take you to a page where you can view the last 100 lines of logs.
Wrapping Up
In this tutorial, you went through the complete process of developing and deploying a Twitter bot in Python.
You also learned about Tweepy, how to sign up as a Twitter developer to use its API, use Tweepy to invoke the Twitter API, and configure an AWS Elastic Beanstalk environment to deploy your Python application.
All the source code that has been used here is available in this GitHub Repository. To test the final working of the bot, you can look for @zeal_quote on Twitter.
Do check out the complete Tweepy API documentation to make more complex bots that are meaningful to your use case. | https://auth0.com/blog/how-to-make-a-twitter-bot-in-python-using-tweepy/ | CC-MAIN-2021-43 | refinedweb | 3,542 | 55.84 |
Text::MeCab - Alternate Interface To libmecab
use Text::MeCab; my $mecab = Text::MeCab->new({ rcfile => $rcfile, dicdir => $dicdir, userdic => $userdic, lattice_level => $lattice_level, all_morphs => $all_morphs, output_format_type => $output_format_type, partial => $partial, node_format => $node_format, unk_format => $unk_format, bos_format => $bos_format, eos_format => $eos_format, input_buffer_size => $input_buffer_size, allocate_sentence => $allocate_sentence, nbest => $nbest, theta => $theta, }); for (my $node = $mecab->parse($text); $node; $node = $node->next) { # See perdoc for Text::MeCab::Node for list of methods print $node->surface, "\n"; } # use constants use Text::MeCab qw(:all); use Text::MeCab qw(MECAB_NOR_NODE); # check what mecab version we compiled against? print "Compiled with ", &Text::MeCab::MECAB_VERSION, "\n";
libmecab () already has a perl interface built with it, so why a new module? I just feel that while a subtle difference, making the perl interface through a tied hash is just... weird.
So Text::MeCab gives you a more natural, Perl-ish way to access libmecab!
WARNING: Version 0.20000 has only been tested against libmecab 0.96.
mecab allows users to specify an output format, via --*-format options. These are respected ONLY if you use the format() method:
my $mecab = Text::MeCab->new({ output_format_type => "user", node_format => "%m %pn" }); for(my $node = $mecab->parse($text); $node; $node = $node->next) { print $node->format($mecab); }
Note that you also need to set the output_format_type parameter as well.
[NOTE: The memory management issue has been changed since 0.09]
libmecab's default behavior is such that when you analyze a text and get a node back, that node is tied to the mecab "tagger" object that performed the analysis. Therefore, when that tagger is destroyed via mecab_destroy(), all nodes that are associated to it are freed as well.
Text::MeCab defaults to the same behavior, so the following won't work:
sub get_mecab_node { my $mecab = Text::MeCab->new; my $node = $mecab->parse($_[0]); return $node; } my $node = get_mecab_node($text);
By the time get_mecab_node() returns, the Text::MeCab object is DESTROY'ed, and so is $node (actually, the object exists, but it will complain when you try to access the node's internals, because the C struct that was there has already been freed).
In such cases, use the dclone() method. This will copy the *entire* node structure and create a new Text::MeCab::Node::Cloned instance.
sub get_mecab_node { my $mecab = Text::MeCab->new; my $node = $mecab->parse($_[0]); return $node->dclone(); }
The returned Text::MeCab::Node::Cloned object is exactly the same as Text::MeCab::Node object on the surface. It just uses a different but very similar C struct underneath. It is blessed into a different namespace only because we need to use a different memory management strategy.
Do be aware of the memory issue. You WILL use up twice as much memory.
Also please note that if you try the first example, accessing the node *WILL* result in a segfault. This is *NOT* a bug: it's a feature :) While it is possible to control the memory management such that accessing a field in a node that has already expired results in a legal croak(), we do not go to the length to ensure this, because it will result in a performance penalty.
Just remember that unless you dclone() a node, then you are NOT allowed to access it when the original tagger goes out scope:
{ my $mecab = Text::MeCab->new; $node = $mecab->parse(...); } $node->surface; # segfault!!!!
Always remember to dclone() before doing this!
Belows is the result of running tools/benchmark.pl on my PowerBook:
daisuke@beefcake Text-MeCab$ perl tools/benchmark.pl Rate mecab text_mecab mecab 5.53/s -- -63% text_mecab 14.9/s 170% --
Creates a new Text::MeCab instance.
You can either specify a hashref and use named parameters, or you can use the exact command line arguments that the mecab command accepts.
Below is the list of accepted named options. See the man page for mecab for details about each option.
Parses the given text via mecab, and returns a Text::MeCab::Node object.
my $encoding = Text::MeCab::ENCODING
Returns the encoding of the underlying mecab library that was detected at compile time.
The version number from libmecab's mecab_version()
The version number detected at compile time of Text::MeCab.
__TEXT_MECAB_CONSTANTS_POD__
Path to mecab-config, if available.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See | http://search.cpan.org/~dmaki/Text-MeCab-0.20013/MeCab.src | CC-MAIN-2014-49 | refinedweb | 719 | 53.41 |
Local outbound address¶
Get or set the local IP address for outbound connections.
Synopsis¶
#include <ts/ts.h>
- TSReturnCode
TSHttpTxnOutgoingAddrSet(TSHttpTxn txnp, sockaddr const* addr)¶
Description¶
These functions concern the local IP address and port, that is the address and port on the Traffic Server side of outbound connections (network connections from Traffic Server to another socket).
The address and optional the port can be set with
TSHttpTxnOutgoingAddrSet(). This must be
done before the outbound connection is made, that is, earlier than the
TS_HTTP_SEND_REQUEST_HDR_HOOK.
A good choice is the
TS_HTTP_POST_REMAP_HOOK, since it is a hook that is always called, and it
is the latest hook that is called before the connection is made.
The addr must be populated with the IP address and port to be used. If the port is not
relevant it can be set to zero, which means use any available local port. This function returns
TS_SUCCESS on success and
TS_ERROR on failure.
Even on a successful call to
TSHttpTxnOutgoingAddrSet(), the local IP address may not match
what was passing addr if
session sharing is enabled.
Conversely
TSHttpTxnOutgoingAddrGet() retrieves the local address and must be called in the
TS_HTTP_SEND_REQUEST_HDR_HOOK or later, after the outbound connection has been established. It returns a
pointer to a
sockaddr which contains the local IP address and port. If there is no valid
outbound connection, addr will be
NULL. The returned pointer is a transient pointer
and must not be referenced after the callback in which
TSHttpTxnOutgoingAddrGet() was called. | https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSHttpTxnOutgoingAddrGet.en.html | CC-MAIN-2020-40 | refinedweb | 247 | 53.81 |
Google Test
Google Test and Google Mock are a pair of unit testing tools developed by Google.
Adding Google Test to your project
Download Google Test from the official repository and extract the contents of googletest-master into an empty folder in your project (for example, Google_tests/lib).
Alternatively, clone Google Test as a git submodule or use CMake to download it (instructions below will not be applicable in the latter case).
Create a CMakeLists.txt file inside the Google_tests folder: right-click it in the project tree and select .
Customize the following lines and add them into your script:# 'Google_test' is the subproject name project(Google_tests) # 'lib' is the folder with Google Test sources add_subdirectory(lib) include_directories(${gtest_SOURCE_DIR}/include ${gtest_SOURCE_DIR}) # 'Google_Tests_run' is the target name # 'test1.cpp tests2.cpp' are source files with tests add_executable(Google_Tests_run test1.cpp tests2.cpp) target_link_libraries(Google_Tests_run gtest gtest_main)
In your root CMakeLists.txt script, add the
add_subdirectory(Google_tests)command to the end, then reload the project.
When writing tests, make sure to add
#include "gtest/gtest.h"at the beginning of every .cpp file with your tests code.
Generate menu for Google Test
Google Test run/debug configuration
Although Google Test provides the
main() entry, and you can run tests as regular applications, we recommend using the dedicated Google Test run/debug configuration. It includes test-related settings and let you benefit from the built-in test runner, which is unavailable if you run tests as regular programs.
To create a Google Test configuration, go to Run | Edit Configurations, click
and select Google Test from the list of templates.
Specify the test or suite to be included in the configuration, or provide a pattern for filtering test names. Auto-completion is available in the fields to help you quickly fill them up:
Set wildcards to specify test patterns, for example:
In other fields of the configuration settings, you can set environment variables and command line options. For example, use Program arguments field to pass the
--gtest_repeatflag and run a Google test multiple times:
The output will look as follows:Repeating all tests (iteration 1) ... Repeating all tests (iteration 2) ... Repeating all tests (iteration 3) ... Google Test configuration, which is greyed out in the configurations list. To save a temporary configuration, select it in the dialog and press
:
Exploring results
When you run tests, the results (and the process) are shown in.
Test tree shows all the tests while they are being executed one by one. For parameterized tests, you will see the parameters in the tree as well. Also, the tree includes disabled tests (those with the
DISABLED prefix in their names) and marks them as skipped with the corresponding icon.
Skipping tests at runtime
You can configure some tests to be skipped based on a condition evaluated at runtime. For this, use the
GTEST_SKIP() macro.
Add the conditional statement and the
GTEST_SKIP() macro to the test you want to skip:
Use the Show ignored
icon to view/hide skipped tests in the Test Runner tree:
| https://www.jetbrains.com/help/clion/creating-google-test-run-debug-configuration-for-test.html | CC-MAIN-2021-04 | refinedweb | 503 | 54.22 |
> > An example of when an app might need to deal with such details
> > is when an app traces the error code. On a platform like
> > Windows or OS/2, it is probably meaningful to trace the
> > ap_status_t value unless it is an OS-specific error. In that
> > case, the application will likely want to trace the OS-specific
> > error value (and call it that) so that the user knows which
> > documentation to refer to for why that error might have
> > occurred.
> >
> > In other words, a message like
> >
> > "system error 4011 opening config file: permission denied"
> >
> > instead of
> >
> > "error 14011 opening config file: permission denied"
>
> Why would a user EVER see this? Apache does not currently print out the
> error number. If you are writing another program that does print out the
> error number, then fix this one of two ways. Implement the macro you have
> asked for, or modify ap_strerror to print out the error number.
Please look at "(13)" in the log message below.
[Mon Apr 10 13:56:20 2000] [crit] (13)Permission denied: make_sock:
could not bind to address 0.0.0.0 port 80
> > Your justification of forcing the app to do this seems to be based
> > on your statement that "it takes time to convert error codes to a
> > common code".
...
> The job of APR is to take portability off the programer's shoulders. I
> guess the best justification for doing it this way is that we started
> doing it the other way (returning a common error code), and we lose
> information. It isn't always possible to go through this transformation:
>
> platform err-code -> common error code -> platform error
> code.
I think your statement here is a very powerful argument. My point was
that the only argument listed in APRDesign wasn't very powerful. I
would suggest that you copy this text into APRDesign at a place you
find appropriate.
> > B. trivialities
> >
> > I have a small number of extremely minor editorial changes to your
> > text which, unless you prefer they be sent to you directly or posted
> > here, I will update in your text myself in a few days if nobody beats
> > me to it. (I can assure you that none of it is related to my
> > sometimes-ignorant use of commas :) )
>
> Please send them to the list.
(I just double checked "dependant" and found that it is a legal
variant of dependent, so that change is relatively asinine.)
Index: APRDesign
===================================================================
RCS file: /cvs/apache/apache-2.0/src/lib/apr/APRDesign,v
retrieving revision 1.8
diff -u -r1.8 APRDesign
--- APRDesign 2000/04/07 14:16:28 1.8
+++ APRDesign 2000/04/10 18:16:03
@@ -183,30 +183,30 @@
APR Error reporting
Most APR functions should return an ap_status_t type. The only time an
-APR function does not return an ap_status_t is if it absolutly CAN NOT
+APR function does not return an ap_status_t is if it absolutely CAN NOT
fail. Examples of this would be filling out an array when you know you are
not beyond the array's range. If it cannot fail on your platform, but it
could conceivably fail on another platform, it should return an ap_status_t.
Unless you are sure, return an ap_status_t. :-)
-All platform return errno values unchanged. Each platform can also have
+All platforms return errno values unchanged. Each platform can also have
one system error type, which can be returned after an offset is added.
-There are five types of error values in APR, each with it's own offset.
+There are five types of error values in APR, each with its own offset.
Name Purpose
0) This is 0 for all platforms and isn't really defined
anywhere, but it is the offset for errno values.
(This has no name because it isn't actually defined,
- but completeness we are discussing it here).
-1) APR_OS_START_ERROR This is platform dependant, and is the offset at which
+ but for completeness we are discussing it here).
+1) APR_OS_START_ERROR This is platform dependent, and is the offset at which
APR errors start to be defined. (Canonical error
values are also defined in this section. [Canonical
error values are discussed later]).
-2) APR_OS_START_STATUS This is platform dependant, and is the offset at which
+2) APR_OS_START_STATUS This is platform dependent, and is the offset at which
APR status values start.
-4) APR_OS_START_USEERR This is platform dependant, and is the offset at which
+4) APR_OS_START_USEERR This is platform dependent, and is the offset at which
APR apps can begin to add their own error codes.
-3) APR_OS_START_SYSERR This is platform dependant, and is the offset at which
+3) APR_OS_START_SYSERR This is platform dependent, and is the offset at which
system error values begin.
All of these definitions can be found in apr_errno.h for all platforms. When
@@ -224,13 +224,13 @@
if (CreateFile(fname, oflags, sharemod, NULL,
createflags, attributes,0) == INVALID_HANDLE_VALUE
- return (GetLAstError() + APR_OS_START_SYSERR);
+ return (GetLastError() + APR_OS_START_SYSERR);
These two examples implement the same function for two different platforms.
Obviously even if the underlying problem is the same on both platforms, this
will result in two different error codes being returned. This is OKAY, and
is correct for APR. APR relies on the fact that most of the time an error
-occurs, the program logs the error and continues, it does not try to
+occurs, the program logs the error and continues and does not try to
programatically solve the problem. This does not mean we have not provided
support for programmatically solving the problem, it just isn't the default
case. We'll get to how this problem is solved in a little while.
@@ -241,14 +241,14 @@
and are self explanatory.
No APR code should ever return a code between APR_OS_START_USEERR and
-APR_OS_START_SYSERR, those codes are reserved for APR applications.
+APR_OS_START_SYSERR; those codes are reserved for APR applications.
To programmatically correct an error in a running application, the error codes
need to be consistent across platforms. This should make sense. To get
consistent error codes, APR provides a function ap_canonicalize_error().
This function will take as input any ap_status_t value, and return a small
subset of canonical APR error codes. These codes will be equivalent to
-Unix errno's. Why is it a small subset? Because we don't want to try to
+Unix errnos. Why is it a small subset? Because we don't want to try to
convert everything in the first pass. As more programs require that more
error codes are converted, they will be added to this function.
@@ -288,7 +288,7 @@
make syscall that fails
convert to common error code
return common error code
- decide execution basd on common error code
+ decide execution based on common error code
Using option 2:
@@ -306,9 +306,9 @@
char *ap_strerror(ap_status_t err)
{
if (err < APR_OS_START_ERRNO2)
- return (platform dependant error string generator)
+ return (platform dependent error string generator)
if (err < APR_OS_START_ERROR)
- return (platform dependant error string generator for
+ return (platform dependent error string generator for
supplemental error values)
if (err < APR_OS_SYSERR)
return (APR generated error or status string)
--
Jeff Trawick | trawick@ibm.net | PGP public key at web site:
Born in Roswell... married an alien... | http://mail-archives.apache.org/mod_mbox/httpd-dev/200004.mbox/%3C200004101818.OAA16443@k5.localdomain%3E | CC-MAIN-2014-23 | refinedweb | 1,187 | 62.58 |
X is a fully prosperous country, especially known for its complicated transportation networks. But recently, for the sake of better controlling by the government, the president Fat Brother thinks it’s time to close some roads in order to make the transportation system more effective.
Country X has N cities, the cities are connected by some undirected roads and it’s possible to travel from one city to any other city by these roads. Now the president Fat Brother wants to know that how many roads can be closed at most such that the distance between any two cities in country X does not change. Note that the distance between city A and city B is the minimum total length of the roads you need to travel from A to B.
Input
The first line of the date is an integer T (1 <= T <= 50), which is the number of the text cases.
Then T cases follow, each case starts with two numbers N, M (1 <= N <= 100, 1 <= M <= 40000) which describe the number of the cities and the number of the roads in country X. Each case goes with M lines, each line consists of three integers x, y, s (1 <= x, y <= N, 1 <= s <= 10, x is not equal to y), which means that there is a road between city x and city y and the length of it is s. Note that there may be more than one roads between two cities.
For each case, output the case number first, then output the number of the roads that could be closed. This number should be as large as possible.
See the sample input and output for more details.
2 2 3 1 2 1 1 2 1 1 2 2 3 3 1 2 1 2 3 1 1 3 1Sample Output
Case 1: 2 Case 2: 0
#include<iostream> #include<cstring> #include<algorithm> using namespace std; #define ll long long const int N=100+10; int dis[N][N],pre[N][N]; bool vis[N][N]; const int inf=0x3f3f3f3f; int main(){ int t; //freopen("in.txt","r",stdin); scanf("%d",&t); int z=1; while(t--){ int n,m; ll ans=0; scanf("%d%d",&n,&m); fill(&dis[0][0],&dis[0][0]+N*N,inf); memset(vis,0,sizeof(vis)); memset(pre,0,sizeof(pre)); for(int i=0;i<m;i++){ int x,y,z; scanf("%d%d%d",&x,&y,&z); if(dis[x][y]!=inf)ans++; dis[x][y]=dis[y][x]=min(dis[x][y],z); vis[x][y]=vis[y][x]=true; pre[y][x]=y; pre[x][y]=x; } for(int k=1;k<=n;k++) for(int i=1;i<=n;i++) for(int j=1;j<=n;j++) if(i!=j&&i!=k&j!=k) { if(dis[i][j]>=dis[i][k]+dis[k][j]){ dis[i][j]=dis[i][k]+dis[k][j]; pre[i][j]=k; } } for(int i=1;i<=n;i++) for(int j=i+1;j<=n;j++){ if(pre[i][j]!=i&&vis[i][j]) ans++; } printf("Case %d: %lld\n",z++,ans); } } | https://blog.csdn.net/ujn20161222/article/details/80041158 | CC-MAIN-2019-04 | refinedweb | 526 | 65.86 |
import "upper.io/db.v3/lib/reflectx"
Package reflectx implements extensions to the standard reflect lib suitable for implementing marshaling and unmarshaling packages. The main Mapper type allows for Go-compatible named attribute access, including accessing embedded struct attributes and the ability to use functions and struct tags to customize field names.
Deref is Indirect for reflect.Types
FieldByIndexes returns a value for a particular struct traversal.
FieldByIndexesReadOnly returns a value for a particular struct traversal, but is not concerned with allocating nil pointers because the value is going to be used for reading and not setting.
ValidFieldByIndexes returns a value for a particular struct traversal.
type FieldInfo struct { Index []int Path string Field reflect.StructField Zero reflect.Value Name string Options map[string]string Embedded bool Children []*FieldInfo Parent *FieldInfo }
A FieldInfo is a collection of metadata about a struct field.
Mapper is a general purpose mapper of names to struct fields. A Mapper behaves like most marshallers, optionally obeying a field tag for name mapping and a function to provide a basic mapping of fields to names.
NewMapper returns a new mapper which optionally obeys the field tag given by tagName. If tagName is the empty string, it is ignored.
NewMapperFunc returns a new mapper which optionally obeys a field tag and a struct field name mapper func given by f. Tags will take precedence, but for any other field, the mapped name will be f(field.Name)
NewMapperTagFunc returns a new mapper which contains a mapper for field names AND a mapper for tag values. This is useful for tags like json which can have values like "name,omitempty".
FieldByName returns a field by the its mapped name as a reflect.Value. Panics if v's Kind is not Struct or v is not Indirectable to a struct Kind. Returns zero Value if the name is not found.
FieldMap returns the mapper's mapping of field names to reflect values. Panics if v's Kind is not Struct, or v is not Indirectable to a struct kind.
FieldsByName returns a slice of values corresponding to the slice of names for the value. Panics if v's Kind is not Struct or v is not Indirectable to a struct Kind. Returns zero Value for each name not found.
TraversalsByName returns a slice of int slices which represent the struct traversals for each mapped name. Panics if t is not a struct or Indirectable to a struct. Returns empty int slice for each name not found.
TypeMap returns a mapping of field strings to int slices representing the traversal down the struct to reach the field.
ValidFieldMap returns the mapper's mapping of field names to reflect valid field values. Panics if v's Kind is not Struct, or v is not Indirectable to a struct kind.
type StructMap struct { Tree *FieldInfo Index []*FieldInfo Paths map[string]*FieldInfo Names map[string]*FieldInfo }
A StructMap is an index of field metadata for a struct.
GetByPath returns a *FieldInfo for a given string path.
GetByTraversal returns a *FieldInfo for a given integer path. It is analogous to reflect.FieldByIndex.
Package reflectx imports 5 packages (graph) and is imported by 4 packages. Updated 2018-11-03. Refresh now. Tools for package owners. | https://godoc.org/upper.io/db.v3/lib/reflectx | CC-MAIN-2018-51 | refinedweb | 540 | 57.47 |
==== version history for MailTools] version 2.20: Mon 22 Jan 18:14:44 CET 2018 Improvements: - rewrite doc syntax to my current standard style. - text corrections rt.cpan.org#123823 [Ville Skyttä] - text corrections rt.cpan.org#123824 [Ville Skyttä] - convert to GIT - move to GitHUB version 2.19: Tue 22 Aug 13:30:41 CEST 2017 Improvements: - block namespace MailTools rt.cpan.org#120905 [Karen Etheridge] version 2.18: Wed 18 May 23:52:30 CEST 2016 Fixes: - Mail::Header should accept \r in empty line which ends the header. rt.cpan.org#114382 [Ricardo Signes] version 2.17: Wed 11 May 17:20:21 CEST 2016 Fixes: - Mail::Header should only accept totally empty lines as header terminator, not to break MIME::Tools regression tests. rt.cpan.org#113918 [David Cantrell] version 2.16: Mon 18 Apr 17:58:23 CEST 2016 Fixes: - Mail::Header regression in parsing files. rt.cpan.org#113874 [John L Berger] version 2.15: Mon 18 Apr 13:55:30 CEST 2016 Fixes: - Mail::Header continue reading after wrongly folders line rt.cpan.org#113464 [Mark Sapiro] Improvements: - Mail::Mailer::open call of exec() explained [Malte Stretz] - fix example in Mail::Address [peter77] version 2.14: Fri Nov 21 17:12:42 CET 2014 Fixes: - threads and Mail::Field initiation rt.cpan.org#99153 [Autumn Lansing] and [Arne Becker] Improvements: - warn when loading of some Mail::Field::* fails [Arne Becker] version 2.13: Sun Jan 5 18:52:25 CET 2014 Fixes: - optional 'from' and 'on' component in in-reply-to are comments rt.cpan.org#89371 [Ward Vandewege] - mailcap \\\\ -> \\ rt.cpan.org#89802 [Kevin Ryde] Improvements: - fix typos rt.cpan.org#87188 [D Steinbrunner]] version 2.11: Wed Aug 29 09:09:47 CEST 2012 Fixes: - typo in Mail::Mailer::smtp, which only shows up in >5.14 [cpantesters].org#75975 [Rolf G] version 2.09: Sat Feb 25 14:47:39 CET 2012 Improvements: - remove dependency to Test::Pod by moving 99pod.t from t/ to xt/ as result of rt.cpan.org#69918 [Martin Mokrejs] version 2.08: Wed Jun 1 13:55:34 CEST 2011 Fixes: - respect errors on closing a Mail::Mailer::smtp/::smtps connection. [Tristam Fenton-May] - Mail::Internet should accept Net::SMTP::SSL as well. rt.cpan.org#68590 [Jonathan Kamens] Improvements: - document that Mail::Mailer::smtps needs Authen::SASL [Marcin WMP Janowski] version 2.07: Fri Oct 1 12:39:43 CEST 2010 Improvements: - update README: MailTools 2.* require 5.8.1 rt.cpan.org#61753 [Alexandre Bouillot] - add "MAIL FROM" to Mail::Mailer::smtp, to be able to communicate with Comcast [David Myers] version 2.06: Tue Jan 26 10:01:22 CET 2010 Improvements: - express more clearly that Authen::SASL needs to be installed manually if you need the functionality - support for smtps via Net::SMTP::SSL, by [Maciej Żenczykowski] version 2.05: Fri Dec 18 22:39:21 CET 2009 Fixes: - no de-ref error when index out of range in Mail::Header::get() [Bob Rogers] - repaired fixed selection of smtp for non-unix systems. Improvements: - do not run pod.t in devel environment. - set default output filename for Mail::Mailer::testfile::PRINT [Kaare Rasmussen[ - warn when no mailers were found. rt.cpan.org#52901 [Christoph Zimmermann] version 2.04: Tue Jul 29 11:44:26 CEST 2008 Fixes: - Mail::Field::_require_dir complained on 5.10 about a closed dirhandle. rt.cpan.org#37114 [Manuel Hecht] - Bcc line removed before collecting addresses. [Jørgen Thomsen] Improvements: - add "die" to "close()" in synopsis of Mail::Send and Mail::Mailer. rt.cpan.org#36103 [Ed Avis] version 2.03: Mon Apr 14 11:13:31 CEST 2008 Fixes: - Netware needs to use smtp as well [Günrich Haubensak] and [Slaven Rezic] Improvements: - use 3-arg open() in Mail::Util. rt.cpan.org#20726 [Steve@sliug] and [Paul@city-fan] version 2.01: Wed Nov 28 10:48:24 CET 2007 Changes: - Remove work-around for Perl 5.8.0. unicode bug from Mail::Address::_extract_name(). Result of rt.cpan.org#30661 [Josh Clark] - Requires on Perl 5.8.1 minimum Fixes: - Mail::Mailer::testfile now also shows Cc destinations, the setting of 'outfile' now works, and it will produce an error when the data cannot be written. All thanks to [Slaven Rezic] version 2.00_03: Tue Sep 25 12:27:28 CEST 2007 - folding of header fields sometimes ended prematurely. Reported by [Anthony W. Kay] - add $sender as 4th argument to Mail::Mailer::*::exec() where missing. Discovered by [David Hand] - add Date::Format and Date::Parse to Makefile.PL. version 2.00_02: Sat Jul 21 12:29:20 CEST 2007 - parts of the documentation were lost, discovered by [Ricardo Signes] - rt.cpan.org #28093 smtp timeout check for local mail server can have short timeout. Patch by [Alexandr Ciornii] - rt.cpan.org #28411 syntax error in Mail::Mailer::smtp reported by [Andreas Koenig] version 2.00_01: Wed Jun 20 14:42:35 CEST 2007 - reorganized installation of MailTools, in a modern way. This may break installation on very old releases of Perl. - added t/pod.t - restructured most code, no functional changes. - added and cleaned a lot of documentation, using OODoc to generate nice manuals in POD and HTML. - extracted Mail::Field::Generic from Mail::Field - added mysteriously missing Mail::Field::AddrList::addr_list() version 1.77: Fri May 11 14:16:01 CEST 2007 - fixed syntax error in qmail.pm, patch by [Alexey Tourbin] also reported by [Volker Paulsen] - die if qmail's exec fails. - require Data::Format - corrected header field folding according to rfc2822, which may break some ancient (poor) applications. Patch by [Christopher Madsen] version 1.76: Tue Apr 10 09:25:29 CEST 2007 - The tag (field label) casing is "normalized" which is not required (as the comment in the code told), but a mis- feature. The feature will not change, to avoid breaking existing code. Original report by [Matt Swift] - Do not ignore unknown argument to Mail::Internet::new(), but complain about it [JPBS] - Document that the \n is still after a header line, but folding is removed. Suggested by [Roberto Jimenoca] - Document that unfolding is too greading, taking all leading blanks where only one should be taken. Suggested by [Roberto Jimenoca] version 1.75: Wed Jun 14 15:30:25 CEST 2006 - [Mike Lerley] reported that environment variables are not thread-safe in mod_perl. Therefore, we must pass the sender of the message explicitly on qmail's command-line. His addapted patch included. version 1.74: Tue Feb 28 08:39:14 CET 2006 - Finally fixed exec with SMTP, with help from [Jun Kuriyama] version 1.73: Sat Jan 21 09:54:13 CET 2006 - Added 'use Mail::Util' to Mail::Mailer::testfile to produce mailaddress(); - Improved exec() call from version 1.67 a little more. Let's hope that SMTP works again. version 1.72: Tue Jan 17 09:04:37 CET 2006 - release 1.70 broke SMTP sending. Detected by [J Meyers] version 1.71: Thu Jan 5 11:20:52 CET 2006 - grrrr tests failed version 1.70: Thu Jan 5 11:17:05 CET 2006 - Mail::Mailer::testfile.pm adds "from" display to trace output. [wurblzap] - fixed regex in Mail::Address [J Meyers] version 1.68: Thu Jan 5 10:29:25 CET 2006 - Updated copyright year. - Removed 'use locale' from Mail::Address, which tainted the parsed address. E-mail addresses are ASCII, so this 'locale' thing seems flawed. - $adr->name will refuse charset-encoded names. Found by [kappa] - Improve parse-regexes in Mail::Address. By [J Meyers] and me. version 1.67: Thu Mar 31 12:05:31 CEST 2005 - Mail::Mailer unfolded the header before sending, which is incorrect. Reported by [Byron Jones] - Mail::Header refolded already folded lines destroying blanks. Signaled by [Byron Jones] - Mail::Utils::maildomain now understands DM$m. Patch by [Nate Mueller] - When a Mail::Mailer exec() failes, DESTROY is called on all parential files. Not anymore thanks to [Randall Lucas] version 1.66: Thu Jan 20 10:16:10 CET 2005 - Extended explanation that Mail::Address is limited. - Added examples/mail-mailer.pl, contributed by [Bruno Negrão] - use Mail::Mailer qw(mail) sets default mailer. Doc update by [Slavan Rezic] - Mail::Mailer::smtp now can authenticate SSL [Aaron J. Mackey] version 1.65: Wed Nov 24 15:43:17 CET 2004 - Remove "minimal" comments from Mail::Address - [Dan Grillo] suggested some improvements to Mail::Address::name(), and some more were added. - [Slavan Rezic] small typo. version 1.64: Tue Aug 17 22:24:22 CEST 2004 - CPAN failed to index 1.63 correctly, so hopefully it will work now. version 1.63: Mon Aug 16 17:28:15 CEST 2004 - [Craig Davison] Fixed date format in Mail::Field::Date to comply to the RFC - [Alex Vandiver] patched the email address parser to be able to understand a list of addresses separated by ';', as Outlook does. The ';' is the group separator, which was not understood by MailTools before, but valid according to the RFCs. - [Torsten Luettgert] found that field labels like '-' where not beautified correctly. - [Slavan Rezic] Updated doc in Mail::Mailer: referred to $command which doesn't mean anything, and "testfile" is working differently. - [chris] Mail::Message::Field::wellformedName() will upper-case *-ID as part in the fieldname. version 1.62: Wed Mar 24 13:29:27 CET 2004 - [Reuven M. Lerner], removed warning by Mail::Address::host() when no e-mail address is provided. - [Ville Skytta] contributed another Mail::Mailer::testfile fix version 1.61: Wed Mar 10 10:51:44 CET 2004 - [Erik Van Roode] Mail::Mailer::test.pm -> Mail::Mailer::testfile.pm - [Jérôme Dion] corrected the folding of lines: folds start only with one blank according to rfc2822. - Added a big warning against automatic sender email address detection as provided by Mail::Util::mailaddress(). Please explicitly set MAILADDRESS. This after a discussion with [Wolfgang Friebel]. - Mail::Address->format should quote phrases with weird character. Patched based on patch by [Marc 'HE' Brockschmidt] - [Ruslan U. Zakirov] reported confusing error message when no MailerType was specified. - [Steve Roberts] fixed folding to produce longer lines. version 1.60: Wed Sep 24 09:20:30 CEST 2003 - [Henrique Martins] found that enclosing parenthesis where not correctly stripped when processing a Mail::Address. - [Tony Bowden] asked for a change in Mail::Address::name, where existing (probably correct) capitization is left intact. The _extract_name() can be called as method, is needed, such that it can be extended. version 1.59: Wed Aug 13 08:13:00 CEST 2003 - Patch by [Shafiek Rasdien] which adds Mail::Internet::smtpsend option MailFrom. - [Ziya Suzen] extended Mail::Mailer::test to provide more test information. - Added SWE (Sender Waranted E-mail) as abbreviation in field names which is always in caps, on request by [Ronnie Paskin] - Added SOAP and LDAP as abbreviation in field names which is always in caps. version 1.58: Tue Jan 14 14:42:29 CET 2003 - And again utf8 [Philip Molter] version 1.57: Tue Jan 14 09:47:46 CET 2003 - Added myself to the copyright notices... dates needed an update as well. - Typos in Mail::Internet [Florian Helmberger] - More tries to program around perl5.8.0's uc/lc-utf8 bugs in regexps [Autrijus Tang and Philip Molter] version 1.56: Mon Jan 6 17:13:17 CET 2003 - And again, the patches of Autrijus had to be adapted to run on a perl 5.6.1 installation. Thanks to [Philip Molter] version 1.55: Mon Jan 6 08:05:58 CET 2003 - One explicit utf8::downgrade for 5.8.0, this time for Mail::Address by [Autrijus Tang]. version 1.54: Mon Jan 6 08:00:00 CET 2003 - Another try to avoid the utf8 problems, this time by [Philip Molter] - Two explicit utf8::downgrades for 5.8.0, this time for Mail::Field by [Autrijus Tang]. version 1.53: Mon Dec 9 17:53:27 CET 2002 - New try on work-around for bug in perl 5.8.0 unicode \U within s/// Patched in Mail::Header by [Autrijus Tang] version 1.52: Fri Nov 29 13:52:00 CET 2002 - Work-around for bug in perl 5.8.0 unicode \U within s/// Patched in Mail::Header by [Autrijus Tang] version 1.51: Tue Oct 29 14:25:28 CET 2002 -] version 1.50: Wed Sep 4 00:38:49 CEST 2002 -] version 1.49: Wed Aug 28 08:36:58 CEST 2002 - t/internet.t defaults $ENV{LOGNAME} to avoid warnings in tests when that variable is not defined. [Chromatic] - Mail::Mailer::_clean_up left an extra space behind each header line. Patched by [Robert Spier] - Mail::Mailer::_clean_up now also trims folded headerlines on more than two lines. version 1.48: Wed Aug 7 22:54:56 CEST 2002 - Mail::Mailer::test only worked in UNIX, because it used the 'test', 'sh' and 'cat' command. [Matt Selsky] provided a patch to remove these dependencies. It may not work on ancient perl versions, but that is not really a problem for a testing facility. - The fix for nested comments in Mail::Address's, which went in a long time ago, broke the parser. As example "Mark Overmeer <mailtools@overmeer.net> (mailtools maintainer)" was parsed into two separate objects.... wrong. [Nicholas Oxhøj] reversed the patch. version 1.47: Fri Jul 5 12:02:55 CEST 2002 - Mail::Mailer::_cleanup_headers unfolds the header lines, but forgot to remove the indentation blanks as was discovered by [Meng Weng Wong] - Mail::Cap::new has two new options: filename => FILENAME, which is just long for FILENAME only take => 'ALL', to include all mail-cap files, not only the first one found. Contributed by [Oleg Muravskiy] version 1.46: Wed May 29 15:08:44 CEST 2002 - [Philip Molter] discovered my typo in Mail/Mailer/rfc822.pm which forced me to release a new version.... version 1.45: Thu May 23 10:15:59 CEST 2002 - [Mark D. Anderson] Add Content-Disposition to the list of structured header fields in Mail::Header. - [David Weeler] Added darwin to `mail' versions which require '-I' in Mail::Mailer. - [Leon Avery] updated Mail/Mailer/rfc822.pm to be more careful with multi-lined, multi-occurrence headers. - [Drieux] small fix in Mail/Mailer/smtp.pm which enables the passing-on of args to Net::SMTP. - {Mark Overmeer] Put a message about Mail::Box in Mail::Internet version 1.44 Sat Mar 23 10:16:47 CET 2002 - [Andreas Marcel Riechert] add -I to mailx for netbsd and openbsd too. - [Nate Mueller] Do respect user's X-Mailer in Mail::Internet above own line. - [Alexey Egorov] Header-line without body lost line-separator in Mail::Header.pm - [Bo Adler] and [Iosif Fettich] Found that I removed a blank before 'sub smtpsend' which caused AutoSplit to misbehave. version 1.43: Fri Feb 8 09:43:25 CET 2002 - [Jason Burnett] Added debug option for Net::SMTP for Mail::Mail::smtp. - [Slavan Rezic] + [Jonathan Stowe] Added eval around getpwuid, to avoid croak on Windows. - [Slavan Rezic] minor doc update. The documentation is still poor. - A lot of people complaint (and that after so many years that the module is active) about folder lines within words or strings. The mistake in the original implementation was that it tried to fold too hard; folding is a service, but no requirement. So: that overly active folding is removed now. version 1.42: Mon Dec 10 19:22:01 CET 2001 - Moved examples from bin/ to examples/, so people may be able to find them. - Mail::Util now also tries sendmail option S for domainname. Patched by [Todd R. Eigenschink] Included Debian changes by [Steve Kowalik]: - Added Mail::Mailer::qmail version 1.41: Wed Nov 14 10:35:57 CET 2001 - Mail::Util::maildomain did not expand variables. Fixed the regular expression. Reported by [Jean-Damien Durand] - [Henrik Gemal] wished better warnings in Mail::Address::parse, which are provided now. - [Lucas Nussbaum] reported incorrect folding of unstructured header lines. The whole idea of folder unstructured fields is flawed, as far as I know, but anyway noone apparently had sufficient long subject fields to find-out ;) Fixed in Mail::Mailer. version 1.40: Fri Aug 24 20:15:30 CEST 2001 - mailaddress defaults to username, not gcos in Mail/Util.pm Patched by [Vivek Khera] - Increased all version-numbers to show that maintainer-address did change. Suggested by [Tassilo v Parseval] All packages in this bundle with have the same version!!! The highest number used was 1.33. version 1.16: Wed Aug 8 11:28:26 CEST 2001 by Mark Overmeer <mailtools@overmeer.net> From now on MailTools will be maintained by Mark Overmeer <mailtools@overmeer.net> - Updated all manual-pages to include address of new maintainer. - Prohibition to modify header should be respected in Mail::Header. Patch by [Tatsuhiko Miyagawa] - Securely close socket in Mail::Mailer::smtp. Patch by [Heikki Korpela] - Fixed "bad file-descriptor" errors in Mail::Mailer::smtp. Patch by [Aaron J Mackey] - Some long header-lines caused the following line in the header to be indented too. This should be fixed. Reported by [Simon Cozens] - Small modifications to Mail::Mailer should make the module work for os2. Patch by [Ilya Zakharevich] - Fix to be able to specify an index at the first addition of a header-line to the Mail::Header structure. Patch by [Lucas Fisher] Change 583 on 2000/09/04 by <gbarr@pobox.com> (Graham Barr) Mail::Address - Remove some unneeded \'s in regex patterns (to keep 5.7.0 quiet) Change 582 on 2000/09/04 by <gbarr@pobox.com> (Graham Barr) Mail::Alias - Removed. Now distributed separatly and maintained by Tom Zeltwanger (ZELT) Change 581 on 2000/09/04 by <gbarr@pobox.com> (Graham Barr) Mail::Mailer - Remove newlines from the lines in the Mail::Header object Change 575 on 2000/08/24 by <gbarr@pobox.com> (Graham Barr) Mail::Mailer::mail - Fix problems with open(STDERR) when using under FCGI Change 571 on 2000/08/24 by <gbarr@pobox.com> (Graham Barr) Mail::Mailer - Deafulr Win32 to smtp Change 521 on 2000/05/16 by <gbarr@pobox.com> (Graham Barr) Mail::Internet - Added Debug and Port options to smtpsend Change 520 on 2000/05/16 by <gbarr@pobox.com> (Graham Barr) Mail::Header - Another fix for badly formed headers in _fold_line - get MIME right in _tag_case Change 519 on 2000/05/16 by <gbarr@pobox.com> (Graham Barr) t/mailcap.t - Do not assume user has perl in $PATH Change 502 on 2000/05/02 by <gbarr@pobox.com> (Graham Barr) Mail::Field - readdir returns files in the correct case, duh! Change 501 on 2000/04/30 by <gbarr@pobox.com> (Graham Barr) Mail::Header * Don't attempt to do a structured fold on non-structured header lines Change 498 on 2000/04/30 by <gbarr@pobox.com> (Graham Barr) Mail::Cap - Fix pod typo Change 490 on 2000/04/14 by <gbarr@pobox.com> (Graham Barr) Remove test in t/internet.t that sends an Email Change 457 on 2000/03/29 by <gbarr@pobox.com> (Graham Barr) Release 1.14 Change 456 on 2000/03/29 by <gbarr@pobox.com> (Graham Barr) Makefile.PL - Added PPD stuff Change 429 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) Makefile.PL changes Change 428 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) Mail::Mailer::sendmail - Remove @$to from command line as we pass -t Change 427 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) Mail::Send - to,cc and bcc should pass addresses as a list not as single string of , separated addresses Change 426 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) Mail::Mailer::smtp - override the close method from Mail::Mailer Change 425 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) Mail::Internet - _prephdr needed to use Mail::Util Change 424 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) Mail::Field - Generic packages do not have a file to require, so only require if !$pkg->can('stringify') Change 416 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) undef warning fix in Mail::Mailer::is_exe Change 415 on 2000/03/28 by <gbarr@pobox.com> (Graham Barr) Changes from <tobiasb@funcom.com> (Tobias Brox) Mail::Internet - now have a send sub for sending emails Mail::Header - now have a header_hashref sub which allows modification of the object through hashrefs Change 360 on 2000/02/16 by <gbarr@pobox.com> (Graham Barr) Mail::Address - Fix for nested comments Change 350 on 2000/01/26 by <gbarr@pobox.com> (Graham Barr) Mail::Header - combine() should just return the line if there is only one Change 349 on 2000/01/26 by <gbarr@pobox.com> (Graham Barr) Mail::Header - Fix bug in fold_line for when a header line only contains a tag Change 335 on 1999/09/24 by <gbarr@pobox.com> (Graham Barr) Mail::Internet - Added Hello option to smtpsend() Change 292 on 1999/03/31 by <gbarr@pobox.com> (Graham Barr) Release 1.13 Change 291 on 1999/03/31 by <gbarr@pobox.com> (Graham Barr) Mail::Header - fold_line now skips X-Face lines Mail::Filter - Applied patch from <pncu_ss@uhura.cc.rochester.edu> (Josh Pincus) * Added return value to _filter() so that the function returns the result of the last subroutine in the list of filters. (the manpage specifies that one should have been able to do this originally.) Mail::Mailer - Treat VMS the same as MacOS as neither have sendmail et al. Mail::Mailer::smtp - Server can now be specified to Mail::Mailer contructor Mail::Alias, Mail::Util,Mail:Internet, Mail::Cap - local-ize some globals used Mail::Cap - check in $ENV{HOME} is defined Mail::Address - Fix capitalization problems with names like "Ließegang" Change 290 on 1999/03/31 by <gbarr@pobox.com> (Graham Barr) Increment version Change 213 on 1998/10/22 by <gbarr@pobox.com> (Graham Barr) Mail::Address - Fix use of uninitialized warning Change 190 on 1998/09/26 by <gbarr@pobox.com> (Graham Barr) Update Makefile.PL for release 1.12 Change 189 on 1998/09/26 by <gbarr@pobox.com> (Graham Barr) Mail::Internet - Added options to smtpsend Mail::Send - Updated docs for 'smtp' Change 188 on 1998/09/26 by <gbarr@pobox.com> (Graham Barr) Mail::Header - Fix _fold_line for lines which contain quoted strings Change 172 on 1998/07/10 by <gbarr@pobox.com> (Graham Barr) Mail::Address - avoid warnings if undef is passed to parse() Change 169 on 1998/07/04 by <gbarr@pobox.com> (Graham Barr) Mail::Address - tweak to format to ensure comment is delimeted by () - typo in docs Change 168 on 1998/07/04 by <gbarr@pobox.com> (Graham Barr) - Documentation update to Mail::Internet Change 166 on 1998/07/03 by <gbarr@pobox.com> (Graham Barr) Mail::Cap - Fixed mailcap search so it works on MacOS Change 165 on 1998/07/03 by <gbarr@pobox.com> (Graham Barr) Mail::Mailer - Change to use Mail::Util::mailaddress Mail::Util - updated mailaddess to be aware of MacOS Change 164 on 1998/06/30 by <gbarr@pobox.com> (Graham Barr) Mail::Header - fix read(0 and extract() not to require non-whitespace characters on continuation lines, a single leading whitespace char is all that is needed. Change 163 on 1998/06/30 by <gbarr@pobox.com> (Graham Barr) - Applied patch from Roderick Schertler to - Two places in Mail::Header are changed so they don't use $'. - A Mail::Header::as_string method is added. - Mail::Internet::as_string and as_mbox_string methods are added. The mbox variant does encoding appropriate for appending a message to a Unix mbox file. - Tests for the three new methods are added. Change 162 on 1998/06/30 by <gbarr@pobox.com> (Graham Barr) Mail::Util - tweak to what maildomain looks for in the sendmail config file Sun Jun 28 1998 <gbarr@pobox.com> (Graham Barr) Mail::Address - Split out real handlers into their own .pm files - Added Mail::Mailer::smtp, this is the default for MacOS Wed Jun 17 1998 <gbarr@pobox.com> (Graham Barr) Mail::Mailer - Applied patch from Slaven Rezic <eserte@cs.tu-berlin.de> to support FreeBSD properly Mail::Address - Applied patch from Chuck O'Donnell to improve name extraction t/extract.t - change for new extraction Sat Apr 4 1998 <gbarr@pobox.com> (Graham Barr) bin/*.PL - change "#!$Config{'scriptdir'}/perl -w\n" ot $Config{'startperl'}," -w\n" Thu Mar 19 1998 <gbarr@pobox.com> (Graham Barr) Mail::Field - modified so it works with perl < 5.004 Makefile.PL - removed code to prevent installation of Mail::Field Wed Feb 18 1998 <gbarr@pobox.com> (Graham Barr) Mail::Header - Added \Q and \E to some regexp's Tue Feb 17 1998 <gbarr@pobox.com> (Graham Barr) Mail::Mailer - Added patch from Jeff Slovin to pass correct args to mailx on DG/UX *** Release 1.11 Fri Jan 2 1998 <gbarr@pobox.com> (Graham Barr) Mail::Internet - Documentation updates Mail::Util - Fixed "Use of inherited AUTOLOAD" warning Mail::Mailer - Some version of mail do not like STDIN bot being a terminal and also print 'EOT' to stdout when done. Opened STDOUT/ERR to /dev/null Makefile.PL - Changed so that Mail::Field is not installed if perl version is less than 5.004 Mail::Mailer - removed all for(my $i ...) and foreach my $d as they break compatibility with pre perl5.004 Tue Nov 25 1997 <gbarr@pobox.com> (Graham Barr) Mail::Mailer - Incremented VERSION, for some unknown reason it went backwards. Mon Nov 17 1997 <gbarr@pobox.com> (Graham Barr) Mail::Util - Added /var/adm/sendmail to the list of directories to search for sendmail.cf Mon Nov 17 1997 <gbarr@pobox.com> (Graham Barr) Mail::Internet - added options to nntppost Mail::Mailer.pm - Added check for bsdos to add -I option to Mail t/mailcap.t - MAde less unix specific by changing from using 'test' to using perl Sun Nov 16 1997 <gbarr@pobox.com> (Graham Barr) Added Mail::Field::AddrList to MANIFEST *** Release 1.10 Wed Nov 12 1997 <gbarr@pobox.com> (Graham Barr) Mail::Field::AddrList, Mail::Filter - new modules Mail::Field - Changes to the way sub-classes are registered and handled. Wed Nov 5 1997 <gbarr@pobox.com> (Graham Barr) Mail::Mailer - Modified code that searches for the executable to run --- --- -- 1997 <gbarr@pobox.com> (Graham Barr) Mail::Address - Documentation updates Mail::Header - Small tweak to _fold_line for lines that are just shorter than the fold width, but include whitespace Mail::Internet - does not inherit from AutoLoader. Instead AUTOLOAD is GLOB'd to AutoLoader::AUTOLOAD Mail::Mailer and Mail::Send - Modified PODs to reflect that Tim Bunce is not the maintainer. Mon Feb 24 1997 o Release 1.09 o Mail::Header Fixed a de-reference problem in unfold() _fold_line will no longer fold the From line that gets added by the user mail agent. o Mail::Internet Added DESTROY, to stop AutoLoader errors o Mail::Mailer Fixed an undef problem in new o Tests Added t/send.t and t/mailer.t Tue Jan 07 1996 o Release 1.08 o fixed Mail::Mailer::new so that it uses Symbol properly to generate the anonymous glob. Thu Jan 02 1996 Graham Barr <gbarr@.ti.com> o Release 1.07 o Removed Mail::MIME as it is now redundant. See $CPAN/authors/id/ERYQ/MIME-tools-x.xx for MIME related modules o Attempt to make Mail::Mailer find the correct mail program to invoke o Added Mail::Internet::unescape_from at the request of <kjj@primenet.com> o Fixed a bug in _fmt_line, was appling a s/// to a ref ???, now de-ref o Added Mail::Internet::escape_from at the request of <kjj@primenet.com> o Modified Mail::Internet::new so that it no longer accepts the message as an array. It now accepts an arg and key-value aoptions o Fixed a mis-spelling of Received in Internet.pm o Fixed a problem in Header.pm when return-ing line text and tag == 'From ' length($tag) + 2 is incorrect Wed Jul 24 1996 Graham Barr <gbarr@.ti.com> o Mail::Send, Mail::Mailer Incorporated a patch from Nathan Torkington <gnat@frii.com> to allow headers to be passed as scalars as well as list-refs. It also included some doc updates. Many thanks to Nathan Tue Nov 21 1995 Graham Barr <gbarr@.ti.com> o Added Mail::Internet::nntppost and Mail::Internet::smtpsend as AutoLoaded methods o Some small tweaks to mailaddress() Thu Nov 16 1995 Graham Barr <gbarr@.ti.com> o Modified Mail::Util to use Net::Domain Tue Nov 7 1995 Graham Barr <gbarr@.ti.com> o Changed name of Mail::RFC822 to Mail::Internet Wed Nov 1 1995 Graham Barr <gbarr@.ti.com> o Fixed remove_signature to be anchor'd to the start of the line o Re-vamped the reply to method Fri Sep 8 1995 Graham Barr <gbarr@.ti.com> o Applied patch from Andreas Koenig to fix problem when the user defined $\ Wed Aug 30 1995 Graham Barr <gbarr@.ti.com> o Updated documentation Tue Aug 29 1995 Graham Barr <gbarr@.ti.com> o Modified Mail::Util::maildomain to look in a list of places for sendmail.cf Thu Aug 24 1995 Graham Barr <gbarr@.ti.com> o Modified maildomain to look for /usr/lib/smail/config before attempting smtp Wed Aug 16 1995 Graham Barr <gbarr@.ti.com> o Modified maildomain to prepend hostname to domainname if it cannot find the address via SMTP o Added mailaddress() to Mail::Util Tue Aug 15 1995 Graham Barr <gbarr@.ti.com> o Modified Mail::Util::maildomain to parse /etc/sendmail.cf if it exists and extract the mail domain Mon Aug 14 1995 Graham Barr <gbarr@.ti.com> o Added maildomain into Mail::Util o Applied Andreas Koenig's patches to Mail::Mailer and Mail::Send Wed Jul 12 1995 Graham Barr <gbarr@.ti.com> o Added -a/-s switches to rplyto to enable a choice of reply to all or just the sender | https://web-stage.metacpan.org/dist/MailTools/changes | CC-MAIN-2022-27 | refinedweb | 5,009 | 67.55 |
29 August 2013 16:11 [Source: ICIS news]
HOUSTON (ICIS)--Although the geopolitical concerns of a potential military strike against Syria by Western powers has helped to push NYMEX WTI and Brent crude higher, the reaction in US petrochemicals has been mixed, according to trade participants on Thursday.
Prices in the aromatics markets late on Wednesday were mostly stronger on the day, with trade participants pointing to concerns of a military conflict with ?xml:namespace>
Prompt
Current benzene spot prices this morning were down by 2 cents/gal but were still up from the previous week.
Meanwhile, buying interest for toluene and mixed xylene (MX) late on Wednesday was stronger. Prompt MX bid/offer levels were at $4.35-4.45/gal FOB, as bids firmed from $4.25/gal the previous day while offer levels were flat.
Prompt toluene bid/offers also narrowed to $4.00-4.10/gal FOB during the day with bids moving up from $3.95/gal the previous day.
The
August ethylene prices were steady as material traded at 55 cents/lb, flat with two trades the previous day.
August polymer-grade propylene (PGP) bid/offer levels narrowed to 66.50-67.75 cents/lb from 66.00-69.00 cents/lb the previous | http://www.icis.com/Articles/2013/08/29/9701650/us-chemical-markets-reaction-mixed-ahead-of-potential-strike-against.html | CC-MAIN-2014-52 | refinedweb | 210 | 66.84 |
Free RAM: 1029Type is FAT16File size 5MBStarting write test. Please wait up to a minuteWrite 198.11 KB/secMaximum latency: 81396 usec, Avg Latency: 500 usecStarting read test. Please wait up to a minuteRead 299.33 KB/secMaximum latency: 2720 usec, Avg Latency: 329 usec
This allows occasional pauses by the card to erase large areas of flash. You can only write to erased flash.
Best performance is indeed made with buffers equal to the sector size of the SD card (512 bytes).You say one record consists of 14 bytes, [millis, f1,f2] comma separated fields. But how do you separate records?Consider writing pure binary: (the commas do not add anything except 2/14 slack ~14%.4 bytes long millis4 bytes float f14 bytes float f2So every 12 bytes is one record. 512 bytes can hold 512/12 = 42 records, leaving 8 bytes . So in your main loop() you should have a counter that writes the 512 bytes buffer to SD after 42 records. while writing this buffer you should fill buffer 2. etcIf you think reading 12 bytes records is difficult you could consider writing 16 bytes records of which exactly 32 fit in 512 bytes. Room for an additional sensor
To log 1000 samples per second requires a totally different technique than your sketch.I suggest you look at example sketches in fastLoggerBeta20110802.zip here advice like 512 byte buffers unless you are doing raw writes to the SD.The big problem is that a write to a file on an SD can take 200 milliseconds. You must capture data while the SD write is in progress. This means interrupts, probably using timers, and cleaver buffering schemes.
#include <SD.h>char buffer[512]; // For storing dataint i = 0; // countint r = 1234; // "reference"int delta = 10; // sampling interval, mslong previousMillis = 0;const int chipSelect = 4;const int yPin = A5; // motor output, after scalingvoid setup() { Serial.begin(115200); Serial.print("Initialising SD card..."); pinMode(10, OUTPUT); // CS pin pinMode(yPin, INPUT); // see if the card is present and can be initialised: if (!SD.begin(chipSelect)) { Serial.println("Card failed, or not present"); return; } Serial.println("card initialised.");}void loop() { File dataFile = SD.open("graphlog.txt", FILE_WRITE); unsigned long currentMillis = millis(); int y = analogRead(yPin); String graphString = String(millis()); graphString += ","; graphString += String(r); graphString += ","; graphString += String(y); if(currentMillis - previousMillis == delta) { previousMillis = currentMillis; for (i=0; i < 510; i++) { // 512? buffer[i] = graphString; } }}void fill_buffer() { for (int i=0; i < 512; i++) { buffer[i] = i; }}
If you do a 1000 samples per second, why add the millis() timestamp? If evenly divided you just get consecutive numbers, if not you get multiple samples with the same timestamp (implying a zero duration between). using the micros as timestamp gives at least an - sub milli - interval between two samples.
For raw writes you must write 512 bytes.For file writes there is some gain if all writes are 512 bytes and aligned on 512 byte boundaries.For most apps this gain is not worth the effort. My benchmark example in SdFat uses a 100 byte buffer. With a SanDisk 1 GB Ultra II card formatted FAT16 with the SdFormatter example I get this result:This is a very good card for data logging. Not because of the average write rate of 198 KB/sec but the max latency of 81396 usec. Some cards have a max write latency of 200000 usec.The 198 KB/sec is way faster than needed to log 1000 records per second with 14 byte records.Once again the design problem is to overcome the occasional long write latency that is inherent in SD cards.Even class 10 cards have this problem. The assumption is that devices like video cameras have lots of buffer so they achieve high average write rates for very large files. This allows occasional pauses by the card to erase large areas of flash. You can only write to erased flash.
Type any character to startFree RAM: 1043Type is FAT16File size 5MBStarting write test. Please wait up to a minuteWrite 165.30 KB/secMaximum latency: 107708 usec, Avg Latency: 600 usecStarting read test. Please wait up to a minuteRead 325.54 KB/secMaximum latency: 2336 usec, Avg Latency: 302 usec
(back to the goal)Can you explain why you need 1000 samples per second? What is changing so fast you want to monitor?maybe tell something more about the project?
ADC_DELAY is for high impedance sensors. If you have low impedance sources set ADC_DELAy to zero.If ADC_DELAY is nonzero, analogRead is called and the value is discarded to switch the ADC MUX to the channel then delay(ADC_DELAY) is called and finally analogRead() is called again to get the value.This allows the allows the ADC sample and hold to stabilize with high impedance sensors.
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=100366.msg761337 | CC-MAIN-2015-18 | refinedweb | 837 | 66.13 |
Introduction to SYCL
Open Source Your Knowledge, Become a Contributor
Technology knowledge has to be shared and made accessible for free. Join the movement.
Host Setup and SYCL Queue
This exercise is a slightly modified version of the vector addition we did in the previous example. You are going to take over in this one and complete the code for this application by yourself. Instructions are provided on each step of completion, so you should be able to do it. Let's begin.
By now, you should know that you need to include the SYCL header file in order to use SYCL in your application. It is already included at the top of the source file -
#include <CL/sycl.hpp>.
Host Setup
Description
The first step is to initialize the vector data on the host.
We will be using:
cl::sycl::float4
which is a type alias for
cl::sycl::vec<float, 4>
It is a SYCL type struct that provides OpenCL vector functionality for host and device.
Task
Define 2 input vectors and 1 output vector.
Inputs:
- vector
a:
{1.0, 1.0, 1.0, 1.0}
- vector
b:
{1.0, 1.0, 1.0, 1.0}
Output:
- vector
c:
{0.0, 0.0, 0.0, 0.0}
Location in the source code:
// <<Setup host memory>>
Hint
sycl::float4 a = { 1.0f, 1.0f, 1.0f, 1.0f }; // input 1
Initialize SYCL Queue
Description
SYCL queue is constructed from the selection of a supported device.
The system is set to always force the execution of the SYCL examples on the CPU device. Thus, the default selector will select the CPU because of its heuristics to identify the supported platforms and device on our system.
Task
Initialize a SYCL queue with either a CPU or default device selector.
Location in the source code:
// <<Setup SYCL queue>>
Hint
sycl::queue myQueue(sycl::default_selector{}); // explicitly target the CPU: sycl::cpu_selector{} | https://tech.io/playgrounds/48226/introduction-to-sycl/host-setup-and-sycl-queue | CC-MAIN-2022-05 | refinedweb | 318 | 64.51 |
Enables int64 Addition, subtraction, multiplication, division and modulus.
4 Downloads
Updated 05 Nov 2010
This submission enables the following operations for the int64 and uint64 data types:
* Addition
* Subtraction
* Multiplication (element-wise and matrix)
* Division (element-wise only)
* mod, abs, bitshift
See the published file for more details.
Download apps, toolboxes, and other File Exchange content using Add-On Explorer in MATLAB.
dmitrii (view profile)
Hello! Tried to run the script, but there was an error:
int64 plus...
int64 minus...
int64 times...
int64 matrix multiplication...
Internal error 1028 line 163
incorrect char m
leal ( movl -40(%ebp),wildcard??? Error using ==> mex at 218
Unable to complete successfully.
Error in ==> compile_int64 at 10
mex int64matmul.c
Has anyone encountered this? How to avoid?
craq (view profile)
Great functionality for those of us with pre-2011b versions. I had the same error as Vincent, and his solution worked for me too. Off the top of my head I couldn't figure out how to do the lshift and rshift, but I don't actually need them.
Jon (view profile)
Seamless use. Well done Petter and thanks!
Petter (view profile)
Vincent: Interesting. I thought the standard allowed the pasting of operators. Is this not true?
Vincent (view profile)
Hi Petter,
Thanks for your great package.
It didn't compile out of the box, however.
I tried to compile this on a Debian Stable machine, having gcc-4.3.2 installed.
The compilation of the first mex function aborts with the following error:
int64operation.c:38:1: error: pasting "OPERATOR" and "=" does not give a valid preprocessing token
I had to apply the following modifications:
Current situation:
One macro in each operator source files (#define OPERATOR +) and the OPERATOR_EQ defined as OPERATOR ## = in the (u)int64operation.c source files.
New situation:
Use two separate macro functions.
For example, the contens of int64plus.c is now:
---
#define UNARY_OPERATOR(a,b) ((a) += (b))
#define BINARY_OPERATOR(a,b) ((a) + (b))
#include "int64operation.c"
---
Where the OPERATOR and OPERATOR_EQ were used in int64operation.c:
---
c[i] OPERATOR_EQ b[i];
becomes
UNARY_OPERATOR(c[i], b[i]);
---
c[i] = a[0] OPERATOR b[i];
becomes
c[i] = BINARY_OPERATOR(a[0], b[i]);
---
I don't know if this works for every compiler around there, but for me, this works like a charm...
If you're interested, I can send you my modified version to the address specified in the README.txt file, so that you can more easily update your package.
Jan Simon (view profile)
Although INT64 values can be very large, you can still get an overflow. While Matlab uses saturated arithmetic for integer types, your tools do not. It is important to mention this in the documentation. Implementing a saturation (means: value is the max or min INT value if it exceeds the valid range) is not trivial, and omitting it is no reason to reduce the 5 star rating.
You mention precompiled in the ReadMe - where can I find them?
Following Loren's block about inplace operations, I suggest to replace e.g.:
function c = abs(a) c = a; end
by: function a = abs(a), end
Jan Simon (view profile)
Thank you for the fast update!
Igor (view profile)
Thanks a lot for this. Could you also add support for _colonobj so that one doesn't get a "Undefined function or method '_colonobj' for input arguments of type 'uint64'" when writing 1:n
Petter (view profile)
1. The reason the check is made in an .m file is that otherwise the MEX files would have to be quite verbose with lots of conversions from various types to int64.
2. I'll implement this optimization
3. Same here
4. OK, I'll change that
5. OK, I guess I could follow Matlab here
6. long long works on GCC, MSVC++ and LCC and feels more standard
7. True, I'll change that
Thanks for the suggestions.
Jan Simon (view profile)
A really needed and useful submission!
The efficiency can be improved significantly:
1. checkinputs: "if length(size(a)) ~= length(size(b)) ... if ~all(size(a) == size(b))" can be combined to: "if isequal(size(a), size(b))". Performing the check in the Mex and omitting the intermediate call of the M-functions would reduce the overhead.
2. (u)int64abs: At first the input is duplicated to plhs[0]. Then it is not necessary to copy positive values from the input to the output. A faster approach:
void mexFunction(...) {
signed long long *c, *cf;
plhs[0] = mxDuplicateArray(prhs[0]);
if (!plhs[0]) ...
c = (signed long long *) mxGetData(plhs[0]); // not mxGetPr
cf = c + mxGetNumberOfElements(plhs[0]);
for ( ; c < cf; c++) {
if (*c < 0L) {*c = -*c; }
}
}
3. Similar improvements for plus, minus, times, rdived: If the 1st input is duplicated already, the operation does not need to access the values from the inputs again. So these lines in (u)int64operations.c:
c[i] = a[i] OPERATOR b[i]
can be changed to the faster:
c[i] OPERATOR b[i]
using e.g. "+=" as operator instead of "+".
4. TMW suggests to use mxGetData instead of mxGetPr to get a pointer to non-DOUBLE arrays. But at least both method works currently.
5. Matlab's MOD can handle 0 as second input, and a user might expect this for the (U)INT64 version also.
6. "long long" is supported by the LCC compiler shipped with Matlab. For (all) other compilers, you can use uint64_T and int64_T as defined in "tmwtypes.h".
7. The check for division by zero can be included in the same loop as the operation to save time.
Petter (view profile)
Never mind, I misunderstood what you meant. Implementing a crude A*B is very easy, so I might do that.
Petter (view profile)
Array multiplication .* is already implemented.
John D'Errico (view profile)
I can't test this myself, so I cannot rate it. But if it now prevents A*B when A and B are both matrices, neither of which is a scalar, then this is good if it would otherwise have done the wrong thing. Better of course is to implement array multiplication too (perhaps in a future version.) Regardless, there have been a few times recently when I wanted to have int64 operations defined, so this is a good addition to MATLAB.
Petter (view profile)
John, the reason the * operator is included at all is because it is desirable to support operations like
a = int64(2);
b = int64( [1 2; 3 4] );
c = a * b
That is the reason * was included. The next update will issue an error if both a and b are matrices. Thank you for your suggestions.
Derek O'Connor (view profile)
John -- Thanks for pointing out that Int64 matrix multiplication is element-wise. I should have been more careful, given that the description clearly states that "multiplication and division are element-wise only". This of course explains the O(n^2) behavior.
On a more general point, I wish there was a consistent standard for matrix multiplicaton. For example, in O-Matrix, C= A*B is the same as Matlab, but C=A^2 gives cij = (aij)^2. This is really confusing. In Fortran 90, C=A*B gives cij = aij*bij.
John D'Errico (view profile)
Derek - it points out that even though the function mtimes is provided, this package only claims to support ELEMENTWISE operations.
The quadratic behavior tells me that this code did not do a true matrix multiply. Did you check that A64*B64 was correct in your test? Element wise multiplication will take O(N^2) flops, whereas a matrix multiply (as * is supposed to generate) is O(N^3).
I would have preferred that an error be generated if you use * on a pair of matrices, when that operation will not return the proper result. This would normally make me downrate this code. Since I cannot test it without compiling it, I won't give any rating at all.
Derek O'Connor (view profile)
This seems to be an excellent package. I used it to test matrix multiplication
and obtained very surprising results : multiplying two 10^4x10^4 int64 matrices took about 2 secs while two doubles took about 35 secs. The double mat-mult used the Math Kernel and all 8 cores while the int64 did not use the Math Kernel.
Also, a 2-degree polyfit of the int64 mat-mult times was very good, i.e., t(n) is O(n^2)!
Can anyone enlighten me? Here is the test function:
function times = TimesInt64(v,degree);
% Test matrix multiplication using int64arithmetic
% int64arithmetic compiled with Microsoft Visual C++ 2008
% Dell Precision 690, 2xQuadcore Xeon 5345, 2.3GHz, 16GB
% Windows Vista 64
% Example : >> times = TimesInt64(1:15,2);
% Derek O'Connor Sept 2009
nvals = v'*10^3;
times = zeros(length(nvals),1);
c = intmax('int64');
for k = 1:length(nvals)
n = nvals(k);
A64=int64(floor(c*rand(n,n)));
B64=int64(floor(c*rand(n,n)));
tic; A64*B64; times(k) = toc;
end;
p=polyfit(nvals,times,degree);
ptimes = polyval(p,nvals);
plot(nvals,times,'.',nvals,ptimes,'-')
% End Function TimesInt64(v,degree)
% n times (secs)
% 1000 0.0176
% 2000 0.06972
% 3000 0.15354
% 4000 0.34136
% 5000 0.50145
% 6000 0.63655
% 7000 0.9838
% 8000 1.2829
% 9000 1.5717
% 10000 2.0059
% 11000 2.4728
% 12000 3.0439
% 13000 3.4914
% 14000 4.2588
% 15000 4.9525
% p(x) = 0.098649 - 5.6075e-005*x + 2.5027e-008*x^2
Bruno Luong (view profile)
What a relief to have finally a basic int64 arithmetic operators supported.
Petter (view profile)
I used Microsoft Visual C++ Express 2008 to compile the library. "long long" _is_ supported in later versions of Visual C++. Also, "long long" is more supported by other compilers and will probably end up in the next C++ standard as well.
Bruno Luong (view profile)
It is also puzzling this can work
a=int64([0 1; 2 3])
b=int64([4 5 6 7]')
a+b
Bruno Luong (view profile)
After a quick look I have few comments that I wish the author could correct:
As stated, the C Mex can't be compiled by the popular MS VISUAL C (type LONG LONG must be replaced by __int64)
It is dangerous when no checking is performed after mxDuplicateArray that could returns NULL and results in a crash if there is not enough memory. | http://www.mathworks.com/matlabcentral/fileexchange/24725-int64-arithmetic-in-matlab?requestedDomain=www.mathworks.com&nocookie=true | CC-MAIN-2017-47 | refinedweb | 1,732 | 67.25 |
"Serge E. Hallyn" <serge@hallyn.com> writes:>.Given that the simple cases are so easy it probably doesn't matter inthat sense.However we now have the case where user namespaces own pid namespaces,and uts namespaces, and network namespaces, and ipc namespaces, andfilesystems. Throw in some mount propagation and use of setns andthings could get confusing. It is something that will need to befigured out if CRIU is going to properly checkpoint containerscontaining containers containing containers containing containers.Did I mention I like recursion?>> > That said in a normal use scenario I don't think that information is>> > needed.>> >>> > Do you have a particular use case besides checkpoint/restart where this>> > is useful? That might help in coming up with a good userspace interface>> > for this information.>> >> So, I spend a moderate amount of time working with people to introduce>> them to the namespaces infrastructure, and one topic that comes up now>> and this introspection/visualization tools. For example,>> nowadays--thanks to the (bizarrely misnamed) NStgid and NSpid fields>> in /proc/PID--it's possible to (and someone I was working with did)>> write tools that introspect the PID namespace hierarchy to show all of>> process's and their PIDs in the various namespace instance. It's a>> natural enough thing to want to do, when confronted with the>> complexity of the namespaces.>> >> Someone else then asked me a question that led me to wonder about>> generally introspecting on the parental relationships between user>> namespaces and the association of other namespaces types with user>> namespaces. One use would be visualization, in order to understand the>> running system. Another would be to answer the question I already>> mentioned: what capability does process X have to perform operations>> on a resource governed by namespace Y?>> I agree they'll probably want it, but if we want for a real need and> use case we can do a better job of providing what's needed.That two which is why I mentioned CRIU. But yeah it will probably takea little while to get there.Eric | https://lkml.org/lkml/2016/7/6/425 | CC-MAIN-2017-43 | refinedweb | 343 | 52.19 |
Linux devices, drivers and device requests. More...
#include <ipxe/list.h>
#include <ipxe/device.h>
#include <ipxe/settings.h>
Go to the source code of this file.
Get linux device driver-private data.
Definition at line 88 of file linux.h.
References linux_device::priv.
Look for the last occurrence of a setting with the specified name.
Apply a list of linux settings to a settings block.
List of requested devices.
Filled by the UI code. Linux root_driver walks over this list looking for an appropriate driver to handle each request by matching the driver's name.
Referenced by linux_args_cleanup(), and parse_net_args().
List of global settings to apply.
Filled by the UI code. Linux root_driver applies these settings.
Referenced by linux_args_cleanup(), and parse_settings_args(). | http://dox.ipxe.org/include_2ipxe_2linux_8h.html | CC-MAIN-2019-39 | refinedweb | 122 | 64.37 |
I have followed the tutorial to create a post converter script, from here:-
In the tutorial a script file is referenced as a plain .cs file.
I now want to be able to attach a debugger in my code so I can step through it and check out some info. I've found this question:-
It gave a little idea about how attaching the debugger, but I couldn't find anything listing all the steps involved. I assume I need to point the converter to my DLL, which will be alongside my PDB file.
I've tried setting:-
Configuration -> Converters -> Conversion Scripts -> Script File
to the DLL in my Class Library project's /bin/Debug folder.
and set:-
Configuration -> Converters -> Conversion Scripts -> Type Name
to the fully qualified 'namespace.class, assembly' value.
I then save it all, attach my debugger to the CESConverter7.exe process (under Managed mode), and then rebuild the source from the CES Admin tool.
The symbols don't get loaded, and so no breakpoints will ever be hit.
Can anyone tell me what I am missing?
Answer by sholmesby · Jun 18, 2015 at 10:08 AM
I worked out that:-
the Script File needs to point to the DLL in the /Debug folder of my project
the c# class library project needs to have a Target Framework of .NET 3.5
When debugging, you need to select 'Managed (v3.5, v3.0, v2.0) code'. - Thanks @sdesilets for the tip
I've detailed the full setup in my blog post, here:-
Answer by Sebastien Desilets · Jun 17, 2015 at 06:21 PM
When building
Make sure your conversion assembly targets v3.5, v3.0 or v2.0 of the .Net framework.
The CESConverter7 process uses the CLR v2.0.
When debugging
Attach to CESConverter7.exe in "Managed (v3.5, v3.0, v2.0) code".
It was 'Managed (v4.5, v4.0) code'. Should my project be .NET 3.5 based? Have I properly setup the DLL path and Type Name?
Oops sorry I didn't notice the type name. No, don't specify the assembly name, only the full class name (namespace + class name).
It also helps to have the Index log opened when you rebuild or refresh the CES Source you added your conversion script to. In it you should see something like this: Rebuild on source YOURSOURCE(with conversion script YOURSCRIPTNAME) has been requested by YOU. Loading plugin 'YOURCOMPLETEDLLPATH'
or Loading plugin 'YOURCOMPLETEDLLPATH' [SERVERNAME] Error loading custom converter: Type names passed to Assembly.GetType() must not specify an assembly.
Answers Answers and Comments
No one has followed this question yet. | https://answers.coveo.com/questions/5195/how-do-you-debug-a-net-c-conversion-script.html | CC-MAIN-2018-09 | refinedweb | 436 | 74.19 |
gadgetsgadgets
A toolkit for web development, which enhances Scalatags and unifies it with Scala.Rx
There are a million different options for doing Web development in Scala.js, including frameworks ranging from React to Suzaku to Bindings to Udash. All of them are lovely, and if you find one that suits you, use it.
However, a lot of folks decide to work at a lower level: in particular, they find that they like @lihaoyi's Scalatags library, and want to build their site using that. Then they discover Scala.Rx, and decide that they want to use that. Then they try to combine the two, and find that it isn't quite that easy.
Querki is one of the oldest shipping products built on Scala.js; its client predates pretty much all of the major frameworks. So I wound up in just that position, building something myself out of Scalatags and Scala.Rx, and puzzling out how to use them together to produce a reasonably serious FRP framework. This library is the result.
Using Gadgets in Your ApplicationUsing Gadgets in Your Application
To use Gadgets in your Scala.js project, include it as usual:
libraryDependencies += "org.querki" %%% "gadgets" % "0.2"
Gadgets has transitive dependencies on Scalatags and Scala.Rx, as well as sQuery and of course the DOM facade. All of these will be pulled in automatically.
Gadgets currently requires the use of jQuery, so you will need to include that in your top-level HTML files. The jquery-facade is pulled in transitively. I plan to phase out the use of jQuery eventually, in favor of sQuery, but that's a long-term project.
About GadgetsAbout Gadgets
The Gadgets toolkit is mainly focused on extending the core notion of Scalatags -- being able to build HTML nodes using simple Scala functions -- and enabling you to build reusable higher-level Gadgets that fit into those functions naturally. It also provides helpers for some of the common problems you encounter when building complex UIs.
Gadgets is specifically not pure-functional in the way some frameworks are: it doesn't use a VDOM, and admits that it is closely connected to the actual DOM. State is principally managed with Scala.Rx
Vars, in a slightly old-fashioned data-binding model. In return, it is a bit "closer to the metal". Personally, I find it slightly easier to understand how the code relates to what shows up on-screen, but it's very much a matter of taste.
This library is just a toolkit, not a full-fledged framework. There is a full, opinionated framework in Querki, built on top of this, and I might at some point refactor that out and make it available as a separate library. But the Gadgets are quite useful on their own, whether or not you are incorporating them into a framework.
A Motivating ExampleA Motivating Example
Here's an example of a hypothetical Gadget -- in this case, a pane composed of a date range, and a section that displays the results of a server query for that date range. It's not fully-fleshed out (and as of this writing might have bugs), but illustrates what a typical mid-level Gadget looks like. Note that this is just an illustration of a partial program -- a full working example would require several other Gadgets, such as TransactionList.
import org.scalajs.dom import org.querki.gadgets._ import rx._ import scalatags.JsDom.all._ class TransactionRangeGadget(implicit ctx:Ctx.Owner) extends Gadget[dom.html.Div] { val startDate = Var[Date](Date.now) val endDate = Var[Date](Date.now) val datePair = Rx { (startDate(), endDate()) } datePair.triggerLater { transactionsPane.foreach { tlist => tlist.updateWithDates(datePair.now) } } val transactionsPane = GadgetRef[TransactionList] .whenRendered { tlist => tlist.updateWithDates(datePair.now) } def doRender() = div( p("Please specify the date range to show:"), span( new DatePickerGadget(startDate), " through ", new DatePickerGadget(endDate) ), transactionsPane <= new TransactionList() ) }
That doesn't show everything (and isn't the only way you might organize this problem), but it illustrates some of the most useful bits:
- You can freely mix Gadgets with HTML nodes in your Scalatags
- Gadgets can and frequently do manage data-binding with Scala.Rx
- You can define class members for particular nodes in the Scalatags tree, and access them elsewhere (in this case, the
transactionsPane) -- this makes it much easier to build complex, inter-related UIs
Building a GadgetBuilding a Gadget
TODOTODO
- Choose a richer example: start simple, and build it up in the later sections
- Relationship of Gadgets to Elements
- Gadgets are (essentially) live Scala code attached to the DOM
- ManagedFrag, and the key difference between Gadgets and Scalatags: Gadgets actually maintain the relationship to the DOM
- The meaning of "render" in ManagedFrag
- The onCreate() and onRendered() hooks
- Accessing the tree via parentOptRx
- GadgetLookup, and finding Gadgets from the DOM elements
- Some of the above probably belongs in an "Advanced Concepts" section
GadgetRefGadgetRef
TODOTODO
- The point of GadgetRef: to let you build a readable HTML tree, and easily refer to Gadgets inside it
- A holder to a reference to a Gadget in the tree
- The all-important opt member
- map, the iffily-named flatMap, and foreach
- mapOrElse, mapElem
- isDefined and isEmpty
- assignment: the <= and <~ operators
- whenRendered (and maybe whenSet, although make clear that that's usually not right)
- Gadget.of for hooking plain HTML nodes
Using Scala.Rx in GadgetsUsing Scala.Rx in Gadgets
TODOTODO
- Proper use of Rx vs Var as parameters -- consume vs produce
- Talk about
Ctx.Owner, and how to use it -- thread it down from the top.
- RxAttr, to let you use Rx'es as attribute values
- RxEmptyable, once we spruce that typeclass up and promote it into the library
- Eventually, docs for the various common Rx utility types
Version HistoryVersion History
- 0.3 -- Significant changes to the functions exposed by
GadgetRef, because the signatures of
map()and
flatMap()were kind of scungy, and didn't do what you typically want with the current version of Scala.Rx. The old functions still exists as
mapNow()and
flatMapNow(), to signify their "now-ness", but there are also
mapRx()and
flatMapRx(), which produce proper Rx's and work as you would expect. Also added
mapRxOrElse()and
flatMapRxOrElse()as boilerplate-killers.
- 0.2 -- Added the RxDiv and RxTextFrag components, and the RxEmptyable typeclass.
- 0.1 -- Initial release. This is called 0.1 because it's incomplete: it only contains the basics, none of the actual Gadgets that we use in Querki yet. But what is here is pretty battle-tested, and has been in use in Querki for a couple of years now. | https://index.scala-lang.org/jducoeur/gadgets/gadgets/0.2?target=_sjs0.6_2.12 | CC-MAIN-2019-35 | refinedweb | 1,093 | 54.83 |
HC06 Bluetooth 2.0 Module is one of the short-range and cheapest communication modules. It consumes very low power and with 2.0 Bluetooth technology. The basic purpose of the module is to establish short-range communication between two microcontrollers and systems. The Bluetooth module mostly uses with Arduino/Microcontroller only in developing projects because the module has all the communication a modern Bluetooth offers. The only problem is the module is to perform the slave functions only. Even it has an internal storage method for the last slave to connect them without any permission or verification. The autoconnection and other methods are changeable through command mode in HC06.
HC06 Pin Configuration
Module HC06 is a slave Bluetooth device that offers serial communication (UART). The module has only communication and power pins. It also didn’t have any indication like other Bluetooth modules have. All the pins of HC06 are:
VCC
The module HC06 has a single power-up the pin, which connects with +5V to power up the device.
GND
The module connects with external devices and power supply to operate and to make it happen a common ground is necessary. The GND pin helps to make the common ground.
TX
The communication method in the device is UART and TX pins are for data sending.
RX
This pin helps to receive the data in UART communication with external devices like Arduino or Microcontrollers.
HC06 Bluetooth Module Features
- HC06 BT module has a feature to remember the last device name which helps to connect them automatically.
- It can send data with up to 2-3MBs speed at a short distance only.
- Bluetooth Module HC06 is a total slave and is changeable by even command mode.
- The module has both command and data mode by default.
- The device can operate with any TTL/CMOS device due to its operating voltage which is 5V and its internal structure.
- There won’t be any antenna requirement in the module because of its builtin antenna of 2.4GHz.
- The data transfer for HC06 is based on GFSK technology. Therefore, it also has encryption and authentication for data safety.
- The operating temperature of the module is -20 to +55 and it operates at 20mA current.
Alternative BT options
How to use HC06
This Bluetooth module is useable with any microcontroller with UART communication. In every device the hardware serial and in most of the device software serial is available. Here you will learn how to use HC06 with Arduino and the same method will help to work with other microcontrollers and even boards. First, connect the module according to the given circuit diagram. Then understand first how the device works.
It acts as a slave. Hc06 can connect with external devices just by power on with the use of the password.
- Default Password – 1234 or 0000
BT will start transferring the data to the serial pins and to read them the serial read programming of Arduino/microcontroller will help.
Hardware Serial Communication Arduino
In Arduino, the two methods are helpful to read and write the data serially. The first one is the Hardware serial which is the default method. To use the hardware serial the following code will help.
void setup() { Serial.begin(9600); } void loop() { if (Serial.available() > 0 ) { byte data = Serial.read(); Serial.print(data); } Serial.print("DATA"); Serial.write("DATA"); }
The first command serial.begin is for initializing the baud rate of communication. The default baud rate of the module HC06 is 9600 but the baud rate is changeable. Every device with UART communication has builtin methods that offer the change in baud rate at specific levels. The second command is serial.available which is to detect the serial incoming and the last most important is serial.read. The serial.read is for reading the data from the external module. Therefore, always keep in mind to read the data in the byte variable because the data travels between the module are always in bytes. The Serial.write/Serial.print is for sending the data. Any data is spendable using the following command.
Command Mode HC06
The command mode is one of the defaults setting modes for hc06 BT module, where its factory settings are changeable. To access the command mode the above code is useable but the baud rate needs to be different. The baud rate for command mode is 38400. After making the baud rate at 38400 the commands will operate from the serial monitor of the Arduino IDE. These are following general commands which help to change the device settings.
There are two methods to send commands. The first one is writing them directly and second is with Serial.print command in COM port of Arduino. just enter the command like the following example:
Serial.print("AT");
In some devices the one method works, which depend on one of their module and software versions.
Software Serial Arduino for HC06
The software serial method is the best method to use the device with UART communication. The software serial uses the digital pins to generate the data in bytes and transmit and receive them exactly like the UART. The following code will help to use the software serial and avoid the hardware serial for other devices:
#include <SoftwareSerial.h> SoftwareSerial mySerial(10, 11); // RX, TX void setup() { mySerial.begin(38400); } void loop() { if (mySerial.available()) Serial.print(mySerial.read()); } mySerial.print(Serial.read()); mySerial.write(Serial.read()); }
In the above code, the pin 10 will act as RX pin and 11 well as TX. Both pins are changeable.
The basic purpose of software serial communication is to leave the hardware serail for other devices. So, they can operate without any interruption with Arduino. The software serial is compatible with Bluetooth HC06 for every kind of data because of its data transfering in bytes. Otherwise, the software serial is hard to implement because of it only able to send data from 0-255 in integer form.
IOT Example HC06 With Arduino and DHT11
Here we will integrate the Bluetooth for an IoT application. In-home bases, the Bluetooth is useable for room temperate detection. We will use the DHT11 for temperature sensing and HC06 Bluetooth for transmitting the data. The Arduino will perform the basic operations of both devices. Here’s the following circuit.
To make it work the Arduino code needs to upload and here’s the code.
#include "DHT.h" DHT dht(2, DHT11); void setup() { Serial.begin(9600); dht.begin(); pinMode(3, OUTPUT); } void loop() { float temp = dht.readTemperature(); float hum = dht.readHumidity(); Serial.print(temp); delay(100); Serial.print(hum); delay(1000); if (temp == 25) digitalWrite(3, HIGH); else digitalWrite(3, HIGH); }
The code will take the data from analog input pins and then it will transmit them to the hc06 Bluetooth. The Bluetooth will transmit the data to the Bluetooth receiving device. The receiving device could be anything like a screen, computer or smartphone. Then the broadcast signal is viewable with every second. The code has a limit for temperature increase. In case of an increase in temperature from above 50 degrees, the buzzer will activate to alarm the users.
Applications
- In most remote controllers the module is common due to its cheapness.
- For robots and cars, the device is best to communicate. It helps to avoid complex connectivity.
- In engineering projects for bulk Bluetooth device, the module is common.
- In mobile phone accessories, the HC06 module is common because the mobile mostly acts as a master and device as a slave. | https://microcontrollerslab.com/hc06-bluetooth-module-pinout-arduino-interfacing-examples/ | CC-MAIN-2021-39 | refinedweb | 1,256 | 58.38 |
I want to create two arrays, one is a character array and the second is an integer array. Both are created dynamically when user provides input about the number of integers element. All the integers are delimited by spaces.
Input:
First line consists of
N
N
N
5 10 23 456 2
int_array[i], int_array[i+1]
5,10
#include <stdio.h>
#include <malloc.h>
#include <string.h>
#include <stdlib.h>
int *cstois(char *char_array, int *int_array, int n) {
int i, j;
for (i = 0, j = 0; i < n; i++) {
if (char_array[i] >= '0' && char_array[i] <= '9') {
int_array[j] == int_array[j] * 10 + (char_array[i] - '0');
} else
if (char_array[i] == ' ') {
j++;
} else
continue;
}
return int_array;
}
int main() {
int i, n;
printf("enter no. of elements");
scanf("%d\n", &n);
char *char_array;
char_array = (char*)malloc(n * sizeof(int));
fgets(char_array, sizeof(char_array), stdin);
int *int_array = (int*)calloc(n, sizeof(int));
cstois(&char_array[0], &int_array[0], n);
for (i = 0; i < n; i++)
printf("%d\n", int_array[i]);
free(char_array);
free(int_array);
return 0;
}
There are multiple problems in your code:
There is no standard header
<malloc.h>.
malloc() is defined in
<stdlib.h>.
in function
cstois(), you do not correctly update the number because of a typo:
int_array[j] == int_array[j] * 10 + (char_array[i] - '0'); should be
int_array[j] = int_array[j] * 10 + (char_array[i] - '0');
You ignore all characters that are not digits nor the space character. This is not necessarily correct:
1,2,3 would be parsed as
123 instead of being reported as an error.
Much worse even, you do not check for
'\0' when scanning the input buffer. If there are not enough numbers in the line read from the user, you scan beynd the end of the string and potentially beyond the end of the array, invoking undefined behavior.
You skip to the next element upon the space character. Separating numbers with multiple spaces would cause numbers to be skipped incorrectly.
You cannot handle negative numbers.
in function
main(), the input array is allocated as
char_array = (char*)malloc(n * sizeof(int));. This is incorrect as it limits the average number of digits to 3 for 32 bit ints... You should allocate at least 20 digits per number to allow for large integers.
The line read from the user with
fgets() is tragically short:
sizeof(char_array) is the size of the pointer, not the size allocated for the array. Save that size to a variable and pass it to both
malloc() and
fgets().
You do not check the return value of
scanf(), nor that of
fgets() or
malloc(). An empty file will not be handled correctly. | https://codedump.io/share/ZFQO0k681zja/1/how-to-convert-dynamically-created-character-array-into-integer-array-when-the-input-is-integers-delimited-with-space | CC-MAIN-2017-13 | refinedweb | 437 | 63.49 |
Alexandre,
thanks for the info. I didn't realise that jaxme is an open-source
variant of jaxb. Looks like I'll have to try it out.
I did know that there are no sun jars on ibiblio - however there are
poms for many of them, and I thought namespace.jar was one of them, or I
remember seeing it before I think. Perhaps I'm wrong.
However, it is seriously only 2 classes - I am surprised that jaxme have
not included an implementation.
And I also really like the XJC capability!
Adam
Alexandre Poitras on 13/11/05 19:38, wrote:
> Unfortunately, it is a sun jar. Refer to this guide :
><>
> /mini/guide-coping-with-sun-jar<>
> s.html <>
> I had the same problem and just switched to Apache JAXB implementation
> (Jaxme2 <>). What is great with this distribution
> is that you can use the xjc
> <>plugin during the
> generate-sources phase to generate your Java files and be
> sure they are always in sync with the changes you made to your XML Schemas.
> Hope it's help!
>
> On 11/13/05, Adam Hardy <adam.maven@cyberspaceroad.com> wrote:
>
>>I searched ibiblio for namespace.jar that is a dependency for JAXB, and
>>I can't find it anywhere. Has it got another name or is it just not there?
>>
>>It comes as part of JWSDP 1.6, package name javax.xml.namespace, class
>>in package == NamespaceContext
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@maven.apache.org
For additional commands, e-mail: users-help@maven.apache.org | http://mail-archives.apache.org/mod_mbox/maven-users/200511.mbox/%3C4377BBCC.3050201@cyberspaceroad.com%3E | CC-MAIN-2014-23 | refinedweb | 255 | 69.38 |
This forum has migrated to Microsoft Q&A. Visit Microsoft Q&A to post new questions.
I am creating signalr hub in asp.net web app
My wpf application is client for this signalr hub
I have login facility in my wpf application , I want to store this users on hub created in asp.net , so that I can send information to specific user . I want to store two properties UserName and UserToken , the list of these properties in hub , How I can send
this properties information to hub
I tried using Client.Caller but its not getting any value in connected or disconnected event on hub
any suggestions?
How I can do this?
Ramesh
Been a while since I used SignalR.... but IIRC...
You can use query string parameters, handing them over as a Dictionary<string, string>
Key and value.
And this page here
Seems to confirm my recall.
var querystringData = new Dictionary<string, string>();
querystringData.Add("contosochatversion", "1.0");
var connection = new HubConnection("", querystringData);
and
public class StockTickerHub : Hub
{
public override Task OnConnected()
{
var version = Context.QueryString["contosochatversion"];
if (version != "1.0")
{
Clients.Caller.notifyWrongVersion();
}
return base.OnConnected();
}
}
Hope that helps. | https://social.msdn.microsoft.com/Forums/en-US/a41b2f1b-ee14-4377-8c4a-80e7d00c4bdb/wpf-client-and-aspnet-web-app-signalr-hub-save-users-list-logged-in-from-wpf-client-in-signalr?forum=wpf | CC-MAIN-2022-40 | refinedweb | 193 | 60.21 |
Hi guys,
Total newbie here (no computer science background – the only ‘programming language’ I know is Excel VBA :p)..
I just downloaded Felgo yesterday, and got the installation done without any hitch.
Now I’ve launched the app and was following the “Welcome To Felgo Apps” tutorial. A couple of questions:
- I got as far as creating the app button on my Main page and going to the second page, but I can’t figure out how to add stuff on my second page. I’ve tried
import Felgo 3.0 import QtQuick 2.0 Page { title: "Detail Page" text:"asdf" }
but when I run the app and got this error after clicking the button (the page still pushes to the second page)
‘AppPlayground-Desktop_Qt_5_8_0_MinGW_32bit-Debug/qml/DetailPage.qml:8
Cannot assign to non-existent property “text” ‘
*It works alright if I drop the text:”asdf” part on the second page. I just get a blank second page*.
Can anyone tell me what I’m doing wrong? Thanks a lot, and I think it’s quite a good SDK 🙂
p.s. A couple other issues I had with the tutorial is
- First time I built my app it took maybe 3-5 mins for it to pop up. I thought I made a mistake in the installation but it seems to have fixed itself without me doing anything.
- Totally missed the fact that “id: page” needed to be inserted in the Mainpage for the AppButton to work properly. The only hint I got was afterwards in the tutorial when they say ” (don’t forget to add the id to the Page component)”. Totally missed that the first run through the tutorial :p | https://felgo.com/developers/forums/t/trouble-getting-past-the-welcome-tutorial | CC-MAIN-2019-22 | refinedweb | 282 | 71.75 |
Streamlining Mobile App Databases with Realm.io
This article was peer reviewed by Marc Towler. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
Despite most modern mobile apps requiring online access to send and receive data, at some point in app development you will need to store data locally on the device. For this you will need a local database.
Databases are generally slow at reading and writing data and Realm.io is a promising solution that can make development easy and offers a performance upgrade from SQLite on Android and Core Data on iOS.
Realm is a cross-platform mobile database released to the public in July of 2014 built from the ground up dedicated to run directly on phones, tablets and wearables. Realm is simple, fast and compatible across iOS & Android.
If you are not sure about its efficiency or why should you use it, check out its performance graphs.
Getting started with Realm on Android
You can download an example project based on this tutorial from GitHub.
Open build.gradle (app), add the dependencies needed and sync the project.
dependencies { … compile 'io.realm:realm-android:0.86.0' }
Read the Java documentation for more detailed instructions.
Data models
Realm data models are based on Java Beans principles. A data model object looks something like this.
public class Contact extends RealmObject { private String name; private String email; private String address; private int age; public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public String getAddress() { return address; } public void setAddress(String address) { this.address = address; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } }
The java class presented above will represent a
RealmObject of a person’s personal contact info and will store new contacts in memory and make queries to return those stored contacts.
Realm basic operations
As with any other database platform, the basic operations for a Realm database are known as CRUD operations (Create/Read/Update/Delete). This operations take place inside transaction blocks.
Realm supports both transactions on the main UI thread for small operations and transactions in a background thread to avoid blocking the UI thread for more complex operations.
To work with Realm, first get a Realm instance for the main UI thread.
Realm realm = Realm.getInstance(this);
Use this instance to work with a Realm database. Let’s try creating a
Contact object and write it to Realm.
Realm handles write operations wrapped inside transaction blocks to ensure safe multithreading. A write operation looks something like this:
realm.beginTransaction(); Contact contact = realm.createObject(Contact.class); contact.setName("Contact's Name"); contact.setEmail("Contact@hostname.com"); contact.setAddress("Contact's Address"); contact.setAge(20); realm.commitTransaction();
Here Realm creates a new
RealmObject based on the model class
Contact.java. I added some values to the new object and the
commitTransaction() method is called to end the transaction block. During the commit, all changes will be written to disk.
The same process can be accomplished by using the
executeTransaction() method. This method automatically handles begin/commit transaction methods.
realm.executeTransaction(new Realm.Transaction() { @Override public void execute(Realm realm) { Contact contact = realm.createObject(Contact.class); contact.setName("Contact's Name"); contact.setEmail("Contact@hostname.com"); contact.setAddress("Contact's Address"); contact.setAge(20); } });
It’s good practice with Android to handle all writes on a background thread to avoid blocking the UI thread, so Realm supports asynchronous transactions. In the previous
executeTransaction() method, just add a
Realm.Transaction.Callback() parameter to make the process asynchronous.
realm.executeTransaction(new Realm.Transaction() { @Override public void execute(Realm realm) { Contact contact = realm.createObject(Contact.class); contact.setName("Contact's Name"); contact.setEmail("Contact@hostname.com"); contact.setAddress("Contact's Address"); contact.setAge(20); } }, new Realm.Transaction.Callback() { @Override public void onSuccess() { //Contact saved } @Override public void onError(Exception e) { //Transaction is rolled-back } });
Queries
Working with queries is also simple with Realm. You don’t need to build complex queries and handle database exceptions like SQLException or CursorIndexOutOfBoundsException, the two most likely exceptions in SQLite.
//Query to retrieve all Contacts RealmQuery<Contact> query = realm.where(Contact.class); // Add query conditions: age over 18 query.greaterThan("age", 18); // Execute the query: RealmResults<Contact> result = query.findAll(); //Contacts stored in result
Or if you want it to run asynchronously:
// Execute the query asynchronously: RealmResults<Contact> result = query.findAllAsync();
Deleting data from Realm is easy.
realm.beginTransaction(); result.remove(0); realm.commitTransaction();
Remember, Realm operations are handled inside transaction blocks.
For those developers who are attached to traditional database principles, Realm offers a way to assign the
RealmObject fields with annotations, special field properties to make them more ‘database field like’.
@Required– Enforces checks to disallow null values
@Ignore– Field should not be persisted to disk
@Index– Add a search index to the field
@PrimaryKey– Uniquely identifies each record in the database.
Relationships between RealmObjects are treated like relationships between tables in SQL databases.
If you are developing with a JSON data format, Realm has integrated methods to store and retrieve JSON data.
Something extra from Realm
It’s possible to integrate Realm with other commonly used libraries for Android. Make your app development easier and check out the libraries Realm integrates with.
Conclusion
To store data in an Android application you typically have to know SQL, understand the concept of Primary-keys, Foreign-keys and how to build a query and manage table relations.
Realm simplifies this and I feel will be a milestone in mobile storage. What do you think? | https://www.sitepoint.com/streamlining-mobile-app-databases-with-realm-io/ | CC-MAIN-2018-51 | refinedweb | 943 | 50.02 |
Background
Even though the first western contact with the Apache of the American southwest was friendly (Francisco de Coronado, 1540he called them "Querechos"), by the late 16th century, they had gained their reputation through raids on the Spanish settlements and other tribes (it's thought they are partly responsible for "checking" the northern expansion of Spain and Mexico). In defense, the Spanish built a chain of forts (" presidios") throughout the area. This did not dissuade the Indians, who continued, making small and fast raids, then disappearing into the desert before the military could react in force.
Even with Mexican independence from Spain, the raids continued ( horses and cattle being prime targets). In 1848, the Mexican-American War ended with the Treaty of Guadalupe Hildago, in which the United States paid 15 million dollars for territory encompassing parts of (present day) Arizona, California, Colorado, Nevada, New Mexico, Utah, and Wyoming (Mexico also agreed to recognize the annexation of Texas). This began to bring settlement, soldiers, and others passing throughespecially due to the discovery of gold in California that year.
While the United States "held the deed" to the land, the Apache point of view was that since they had never been defeated or conquered by the Mexicans (same with the Spanish), it was still their landthe Americans having no valid claim, were considered tresspassers, if not invaders of a sort. The 1850s saw the raids concentrating on northern Mexican settlements but as the years went on and the US presence increased, confrontations between the two became more common. That led to active campaigns to "subdue" the Indians, placing the remainder on reservations (which generally had land unfit for most agriculture or livestock, insufficient supplies, overcrowdingwhich often led to disease, and were poorly managed by the "overseers," leading to many "breaking" from the reservations and the perpetuation of the raiding).
Eskiminzin and Lt. Whitman
As with most groups, not all Apache can be painted with the same "aggressive," "warlike," or "fierce" (or, in the words of General George Crook: "tigers of the human species") brushstrokes. Eskiminzin was the leader of a group of about 150 Aravaipa Apache (one of the western bands). He wanted peace for his people and safety from the pursuit of "bluecoats" who (as is usually the case in the course of Indian relations) didn't differentiate between different groups and Indians and attacked just because they were Apache.
All Eskiminzin wished for, as terms of peace, was to live on their own lands where they could grow food ("mescal"not the alcoholic drink, but the roasted leaves of the agave plant that was a staple of the diet for many Apaches). Having heard that the leader of Camp Grant (a small post where Aravaipa Creek met the San Pedro), Lieutenant Royal E. Whitman, was a fair and kind man, he approached the post in February 1871.
He explained his feelings and requested not to be sent to the reservation, stating "let us go to the Aravaipa and make a final peace and never break it." Whitman had no authority to make peace with the band, but genuinely desired to help. He asked that they surrender the their firearms and then they would be allowed to stay near the post until he could discuss the matter with superiors (in this way they would be considered technical prisoners of war, though they were not treated as such in practice).
The Apaches set up a camp a few miles away, planting corn (maize) and making mescal. Whitman was impressed by their hard work and began employing some of themmostly for cutting hay to feed the cavalry horsesand paying them money, with which they bought supplies. Even some nearby ranchers hired some as laborers. The situation, being both peaceful and mutually beneficial, encouraged other Apaches to join the village (more than one hundred, some from other bands). Meanwhile, Whitman had sent an inquiry to his superiors (which was later returned for "resubmission on proper government forms"). It was April and he was concerned because the Indians were under his watch and he was responsible for their behavior (which had not been a problem) and protection.
The Tucson expedition
That year, other Apache raids had taken place in the general area of Tucson (a true "frontier" town made up of " gamblers, saloon-keepers, traders, freighters, miners, and a few contractors who had made fortunes during the Civil War and hopeful of continuing profits with an Indian war" which was over fifty miles away from Camp Grant). People were terrorized and some killed, livestock and horses taken. The citizenry were outraged and formed a "Committee on Public Safety" for protection.
Interestingly, none of the "incidents" took place at or just outside of Tucson, only in neighboring communities, often miles away. But it didn't stop the "group" from readying itself and chasing off after the offending parties. The people of Tucson decided that the raiding Apaches were coming from the Camp Grant encampment (this, despite the distance they would have to travel and the apparent lack of any evidence to support it)Indians who were under the protection of the military, making things worse. It wasn't helped by the local newspaper that printed an editorial asking "Will the Department Commander permit the murderers to be fed by supplies purchased with the people's money?" (curious that it was also put into economic terms as well as "safety").
More raids took place. In one instance the Committee chased the party for some fifty miles, caught one Indian and declared him of the Aravaipa band. Another incident occurredthirty miles away from Camp Grant. The paper and the citizens were further outraged and determined to take care of the problem. An expedition was organized. It consisted of six Americans, forty-two Mexicans, and ninety-two Papago (Tohono O'odham) Indians. The Indians had, partly for their own protection and partly for diplomatic reasons, allied themselves with other tribes and Americans against the Apache. (It is also interesting to note that they were largely "Christianized" as a result of early, amicable relations with the Spanishsomething one wonders about given the actions taken later on...though I suppose no more so than that of the "Christianized" whites who organized and went along.)
The Massacre
The expedition arrived in the early hours of the morning before the sun came up (around 4:30). The camp was asleep. The public safety-minded group set up along the bluffs and along the creek. Then proceeded to open fire on the sleeping village. Any who tried to escape were shot at (a number did manage to get away, including Eskiminzin). In a matter of perhaps thirty minutes, nearly all the inhabitants had been slaughtered (as many as 150). Twenty-seven to twenty-nine of the children (the only captives) were taken away and sold as slaves in Mexico by the Papagos. As is sadly typical of many massacres (the Sand Creek Massacre being an extreme, but not isolated, example), the depredations were not relegated to just the murders (see below).
About 7:30, a mounted messenger brought news to Whitman that there was a large party of armed men on the way to the Apache camp (neither had any way of knowing how tragically late the news had come). Whitman immediately dispatched two men who could interpret the warning to the Aravaipa people. They were to be told to pack up and quickly move to the post for protection. The men returned within an hour to report they could find no living Indians (one woman managed to survive).
Whitman and others rushed to the scene. They were greeted not only by death, but fire and mutilation. Whitman:.
Surgeon C. B. Briesly:
.
Hoping to communicate his sadness and regret and to let the Indians know it was not his fault and concerned for those survivors who had escaped to the hills, Whitman began burying the bodies. During that, some Apache did return (briefly) to express grief that, in Whitman's words, was "too wild and terrible to be described." His count gave the dead as "an old man and a well-grown boyall the rest women and children."
Aftermath
Whitman pledged to the survivors that he would see to it that justice was served. Feeling he was to be trusted, many returned and rebuilt the village. And true (nearly) to his word, he finally managed to bring 104 members of the expedition to trial (with the help of President Ulysses S. Grant, who called the incident "purely murder"). It was claimed that they had "followed the trail of murdering Apaches straight to Aravaipa village." Countering that was post guide Oscar Hutton who testified that "I give it as my deliberate judgment that no raiding party was ever made up of Indians at this post." Giving supporting testimony were the post trader, the beef contractor, and the man who carried the mail between the Camp and Tucson.
The trial lasted for five days. Deliberation lasted nineteen minutes. No one was found guilty.
Whitman was personally and professionally damaged. He was, of course, the person who not only defended the "bloodthirsty marauding" Apache, but he tried to have the (let's use the correct terminology) vigilante-murderers convicted for their "civic-minded" actions. After being denied promotions and escaping three court-martials, Whitman resigned.
A few years after the massacre, the Indians were forced to move from their village when the army moved Camp Grant sixty miles away. It was also an excuse to place them on a reservation. After that had made an attempt to (once again) start over as peaceful farmers, but there was an uprising in which an officer was killed. Despite having nothing to do with it, the Aravaipa were blamed and Eskiminzin was chained and jailed (for what was termed a "military precaution"). He and his people escaped, and after four months of cold and starvation, had no choice but to return. Once again, their leader and his subchiefs were confined and chained together: again, a "military precaution." He was released the next summer and allowed to return to his people (though never to the peaceful, undisturbed life he had hoped).
(Sources: Dee Brown Bury My Heart at Wounded Knee: an Indian history of the American West 1970, quotes taken from there; Carl Waldman Biographical Dictionary of American Indian History to 1900 rev. ed. 2001; Atlas of the North American Indian rev. ed. 2000; Encyclopedia of Native American Tribes rev. ed. 1999;)
Log in or register to write something here or to contact authors.
Need help? accounthelp@everything2.com | https://everything2.com/title/Camp+Grant+Massacre | CC-MAIN-2018-51 | refinedweb | 1,769 | 58.62 |
Issues
Capturing an image using pygame and saving it
I had a problem with respect to the capturing of a image from a webcam and saving it. I wrote a small code to do this, but on executing the program i am just getting a black(blank) image of size(640,480).
"import pygame
import pygame.camera
from pygame.locals import *
pygame.init()
pygame.camera.init()
window = pygame.display.set_mode((640,480),0)
cam = pygame.camera.Camera(0)
cam.start()
image = cam.get_image()
pygame.image.save(window,'abc.jpg')
cam.stop()"
It also opens the pygame window but that is also blank, and it goes to the NOT RESPONDING state in a seconds time. Any solutions you could suggest me regarding the above?
Thank you Nirav(nrp).
I am working on a Image processing task.
Is there any method to control the exposure time and frame rate of the camera from the pygame package? I am asking this because when the images are captured in dark situations then the same image is copied to around 30 frames before the next frame is captured and copied for the next 30 frames. Can this be because of the Buffer involved. Is the buffer slow ??
Any Inputs ??
Which OS are you on?
I am on Windows 7.
Change the line from your code:
pygame.image.save(window,'abc.jpg')
to:
pygame.image.save(image,'abc.jpg')
You saved the window screen, and not the camera image. | https://bitbucket.org/pygame/pygame/issues/78/capturing-an-image-using-pygame-and-saving | CC-MAIN-2017-30 | refinedweb | 243 | 69.89 |
- with CyberNeko HTML Parser
NekoHTML is a library which allows you to parse HTML documents (which may not be well-formed) and treat them as XML documents (i.e. XHTML). NekoHTML automatically inserts missing closing tags and does various other things to clean up the HTML if required - just as browsers do - and then makes the result available for use by normal XML parsing techniques.
Here is an example of using NekoHTML with XmlParser to find '.html' hyperlinks on the groovy homepage:
We turned off namespace processing which lets us select nodes using '.A' with no namespace.
Here is one way to do the same example with XmlSlurper:
We didn't turn off namespace processing but do the selection using just the local name, i.e. '.name()'.
Here is the output in both cases:
Now that we have the links we could do various kinds of assertions, e.g. check the number of links, check that a particular link was always on the page, or check that there are no broken links.
Groovy and HtmlUnit
The following example tests the Google search engine using HtmlUnit:
Note that to use HtmlUnit with Groovy you should not include the xml-apis-*.jar included with HtmlUnit in your CLASSPATH.:
Alternatively, we could have written the whole test in Groovy using AntBuilder as follows:
Grails can automatically create this style of test for your generated CRUD applications, see Grails Functional Testing for more details.
1 Comment
Aaron Blondeau
For the Watij example, be sure you have either run the file "launchWatijBeanShell.bat" at least once, or manually copied the jniwrap.dll file that comes with Watij to your "%Windir%\system32\" directory. Otherwise you may get an "Could not create IEAutomation instance" error. | http://docs.codehaus.org/display/GROOVY/Testing+Web+Applications | CC-MAIN-2014-10 | refinedweb | 288 | 61.16 |
I display the clickdata of a clicked Point on a scatter map box. Is there a way to reset the clickdata when clicking somewhere on the map (but not a point)?
1 Like
Hey @konsti_papa, I am facing the same issue. Did you find a solutions? I have been looking for such things as ClickData.Timestamp or so, did not find anything
@qdumont yes i found a Solution!
Wrap the map in a div and then just an simple callback like this
@app.callback(Output(‘map’, ‘clickData’),
[Input(‘map-Container’, ‘n_clicks’)])
def reset_clickData(n_clicks):
return None
4 Likes
Perfect workaround Konsti - Thx a lot !
Can someone elaborate this a bit better?
I can’t make it work with the plot wrapped in a div.
My plot’s clickData is always none in this way.
Thanks
I created the account just to thank you!!
This solved a big problem for me!!
Thank you!!!
1 Like | https://community.plotly.com/t/reset-click-data-when-clicking-on-map/22709 | CC-MAIN-2021-49 | refinedweb | 154 | 77.03 |
22 November 2014
The RESTafarian flame wars – common disagreements over REST API design
The phrase “RESTafarian” was coined by Mike Schinkel to describe over-zealous proponents of REST. It’s a description you can readily recognise if you have ever been caught in aggressively meticulous debates about what is and is not “RESTful“.
I have some sympathy with the zealots as REST does describe a particular architectural style whose subtleties are often poorly understood. That said, as with most architectural debates one always should usually prefer pragmatism over dogma.
The bottom line is that none of the main points of contention are significant enough to make the difference between success and failure. You will not look back on the smoking crater of a failed API development and wish you’d used POST rather than PUT, neither will you regard HATEOAS as the one thing that could have saved your project.
It’s important to retain a sense of proportion as being “right” isn’t always the same thing as being “successful”. Then again, if you’re going to call it “RESTful” then you might as well try to do it properly and consider the main points of contention that seem to keep RESTafarians awake at night.
If it doesn’t leverage hypertext, then it isn’t REST
The oft-quoted Richardson maturity model implies that an API can progress along a continuum from basic RPC calls over HTTP towards full REST. The first step involves adopting resources, the second leverages HTTP verbs to manipulate resources and the final step of HATEOAS is required for the full “glory” of REST.
Maturity models can be unhelpful as they often describe a false continuum. People can gain the impression that there are different “styles” of REST or that HATEOAS is normally part of a more “mature” API. REST is a very specific architectural style rather than a menu of features. If you’re not using resources, methods and HATEOAS then it isn’t really a RESTful API.
Roy Fielding wrote the initial thesis that defined REST and is pretty explicit about this:
“If the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API.”
He does go on to hammer the point home in greater detail:
“A REST API should be entered with no prior knowledge beyond the initial URI… From that point on, all application state transitions must be driven by client selection of server-provided choice… “
It can be difficult to realise the benefits of HATEOAS as you are dependent on savvy consumers who understand the intention of your design. You can’t stop them from hacking URLs directly rather than using the resource links provided to them by the API. You can take a horse to water, but you can’t make it navigate your resources correctly.
Some API designers decide that HATEOAS is more trouble than it’s worth. That’s perfectly valid and many excellent APIs do not adopt HATEOAS. It’s just that they’re not really “RESTful” in a strict sense.
PUT vs POST
Which should you use for creating new resources? The correct answer, inevitably, is “it sort of depends“.
If you know the location of the resource then use PUT, otherwise POST is probably more appropriate. It’s also important to bear in mind that PUT should be idempotent, i.e. repeating the operation yields the same result every time.
Therefore, you could use either to create new resources. Adding a new resource without knowing what the new URI will be is a POST operation as each repeated call will yield a new resource. If the URI is known then PUT is more appropriate because successive calls will not create a new resource.
Doing PATCH “properly”
According to the HTTP specification, PUT must take the full resource representation in the request. This can be a bit cumbersome, so PATCH is increasingly being adopted for partial updates.
Given that PATCH is only a proposed standard there are details around the semantics of the method that are not widely understood. It’s not a simple replacement for POST and PUT where you supply a flat list of values to change. The request should supply a set of instructions for updating an entity and these should be applied atomically.
A “set of instructions” is very different from a “set of values” and the specification clearly states that a request should be a different content type to the resource being modified. The detail of the representation is down to you, but RFC6902 describes a JSON format for PATCH where each object represents a single operation, e.g. “add”, “copy”, “delete” and so on. Each type of operation is described in eye-watering detail in the specification.
The point here is that there is a lot more to PATCH than meets the eye. Unless you adopt a complex semantic format for describing change then you are likely to arouse the ire of the “you’re doing it wrong” brigade.
Authentication
REST doesn’t say anything about authentication, so there’s no “correct” way of doing it.
With any authentication mechanism it’s important to remember the “stateless” aspect of REST. All the information necessary for the call should be contained within the request itself. You should not be tempted to do anything that adds state into the equation, such as login mechanisms that set cookies.
You should use fixed addresses for resources
This is the HATEOAS argument in a different skin. Roy Fielding is pretty clear about this:
“A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs.”
This is rarely understood by developers who often can expend vast amounts of energy considering the “correct” format for URLs. Your integrators shouldn’t care as they are just following links provided by your resources. Well, they do if it’s “RESTful”.
REST and CRUD
REST and CRUD are not the same thing.
CRUD describes a series of common operations that can be performed on entities in a data repository. REST is an architectural style of API where operations are performed on abstract representations of resources.
That’s all there is to say, really.
Is REST part of HTTP?
Once again, we can count on Roy:
“A REST API should not be dependent on any single communication protocol”
Strictly speaking, REST is an architectural style is not necessarily bound to any protocols such as HTTP. However, in practice you will never be likely to use anything else.
Versioning
Software evolves, so you inevitably need some kind of strategy for versioning an API. REST doesn’t provide for any specific versioning but the more commonly used approaches fall into three camps: putting it on the URI, using a custom request header or a adding it to the HTTP Accept header.
All of these approaches have their faults and can be undermined quite easily. URIs should refer to a unique resource rather than versions of a resource. A custom header seems like an unnecessary repetition of the Accept header specified by HTTP, while using the Accept header itself undermines the notion of being able to identify the resource through the URI.
Clearly, you can tie yourself in knots with versioning and REST. Some people even advocate avoiding the issue altogether and adopting techniques such as tolerant readers to allow for loosely defined and flexible contracts. That involves opening a wholly different can of worms.
Finally… remember what actually matters…
The very most important thing is that you have an API that your consumers find consistent and usable. This is not necessarily the same thing as being “RESTful”. I have seen many projects flounder as architectural purity takes precedence over usability.
Many design decisions should be dictated by a clear-eyed view of what your consumers expect from an API. Will consumers benefit from an API driven by hypertext or do they just want to hack a few simple URLs? There’s nothing wrong with a good old HTTP API that borrows a few aspects of REST. Just don’t call it a “RESTful” API…
Filed under API design, Favourite posts, Rants, REST, Web services. | http://www.ben-morris.com/the-restafarian-flame-wars-common-points-of-disagreement-over-rest-api-design/ | CC-MAIN-2017-22 | refinedweb | 1,396 | 60.95 |
DEPO Delivery Service Bindings for Python
Project description
DEPO Delivery Service Bindings for Python
Python wrapper for DEPO's application interface. At the moment, it provides easy access to contracted places (input / output) and placing and canceling orders.
Detailed information of DEPO's API can be found at their website. If you feel like you need covering additional API methods, please open an issue or create a pull request.
Setup
You can install this package by using
pip:
pip install depo
If you fancy
pipenv use:
pipenv install depo
To install from source, run:
python setup.py install
For the API client to work you would need Python 2.7+ or Python 3.4+.
To install via
requirements file from your project, add the following for the moment before updating dependencies:
git+git://github.com/palosopko/depo-python.git#egg=depo
Usage
First off, you need to require the library and provide authentication information by providing your user name and password do DEPO's admin interface
import depo dibuk.api_credentials = ('email', 'password')
Getting contracted places is accomplished by calling
depo.Place.all(). The method returns a list with
depo.Place objects containing all the relevant details. Please note boolean properties
is_input and
is_output – if you are just trying to implement DEPO's service for your customers to get their orders, you will only be interested in the latter.
To place a new order you need to run
depo.Order.create() with code of a place (or
Place object), recipient's name, phone, email and order's amount, product amount and optionally an order reference. Method returns dictionary with delivery details including reference number under the
number key.
Contributing
- Check for open issues or open a new issue for a feature request or a bug.
- Fork the repository and make your changes to the master branch (or branch off of it).
- Send a pull request.
Development
Run all tests on all supported Python versions:
make test
Run the linter with:
make lint
The client library uses Black for code formatting. Code must be formatted with Black before PRs are submitted. Run the formatter with:
make fmt
Changelog
v0.2.0: 30/09/2019
Python 3 compatibility for real, code formatting is covered by Black and various small fixes to make everything better and easier including first test.
v0.1.1: 04/04/2017
Fixes creating order from place's identifiers sent as unicode.
v0.1.0: 28/03/2017
Initial version with support for getting pickup places.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/depo/ | CC-MAIN-2020-40 | refinedweb | 445 | 56.15 |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Lennart Regebro wrote: > On 7/3/06, Chris McDonough > <[EMAIL PROTECTED]> wrote: >> Actually,. > > Right. So the best and quickest is to prepare all the data in pure > disk-based python. > It's easy to do if you use the view methodology you get with Five, but > there are other ways to do it if you don't want to use views.
Advertising
My 'pushpage' package is an attempt to make this the "standard" pattern (pushpage templates don't have *any* top-level names available to them (i.e., 'context', 'container', 'request', etc.) except those passed to their '__call__' as keyword arguments (which traditional ZPT exposes under 'options'): The templates can be used from within methods of view classes, and can also be published as "pages" via ZCML with the help of a user-defined callable which computes the keyword arguments. E.g.: <pushpage:view In this example, 'somemodule.somefunction' returns the mapping which 'sometemplate' will use as its top-level namespace. The callable is passed 'context' and 'request' as arguments.qTxk+gerLs4ltQ4RArSWAJ90L7NT76aPuaIk9a7cQj222qYM6QCfRPrZ gN4Q7fIm63hA4HUoViIwdBk= =2aTX -----END PGP SIGNATURE----- _______________________________________________ Zope maillist - Zope@zope.org ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope@zope.org/msg24368.html | CC-MAIN-2016-44 | refinedweb | 197 | 63.29 |
2007 is shaping up to be the most exciting year since the community drove off the XML highway into the Web services swamp half a decade ago. XQuery, Atom, Atom Publishing Protocol (APP), XProc, and GRDDL are all promising new power. Some slightly older technologies like XForms and XSLT are having new life breathed into them. 2007 will be a very good year to work with XML.
XQuery's been a "next year" technology for at least four or five years now, but in 2007 it finally arrives. First, the finished set of XQuery 1.0, XPath 2.0, and XSLT 2.0 specifications will reach full recommendation status, and sooner rather than later. In fact, it did so the day I was doing the final edits to this piece. :-)
Furthermore, the update part of XQuery is marching forward rapidly. It probably won't be finished this year, but it's already solid enough to be implemented, as long as users don't mind modifying their code a tad with each new draft. The situation will only improve throughout the year.
Moreover, 2007 should see betas of
javax.xml.xquery.
This is a standard API for connecting Java™ programs to XQuery engines and databases.
Think of it as Java Database Connectivity (JDBC) for XQuery. It enables you to mix XQuery into your Java code.
This will become a standard part of the Java class library with the release of Java 7 in 2008.
Finally, native XML databases are hitting the market in force, and users are starting to take notice. On the low end, eXist and Sleepycat's (now Oracle's) dbXML are looking better and better. Hybrid solutions like Oracle Database 10g Release 2 and IBM® DB2 9 PureXML will drive XQuery adoption among their existing customers who need to mix some documents with their traditional tables. Pure XML databases like Mark Logic will continue to convert big publishers that can afford the cost of entry.. By the end of the year, look for at least one each of Wiki, Content Management System, and blog engine to sit on top of eXist or another XQuery database. On the down side, I predict that at least one and probably all three of these products will turn XQuery injection from a theoretical problem into a practical one.
Office documents will be another driver for XML database adoption. A huge amount of corporate and noncorporate data isn't stored in XML at all. It's tied up in Microsoft® Word, Excel®, and PowerPoint® files, often on individual computers. Now that these programs are saving to native XML, it becomes possible to store such documents in centrally managed XML databases.
If I had to choose one big story for next year, it would be the Atom Publishing Protocol (APP). APP started out as a standard way to post blog entries, but it's turning into much, much more. APP and Atom stand ready to do for Web authoring what the Hypertext Transfer Protocol (HTTP) and Hypertext Markup Language (HTML) did for Web browsing. Tim Berners-Lee always meant the Web to be a read-write medium, but it didn't work out that way. Only the publishing/reading half of the system has been in place for the last 15 years. Writing happened using severely limited HTML forms or non-HTTP methods like File Transfer Protocol (FTP).
APP defines a standard means of publishing new content that all servers can implement. Independent software vendors can write their own authoring tools that talk to APP services on the different servers. You'll finally be able to use full-blown editors like Word or Emacs to write Web content, rather than the limited tools you find in a browser. Uploading content can become as simple as saving a file on the local hard drive is today..
If I'm right, and APP takes off, then this will have a couple of important consequences. First, APP will be a nice example that shows people how to design new systems RESTfully. Second, it will force a lot of naive firewalls and proxy servers to be reconfigured to allow PUT and DELETE to pass through, along with POST and GET. This should help eliminate the need to tunnel everything through POST, and make other RESTful apps a lot more plausible.
Another problem besides broken proxies that has held back full adoption of REST is the inability of a browser form to allow any methods other than GET and POST. This too will change, and APP is the major use case driving that change. XForms and Web Forms 2.0 are also scheduled to get a REST upgrade by allowing PUT and DELETE as browser actions in addition to the current GET and POST. Once implemented by browser vendors and learned by the Web developer community, these methods will improve Web security. However, this takes time, and the full impact isn't likely to come about until 2008 at the earliest.
I don't see a big future for the WHAT Working Group's Web Forms 2.0. It's got some nice bells and whistles, but it doesn't change anything. It's a modest cosmetic improvement -- nothing revolutionary. Some Web developers will begin adopting it, but most will ignore it, just as Microsoft has. Many features can be supplied inside Windows® Internet Explorer® through JavaScript; but these are the same features that are available today through JavaScript. They'll just be a little more standard. Web Forms 2.0 doesn't bring anything new to the table, and it doesn't radically alter how you think about or design your forms.
XForms, by contrast, is ready to take off. The spec's been finished for years, and the implementations are finally beginning to catch up. Despite limited browser support, look for increasing adoption in intranet solutions. Why will XForms succeed when Web Forms 2 won't? Because XForms goes much further. Unlike Web Forms, it changes the architecture of the house, not just the color of the paint. Developing an XForms-enabled application is different than developing a classic HTML form-based application. The simple fact is that HTML forms were always a hack: a quick and dirty solution for simple problems. They were never intended to bear the weight developers have loaded on top of them, and they've been creaking for years. They aren't an adequate foundation to replace desktop applications and usher in Web 2.0. There's a good reason so many Web 2.0 applications are more JavaScript than HTML: You can't do what you need to do with HTML forms.
XForms goes back to the drawing board and redefines the architecture. There are now separate models, views, and controllers. With XForms, Web applications start to look like clean programs designed by professionals, not hacks thrown together by graphic artists who took a one-semester course in Basic in early high school.
Of course, the most benefit will accrue to the people writing the more complex applications. Not everyone needs the full power of XForms. Simple contact forms, mailing-list subscriptions, online polls, one-click purchases, and the like are adequately served by classic HTML forms. However, more complex forms for multipage checkouts, blog management, firewall administration, and so forth will benefit greatly from XForms. The more client-side JavaScript and server-side PHP that you're using to manage your forms today, the more likely that you'll benefit from XForms tomorrow. Declarative markup will replace much brittle procedural code.
The World Wide Web Consortium (W3C) has produced a large number of standards and technologies for working with XML: namespaces, Infoset, XInclude, XSLT, schemas, canonical XML, and more. What they haven't done is specify how all these pieces fit together. In particular, they haven't specified details like whether you should do schema validation before or after XInclude resolution.
This isn't an oversight. For example, the W3C wanted to allow XInclude resolution to happen before or after schema validation. Sometimes you want it before; sometimes you want it after. The question then becomes how you organize the different possible chains of processing. This is where XProc enters the picture.
XProc is an XML format that specifies what to do to an XML document in which order. XProc defines a pipeline of operations. The input to each step in the pipeline is an XML document or documents, and the output from a nonterminal step is also an XML document. (Terminal steps sometimes generate non-XML.) Steps can specify validation, XInclusion, transformation, custom processing using the Simple API for XML (SAX), and more. An XProc processor can read an XProc document and then apply the specified steps in the specified order. This makes writing document-processing applications much simpler. For example, complicated XSLT transformations can sometimes be broken into two simpler pieces that are applied in sequence. XProc lets you glue the pieces together without writing a custom driver for your application.
Developers have been asking for this functionality for a long time. 2007 should see it delivered..
If publishers can't be relied on to provide metadata, then where can you get it from? How about from the data itself? Gleaning Resource Descriptions from Dialects of Languages (GRDDL) is the first Semantic Web technology that does away with the notion of publishers generating their own metadata. GRDDL consults XSLT stylesheets supplied by third parties to scrape the metadata off Web pages. The output from these stylesheets are Resource Description Framework (RDF) triples that you can process with the underutilized RDF toolset. Different stylesheets can be applied to different sites as needed. Indeed, different consumers can use different stylesheets that provide the information they find of most value.
It's a clever plan, and it seems like it might work (which is more than I can say for most of the Semantic Web). But this is the last chance. If GRDDL can't make the Semantic Web happen, nothing can.
OpenDocument smacked Microsoft upside the head in 2006. Look for Microsoft and its Office Open XML format to continue losing ground in 2007. Office Open XML isn't a reasonable file format: It's packed with over a decade's worth of legacy crud, and it's unimplementable and untestable. It's a paper standard, nothing more. Smart, competent governments will recognize this and make OpenDocument their standard, possibly along with PDF or HTML.
Many businesses and other organizations will continue to choose Microsoft Office because they have a legacy investment in it, or just because that's what they think businesses buy. Office Open XML will be a genuine boon to these users. It might not be the best choice, but it's a better one than they have now. However, many secretaries, salespeople, and CEOs who don't know what word processor they use, much less what format they save their data in, will be quietly upgraded to OpenOffice by their tech staff.
Microsoft will unintentionally help this migration. Software audits, digital restrictions management, and onerous activation schemes embedded in new versions of Office and Windows will push many businesses off the Microsoft platform for the first time in decades. For 2007, many of these organizations will stay with older versions of Microsoft products; but eventually they'll crossgrade to OpenOffice and its XML formats.
Most important, however, I predict zero adoptions of Office Open XML by any other office program. Its use will be limited to the Microsoft Office ecosystem, where it will ease development for vendors of small plug-ins, office utilities, and add-ons. However, independent products that need a native format for their own word processor, spreadsheet, presentation, and so forth will pick OpenDocument instead. (One exception: Drawing programs will choose Scalable Vector Graphics [SVG].)
2007 will be the first year in which almost every significant browser fully supports XSLT 1.0. It will finally become possible to publish real XML directly on the Web without prerendering it to HTML first. Although this won't be common practice by any stretch of the imagination, I predict that at least one major site (and quite a few minor ones) will begin doing this. I also predict that nobody will notice, because it will all just work.
Longer term (probably after 2007), I predict that this will render many of the debates about HTML 5 and XHTML 2 moot. Sites will publish content in whatever XML vocabulary they like, and provide stylesheets that convert it to HTML for browser display. Changing your document format will no longer require a years-long process of W3C working groups and slow browser adoption.
Put a fork in it; they're done
WS-* (pronounced WS-splat) has peaked. Even a derailed train has a lot of momentum, so people will still be talking about Web services in 2007. However, nobody will be listening.. There is a limit to how much complexity any organization can manage, and WS-* has long since exceeded that threshold.
Instead, look for the emergence of Plain Old XML (POX). People will start (in many cases, continue) sending XML documents over HTTP. Authentication, caching, and reliable delivery will be managed using the built-in functionality of HTTP. Applications will be hooked together with a mixture of XSLT, XProc, duct tape, and spackle. Developers will give up on the idea of assembling services and systems without manual configuration (a dream most of them never bought into in the first place). Service-oriented architecture (SOA) might have a role to play, if any of the architecture astronauts pushing it come down to earth long enough to explain what they're doing.
The browser wars continue
The Mozilla project will release Firefox 3. Firefox will finally pass the Acid2 test for CSS2 compliance, leaving Internet Explorer as the last major browser to fail it. However, Firefox won't yet add native support for XForms. That will have to wait until next year.
Apple will release Safari 3 along with Leopard. Although it will focus mostly on Apple proprietary extensions, Safari 3 will add support for Scalable Vector Graphics (SVG) for the first time.
Internet Explorer 7 will make Web developers' jobs easier, but not easier enough. Internet Explorer 8 won't be out in time to make a difference. Internet Explorer will continue to lose market share to both Firefox and Safari. Its market share will drop below 70% by the end of the year and will be below 50% in at least one, probably several, Western European countries.
XHTML2, HTML 5, Web Forms 2.0, and CSS3 won't be finished in 2007. Although pieces of HTML 5 and Web Forms 2.0 will be implemented by browsers with small market shares, most innovation in Web applications will continue to come from Ajax and server-side frameworks.
XML backlash and the counterrevolution
In 2007, alternative, non-XML formats will continue to gain ground among developers with simple problems in constrained environments. In particular, Web programmers will still be enamored of JavaScript Serialized Object Notation (JSON). However, early adopters moving on to more complex problems will begin to realize they're reinventing large parts of XML. I also predict at least one major security breach as a direct result of passing JSON data to the
eval() function.
The W3C will release a draft binary encoding of the XML infoset. However, the promised performance, speed, and battery life gains won't materialize outside of contrived benchmarks. The vendors who really can't live with XML won't be able to stomach binary XML either. They will end up designing their own formats. Eventually, everyone else will give up on the pipe dream of binary XML and learn to love text (but not by the end of 2007).
2007 is shaping up to be a very interesting year for XML. XQuery is finally ready for production, and APP is ready to break out. If I were looking to invest money or time in XML in 2007, these are the technologies I'd focus on.
XForms is on a slower, more linear growth curve. However, it's definitely coming up in the world. The same is true for browsers, which have become mature, reliable technology. There'll be new billion-dollar businesses on the Web in 2007, and most of them will use XML. However, they'll be driven by the content and the ideas for the Web applications, not by the underlying technology.
The real story for 2007 will be the continued migration to open, accessible data formats. Whether the customers' documents are stored in a file system or an XML database is a secondary question. The key is that it will be XML, and document owners will be able to process it and manage it with their preferred tools. The days when software vendors could lock up users' data with nary a peep from their customers are over..
Learn
- Don't Let Architecture Astronauts Scare You (Joel Spolsky, April 2001): If "architecture astronauts" is a new phrase to you, read Joel's diagnosis.
- An introduction to XQuery (Howard Katz, developerWorks, January 2006): Look at the W3C's proposed standard for an XML query language, including background history, a road map into the documentation, and a snapshot of the current state of the specification.
- Atom Publishing Protocol (APP): Read draft 12 of the APP, an application-level protocol for publishing and editing Web resources that is based on HTTP transport of Atom-formatted representations.
- Why XForms? (Elliotte Rusty Harold, developerWorks, October 2006): Discover which problems XForms are intended to solve, including internationalization, accessibility, and device independence
- Is Open XML a one way specification for most people? (Bob Sutor's Open Blog, October 2006): See why Bob thinks the Office Open XML specification is a one-way spec for most people.
- XML 2006 trip report (Elliotte Rusty Harold, developerWorks, December 2006): In this summary of the XML 2006 conference, find that a few topics stand out, including XQuery, native XML databases, the Atom Publishing Protocol, Web 2.0, and the extraction of implicit metadata from data.
- XML in 2006 (Elliotte Rusty Harold, developerWorks, January 2007): Look back at XML with the author's review of notable happenings in the world of XML this past
- eXist: Experiment with XQuery using the native XML database.
- IBM trial software: Build your next development project with trial software available for download directly from developerWorks.
Discuss
- Participate in the discussion forum.
-?
- Atom and RSS forum: Find tips, tricks, and answers about Atom, RSS, or other syndication topics in this forum.
-. | http://www.ibm.com/developerworks/xml/library/x-xml2007predictions/index.html | crawl-003 | refinedweb | 3,103 | 63.49 |
Notes: This is thanks to Andreass, Judson and Billy ^^
Objective: The finality of this is to show a way to interact with a Web Service created in By Design Studio and how to consume this web service with an external client. This will be with an object that gets two numbers, then adds them together, and multiply together.
Steps:
- Create the BO
- Create the script to this BO
- Create the screens
- Modify the screens (if necessary)
- Create the Web Service of this BO
- Agregate the new view and assign to a Work Center.
- Define the authorizations in ByDesign.
- Download WSDL
- Test with an external client.
PROCEDURE
1. Create the BO, this has two elements. In Solution Explorer right clic > Agregate > New element > Business Object with name “AddMult”
Code for BO AddMult:
import AP.Common.GDT as apCommonGDT;
businessobject AddMult {
[AlternativeKey]element OperationID: ID;
element Value1 : IntegerValue;
element Value2 :IntegerValue;
element Result1 :IntegerValue;
element Result2 :IntegerValue;
}
After creation, you must to activate the BO.
2.Create script, for this, right click to the BO and Create ScriptFiles and select “Event-AfterModify.absl”
Use this code:
import ABSL;
this.Result1 = this.Value1 + this.Value2;
this.Result2 = this.Value1 * this.Value2;
Activate the script (right click and Activate)
3. Create the screens for this BO, right click to the BO and select Create Screens > Floorplan with Navigation > OK.
This will create the screens and the work center that we will need.
4. You can modify the screens, in this case QA Screen (Quick Activity), OWL Screen (Object Work List), the elements are in disorder.
a. For QA select “SectionGroup” and in Properties Tab > Contents > Collection, with the arrows you can give the adecuate order, then OK.
b.For OWL select “List” in Properties Tab > Child Elements > ListColumns, then if asks if you want to replace the name, select No, then with the arrows you can modify the order, then OK.
c. Save the screens and Activate.
d. Assign the authorizations for this user in ByDesign.
1. You need to enter in your corresponding tenant
2. In “Application and User Management” Work Center search for the user will have the authorizations.
3. Select the new work center and its view.
e. Then you can do operations in the new Work Center “AddMult”
1. You need to write the OperationID, Value1, Value2 and Save, the script will activate and write the corresponding resuts
2. You can see the results in ByDesign
5. Create the Web Service from the BO created. Right click to the BO > Create Web Service
a. Give a name for this web service
b. Select the elements would be involved in this web service (in this case, all of them)
c. For the option “Create”, only mark the OperationID, Value1, Value2.
d. For the option “Read”, mark all of them, so you can see all the fields.
e. Query and Action not necessary for this example.
f. Create a new authorized view you will use to add it a to a Work Center, give it a name. (Maybe you can use an existing view)
g. Then you can select the operations for this view.
h. The field “External UI Application” is empty in wsauth.
i. Open the Work Center in UI Designer (is that called WCF).
j. In Configuration Explorer you use the created view for this web service, add it to the view.
(use Test View, there would be two)
k. Now you can see the view has the name of the Web Service View (maybe you need close and re-open the WCF).
l. If there is no view where to assign, you can add it in Properties of Work Center > View Switches (Selecting AddMult Folder).
m. Activate Web Service in Solution Explorer.
n. Again in By Design you need to assign the view for this User (4.d from this Tutorial).
o. Download WSDL (Right click to the created web service and Download WSDL)
6. Use the download WSDL to tests.
a. Open SOAP UI
b. New soapUI Project
c. Browse the downloaded WSDL
d. Now with XML you can do the corresponding calls.
Use Authentication Type: Preemptive.
Try with the user and password you assigned in By Design
e. You can do the calls
Read (need to use a valid ID, this case, only created operations)
Create(use all the fields, OperationID, Value1, Value2)
f. You can see the change in ByDesign
That’s all, any doubts, tell me =)
Angel.
Great Tutorial… I had considered writing the same one…. Too many projects…
Thanks
Nice job!
I done a step by step in accordance with the your method , but finally display the
following error:
“Tue Oct 09 23:53:08 CST 2012:INFO:my*****5.sapbydesign.com:443 requires authentication with the realm ‘SAP NetWeaver Application Server [H7M/007][alias]'”.
Why? How can i fix it ?
My mail adress is: eric.peng@avatech.com.cn
Hi Eric,
I have tried to create a web service with Byd too.
I get the same issue in visual studio 2010.
“Tue Oct 09 23:53:08 CST 2012:INFO:my*****5.sapbydesign.com:443 requires authentication with the realm ‘SAP NetWeaver Application Server ][alias]'”.
Did you solve it ?
Many thanks and best regards
With this issue, I would say that you need to check three things:
1. Step 5.h -> In ByD Studio -> External UI should not be empty…
2. In the ByDesign page, the user should have assigned the view for the WoC. Step 4.d
3. The WoC (Step 5.k) should have the view assigned too.
After that, you should right click the solution and ” Update Authorizations and Access Rights”, and then download again the WSDL …. =)
Hi Angel,
Thanks for your help! My problem has been solved !
Many thanks and best regards
Hi Angel,
Great ! “Update Authorizations and Access Rights” was the solution.
Thks
Im glad that you could solve this n_n, best regards.
I tried to use soapui for consuming the service “querycustomerin1” but i have this error message :
<faultstring xml:lang=”en”>Web service processing error; more details in the web service error log on provider side (UTC timestamp 20130117151106; Transaction ID 00163E0340311EE2989703BE827D06D1)</faultstring>
And in BYD i have : Combination of interval boundary type code and upper and lower boundaries in not allowed.
Thanks for your help.
Hi Mohamend, can you paste the message you’re sending to the WS? this in order to look where you have the error
Hi Mohamend,
I hope this is useful to you!
Step1:Login corresponding BYD system.
Step2: In the Application and User Management ->Input and Output Management->Business Communication Monitoring view, show Message-Independent Errors.
Step3:Click Go button.You can see detailed error messages in the Details label about your web service.
Regards
Eric
Hello,
@ Arnulfo this my code in SoapUi of the web service Querycustomerin1:
<soapenv:Envelope xmlns:soapenv=”” xmlns:glob=”“>
<soapenv:Header/>
<soapenv:Body>
<glob:CustomerByIdentificationQuery_sync>
<!–Optional:–>
<CustomerSelectionByIdentification>
<!–Zero or more repetitions:–>
<SelectionByInternalID>
<!–Optional:–>
<InclusionExclusionCode>I</InclusionExclusionCode>
<!–Optional:–>
<IntervalBoundaryTypeCode>1</IntervalBoundaryTypeCode>
<!–Optional:–>
<UpperBoundaryInternalID>Z0001</UpperBoundaryInternalID>
</SelectionByInternalID>
</CustomerSelectionByIdentification>
<!–Zero or more repetitions:–>
</glob:CustomerByIdentificationQuery_sync>
</soapenv:Body>
</soapenv:Envelope>
In the right i have as result :
>
In the http log i have after lignes processing this result :
Fri Jan 18 09:03:11 GMT 2013:DEBUG:<< “HTTP/1.1 500 Internal Server Error[\r][\n]”
Fri Jan 18 09:03:11 GMT 2013:DEBUG:<< “set-cookie: sap-usercontext=sap-client=085; path=/[\r][\n]”
Fri Jan 18 09:03:11 GMT 2013:DEBUG:<< “content-type: text/xml; charset=utf-8[\r][\n]”
Fri Jan 18 09:03:11 GMT 2013:DEBUG:<< “content-length: 436[\r][\n]”
Fri Jan 18 09:03:11 GMT 2013:DEBUG:<< “accept: text/xml[\r][\n]”
Fri Jan 18 09:03:11 GMT 2013:DEBUG:<< “sap-srt_id: 20130118/090311/v1.00_final_6.40/00163E0340311ED298A9BC4969E5B553[\r][\n]”
Fri Jan 18 09:13:54 GMT 2013:DEBUG:>> “Content-Length: 903[\r][\n]”
Fri Jan 18 09:03:11 GMT 2013:DEBUG:<< “>”
@ wenlei in BYD i have this msg error :
Exception raised by Web Service application : Combination of interval boundary type code and upper and lower boundaries in not allowed.
Thanks for your help
Hi Mohamend,
If there is only one condition , then it can only Lower.
<UpperBoundaryInternalID>Z0001</UpperBoundaryInternalID>
=>
<LowerBoundaryInternalID>Z0001</LowerBoundaryInternalID>
I hope this is useful to you!
i solved the problem before i see you reponse hh.
But i don’t know why it not works with upper and only lower.
Thanks
Great!!
Best regards
Hi, I’m new to ByDesign development. I’ve tried to create a web service to access extended saleorder field that I’ve created. But I can not consume the service from an external client. I get the following error : Authorization role missing for service ServiceInterface …” . Can anyone please help me solve that issue.
Thank,
Hi Carles, well, I think this error is because you have no authorization to use the WS, when you create the WS you need to create a view or use an existed one to be able of give to your user access to it, then you just need use your user and pass to consume this web service without get that error, you can do it in a web services test tool, I use Soap UI, there you must do something like this:
I really hope this helps.
Regards.
Hi Arnulfo Díaz Ruiz
Thank you so much for your help.
I’ve created the view, but I can’t see it in ByDesign “Application and User managment” where can I give access to my web service to the user ?
Thank you
Well, I think your problem is that, you need to create a WCF Screen for your BO, and then asign your WCView to your new WCF, once you do this you just have to save and activate and you must be able to see your view in the Application and User Managment
I’m using SAP salesOrder BO extention. And when I “enhance screen” the WCF view does not show. If I add the WCF from “Add New Item” contextual menu on the project, I can’t find that WCF in the “Application and managment”. So I do not understand.
Any Ideas ?
Hi Arnulfo Díaz Ruiz,
Your solution work for me. I just
create a view for my web service, and give my test user access to that view from the “Application and Managmenent work center”, then I consume my web service from Visual Studio C# console application. it work fine.
thank again.
Hi Angel,
We have followed all steps – but when calling the Create operation of the service from SoapUI – nothing happens on ByD side.
We get an answer in SoapUI that operation was successful. Reading the same instance – returns an error that instance does not exist and the instance does not appear in ByD WorkCenter.That everything was fine .
Any idea? Thank you very much for your help.
Regards,
Helen
Hi,
How can I change the response messages language?
Regards | https://blogs.sap.com/2012/08/01/how-to-consume-a-byd-web-service-with-an-external-client/ | CC-MAIN-2017-34 | refinedweb | 1,828 | 64.91 |
populate Combo Box dynamically
populate Combo Box dynamically Hi,
How to populate the Combo Box dynamically using Flex with Jsp's
combo box
combo box Hi,
[_|] dropdown box
[ ] [INCLUDE... a screen like this using jsp-servlet(or DAO,DTO),in that drop down box i should get
Flex ComboBox controls
is
any type of event, that show how to access the value of combo box in flex... to access the value of Combo Box.
For Example:-
<?xml version="1.0...{
//Alert.show("Combo Box
is "+event.type);
}
private
function
combo box - JSP-Servlet
combo box how to get a combo box in jsp page which allows editing as well as list box
Flex Combo Box example
Flex Combo Box example
... control inside
your flex file. In the example you will learn to build two ComboBox
controls. The Example below shows combo boxes with nicely formatted string
combo box code problem
combo box code problem in this my problem related to :
when i select state MP then i wil open the its corresponding city but in database it only stores the option value no like MP at option value 10 then it will stores the 10
Sub combo box problem - Development process
Sub combo box problem Hi, In this code Sub-Combo box is not working. plz solve the problem
Search Page
var arr = new...
X
coding |
Compiling
MXML files with FlexBuilder |
Flex Combo Box
|
Flex Combo Box
selecteditem |
Flex
Check Box control |
Flex wipe
behavior |
Flex Alert Box
| Flex
Validator |
Flex Tab Navigator
Flex combobox selecteditem
the flex combo box
control. Below example contains a flex combo box control with id value
combo. This
flex control contains eleven items and with selectedItem... is included in the combo box item list.
Syntax for using the property
Flex Tutorials
inside the flex combo box
control. Below example contains a flex combo box... Combo Box example
In this tutorial page you will be taught to utilize...
Combo Box selecteditem
In this tutorial page you will learn how to utilize
doubt in combobox in flex - XML
access to combo box item name as string.
As per your query i coded an example... in flex which is a combination of mxml and actioscript . In my project i has....
Wishing for
how can retrive value from combo box in servlet?
how can retrive value from combo box in servlet? i have a jsp page with combobox. And i want to get value from combox to servlet
Flex Examples
using Flex Builder.
Flex Combo Box example : In
this section you will learn about how to use combo box in your flex
application. As well as you will also learn about combo box controlling.
Flex Combo Box selectedItem
Flex Combobox
/flex-combo-box.shtml...Flex Combobox flex combobox with database as dataprovider
...;
<mx:ComboBox
<mx:dataProvider>
Flex Combobox
Flex Combobox flex combobox with database as dataprovider
You can visit the following link for detailed tutorial on the topic. May this will be helpful to you.
Alert Box in Flex
Flex Alert Box Components
Adobe Flex provides many types of components, in this tutorial we will study visual component.
1. Alert Box: In the present tutorial we will see how to display simple text using Alert Box. An Alert Box
combo program - Java Beginners
combo program I want to give DOB in combo box for that i have to give 1 to 30 for days and 1900 to 2010. Without giving all, using for loop i have to get the values. Please help me..
Hi Friend,
Please
Flex ComboBox Component
Adobe Flex Combo Box Component:
The ComboBox component of Flex...;0x00FFFF'];
]]>
</mx:Script>
<mx:Panel x="0" y="0" width="344" height="158" title="Combo Box add a DropDown List in Flex DataGrid
asking for. But combo box is editable, I do not want the user to edit but just...How to add a DropDown List in Flex DataGrid hi
I am trying to add...'}
]);
]]>
</fx:Script>
<s:DataGrid id="myDG" x="85" y="57" width="393
Flex event
Flex event Hi.....
How many events are fired when your focus goes in one text box, you enter some text and then press tab?
Please give the name of all events which is used in this.....
Thanks
Flex Alert Box example
Flex Alert Box example
Alert box is a dialog box that appears on window with some... Box is
also referred to as pop-up window.
Among these two words add a DropDown List in Flex DataGrid
asking for. But combo box is editable, I do not want the user to edit but just...How to add a DropDown List in Flex DataGrid hi
I am trying to add a DropDownList in a DataGrid table. After the user selects one of the items from
How to design Form Layout in Flex Using Container
. In the combo box you can use several tags to integrate several items in one field. The combo box combines the items in a group that can be arranged in an array which... properties.
Changes in Combo Box
<mx:FormItem label="State
JCombo Box problem
JCombo Box problem I have three combo boxes
First combo box display the year
Second combo box display the month for the selected year.
Third combo box display number of week in a selected month for the year.
I am select year
SubCombo box not working - Development process
SubCombo box not working Hi, in the following code subcombo box is not working and while storing combo box numbers(1,2,3) are stored instead of values.
Search Page
var arr = new Array();
arr[0] = new
Navigation with Combo box and Java Script
Navigation with Combo box and Java
Script
... in Navigation with Combo box?
JavaScript is a 2-level combo box menu script... selection box. The navigation that requires absolutely no DHTML or Javascript
NumericStepper in Flex
NumericStepper
We can use NumericStepper control of Flex to select a number from a set. This
Control looks like a list box which contains a text box... indicates the up arrow is to display the values in the text
box by one unit
Flex Cascading Style Sheets example
on flex Button and Combo box controls.
cascade.mxml...
Flex Cascading Style Sheets example
... selector, here class selector is
used with name Alexander.
Like other flex
Flex DateChooser
Flex DateChooser Controls:-
DataChooserv is a flex controls and that controls provide the simple
process to use calendar in flex. DateChooser.... It is provide a calendar
used in the flex application. User can use
Box Layout Container
on
the direction specified by the direction attribute of Box container. By default...; to the direction
attribute.
But VBox (Vertical Box) and HBox (Horizontal Box) containers...
Box Layout Container
Flex Container
Flex Container
Every flex application is composed of so many visual components. Every component resides in any container. Flex provides several containers to lay child
Box Container in Flex4
Box container in Flex4:
The Box container is a MX component. There is
no Spark Component. The Box container is used for both horizontal and vertical... and
vertical layout.
The tag of Box Container is <mx:Box>.
In this example you can
flex
Effect in flex
;
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:mx="library://ns.adobe.com/flex/mx"
minWidth="955" minHeight="600"...='file:/C:/work/bikrant/image/roseindia.gif')"
x="125" y
dhtmlxCombo - Ajax Autocomplete Combobox
combo box with Ajax support. It is easy to configure. It also have autocomplete...;
Four modes used by DhtmlxCombo are as below:
1. Editable select box.... Read-only select box: Select value from the list of choices.
3. Filter: List
Calling Flex Function From JavaScript
;
}
.style4 {
border-style: solid;
border-width: 1px;
}
Calling Flex Function From JavaScript
Sometimes, you may need to call flex function from JavaScript to
pass value in the flex application. For this you need to register flex
PHP List Box Post
The PHP Post List box is used in the form. It contains multiple value
User can select one or more values from the PHP list Box
This PHP Post List Box is the alternate of Combo box
PHP Post List Box Example
<?php
Java Message Box
field or combo box.
showMessageDialog : Display a message with one button...Java Message Box
In this tutorials you will learn about how to create Message Box in Java.
Java API provide a class to create a message Box in java
combo boxes
combo boxes how do you store numbers in the choices. ex. soda is $.75 and tax is $.07 then it equals a new total. I've been trying to use the Itemlistener but i can't figure out how to store the numbers
import javax.swing.
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/81682 | CC-MAIN-2015-32 | refinedweb | 1,466 | 64.91 |
nullpointerexception error
Theresa Marlin
Ranch Hand
Joined: Sep 23, 2009
Posts: 49
posted
Dec 21, 2009 17:22:29
0
I had to write a program that created a pantry which was an array of three jams. When I tried to print out my array, I got a nullpointerexception. What is a nullpointer exception? How can they be fixed?
Tester:
class PantryTester { public static void main ( String[] args ) { Jam goose = new Jam( "Gooseberry", "7/4/86", 12 ); Jam apple = new Jam( "Crab Apple", "9/30/99", 8 ); Jam rhub = new Jam( "Rhubarb", "10/31/99", 3 ); Pantry hubbard = new Pantry( goose, apple, rhub ); hubbard.print(); hubbard.select(1); hubbard.spread(2); hubbard.print(); hubbard.select(3); hubbard.spread(4); hubbard.print(); } }
Class 1:
class Pantry { private Jam jar1 ; private Jam jar2 ; private Jam jar3 ; private Jam selected ; Pantry( Jam jar1, Jam jar2, Jam jar3 ) { Jam [] pantry1 = new Jam[3]; pantry1[0] = jar1; pantry1[1] = jar2; pantry1[2] = jar3; } public void print() { Jam [] pantry1 = new Jam[3]; for(int j = 0; j < pantry1.length; j++) { pantry1[j].print(); } } public boolean select( int jarNumber ) { selected = null; if ( jarNumber == 1 ) selected = jar1 ; else if ( jarNumber == 2 ) selected = jar2 ; else selected = jar3 ; if (selected == null) return false; else return true; } // spread the selected jam public void spread( int oz ) { selected.spread( oz ) ; } public void replace( Jam j, int slot ) { Jam currentSelection; if (slot == 1) currentSelection = jar1; if (slot == 2) currentSelection = jar2; else currentSelection = jar3; currentSelection = j; if (currentSelection == jar1) jar1 = currentSelection; if (currentSelection == jar2) jar2 = currentSelection; else jar3 = currentSelection; } public void mixedFruit() { if(jar1.getCapacity() <= 2 && jar2.getCapacity() <= 2 && jar3.getCapacity() <=2) { Jam mix = new Jam("Mixed Fruit", "7/4/86", (jar1.getCapacity() + jar2.getCapacity() + jar3.getCapacity())); jar1 = mix; jar2 = null; jar3 = null; } } }
Class 2
import java.io.*; class Jam { // Instance Variables String contents ; // type of fruit in the jar String date ; // date of canning int capacity ; // amount of jam in the jar // Constructors Jam( String contents, String date, int size ) { this.contents = contents; this.date = date; capacity = size; } // Methods public boolean empty () { return ( capacity== 0 ) ; } public void print () { System.out.println(contents + " " + date + " " + capacity + " fl. oz."); } public void spread ( int fluidOz) { if ( !empty() ) { if ( fluidOz <= capacity ) { System.out.println("Spreading " + fluidOz + " fluid ounces of " + contents ); capacity = capacity - fluidOz ; } else { System.out.println("Spreading " + capacity + " fluid ounces of " + contents ); capacity = 0 ; System.out.println("No jam in the Jar!"); } System.out.println(""); } else System.out.println("No jam in the Jar!"); } public int getCapacity() { return capacity; } }
Thank you for your help!!!
Sebastian Janisch
Ranch Hand
Joined: Feb 23, 2009
Posts: 1183
posted
Dec 21, 2009 17:39:06
0
hi.
first, please always post the exact exception so it's easier to find the piece of code that causes trouble.
I went through it real quick and think that I found the spot.
public void print() { Jam [] pantry1 = new Jam[3]; for(int j = 0; j < pantry1.length; j++) { pantry1[j].print(); } }
What you do is create a Jam array as a local variable but don't put any values in it. Hence the value of each index is null which means it has not gotten a reference yet.
Upon invoking a method on a variable that is null, you'll get a big fat nullpointer exception.
What you want to do is make your Jam array an instance variable and pass the values in the constructor.
JDBCSupport
-
An easy to use, light-weight JDBC framework
-
Geoff Prewett
Greenhorn
Joined: Dec 22, 2009
Posts: 2
posted
Dec 22, 2009 14:34:37
0
Sebastian's answer looks like the problem, to me, too. Also, I noticed you're assigning things to the 'patry1' local variable in the Pantry constructor. Since you don't do anything with them, I assume that you probably meant them to persist, in which case you would want to assign them to a member variable. Something like:
class Pantry { private Jam[] items; public Pantry(Jam jar1, Jam jar2, Jam jar3) { items[0] = jar1; items[1] = jar2; items[2] = jar3; } public void print() { for (Jam j : items) j.print(); } }
(the foreach loop iterates over every item in the array. You probably do not want "for (int i = 0; i < 3; i++)" since if you ever change the length of the array your code will break. "for (int i = 0; i < items.length; ++i)" is better, but the foreach loop is more concise.)
The
NullPointerException
name is a bit confusing, because what most CS people call "pointers",
Java
usually calls "references". Basically if you try to do anything will a null reference besides checking to see if it's null, you'll get a
NullPointerException
. Look at the output window and you'll see a stack trace that points you to the exact line that causes the problem.
Learn Java:
It is sorta covered in the
JavaRanch Style Guide
.
subject: nullpointerexception error
Similar Threads
jtips ques
How to Compare contents of Jar files?
importing classes into java programs
Invoke a Class from a Particular JAR file
abstract compareTo() errors
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/475814/java/java/nullpointerexception-error | CC-MAIN-2015-14 | refinedweb | 854 | 63.7 |
How to: Run Unit Tests on UML Extensions
To help keep your code stable through successive changes, we recommend that you write unit tests and perform them as part of a regular build process. For more information, see Verifying Code by Using Unit Tests. To set up tests for Visual Studio modeling extensions, you need some key pieces of information. In summary:
Setting up a Unit Test for VSIX Extensions
Run tests with the VS IDE host adapter. Either prefix each test method with [HostType("VS IDE")], or set VS IDE in the Hosts option of the Local.testsettings file. This host adapter starts Visual Studio when your tests run.
Accessing DTE and ModelStore
Typically, you will have to open a model and its diagrams and access the IModelStore in the test initialization.
You can cast EnvDTE.ProjectItem to and from IDiagramContext.
Performing Changes in the UI Thread
Tests that make changes to the model store must be performed in the UI thread. You can use Microsoft.VSSDK.Tools.VsIdeTesting.UIThreadInvoker for solution. For more information, see How to: Define a Menu Command on a Modeling Diagram, How to: Define a Drop and Double-Click Handler on a Modeling Diagram, or How to: Define Validation Constraints for UML Models.
Make sure that when you build the solution, the assembly (.dll file) is copied to the build output folder. In many VSIX projects, the assembly is not copied because the .vsix file is all that is required for installation. To make sure that the assembly is copied,.
Right-click the method for which you want to generate a unit test, and then click Create Unit Tests.
In the Create Tests dialog box, you can select additional methods for which you want to create tests.
Set the Host Type of the tests to VS IDE. To do this, follow these steps:
Open Solution Items\Local.testsettings, click Hosts and then select VS IDE;
- or -
Prefix the attribute [HostType("VS IDE")] to each test method.
For more information, see How to run tests in the Visual Studio process.
Add the following assembly references to your test
Create a Visual Studio solution that contains a modeling project. You will use this as the initial state of your tests. It should be a separate solution from the extension and test code.
Write a method to open a modeling project in Visual Studio. Typically, you want to open a solution only once in each test run. To run the method only once, prefix the method with the [AssemblyInitialize] attribute.:
// private IDiagram diagram; // This class contains unit tests: [TestClass] public class MyTestClass { // Map filenames to open diagram files: private static Dictionary<string, IDiagram> diagrams = new Dictionary<string, IDiagram>(); // This method will be called once for this test class: [ClassInitialize]((MethodInvoker] public void Test2() { UIThreadInvoker.Invoke((MethodInvoker)delegate() { // Pass context items to class under test: Class1 item1 = new Class1(this.linkedUndoContext); item1.Method1(); // Can use linkedUndoContext }); } }: | https://msdn.microsoft.com/en-us/library/gg985355(v=vs.100).aspx | CC-MAIN-2017-47 | refinedweb | 488 | 65.52 |
Content Server 4 help(Brandon_Holly) Oct 10, 2008 11:26 AM
I've managed to get all three of the status checks to give me all green check marks, but I really have no idea where to go next to even use the program. I haven't been able to log in to the admin console either. I had to contact support to get the build.xml file needed to build the UploadTest-1_1.jar file that the user guide said was included, but wasn't. I don't know how to work that either. The command that support gave me just results in errors. I know very little about Java.
The user guide seems to give a good explanation of how the program works, but in my opinion doesn't explain how to actually use the program (some of the setup was a bit confusing as well). I don't know where to start to make it do what it is supposed to do.
If anyone has any hints for me I would greatly appreciate them. Thanks!
MySQL 5.0.67
MySQL Connector Java 5.1.6
Apache Tomcat 6.0.18
JDK 6u3
Windows XP sp3
1. Re: Content Server 4 help(Andrew_Arnold) Oct 16, 2008 8:48 AM (in response to (Brandon_Holly))I totally relate to your problems since they are all exactly the same ones I have encountered and continue to encounter. Installation was much more challenging than it need to be and after I was done I was left thinking, "How do I use this?"
Your assessment of the user's guide is kind. I find it badly organized and outright misleading. The specific references to deliverables (the conf files and the uploader jar) which are absolutely NOT delivered in the current bundle are an outrageous time-waster. Why not just take two seconds and bundle them into the distributed zip?
Key parts of the installation are broken up into distant sections of the guide, like the license info that needs to be included in the conf files, which are "covered" much earlier in the document. The status checks will absolutely fail if you only follow instructions up to the point they tell you to run them.
You couldn't log into the admin console? No surprise. That's because the default password, park345, is not supplied in the user manual or anywhere else. I had to ask. That was another lost 24 hours. Frankly it doesn't matter, though, because there will be nothing in the admin console until you manage to get some books in there. You thought maybe you could add PDFs that way? Me too. It would make sense wouldn't it? Sorry, but no.
The jar file that Adobe refers to, but does not supply, supposedly allows you to do that, though I have not gotten it to work.
The syntax for it, since Adobe does not bother to provide any full, real world examples, is:
java -jar UploadTest-1_1.jar /localpath/your.pdf.
You can pass flags to indicate the presence of things like an XML file that controls the permissions or a thumbnail, e.g.
java -jar UploadTest-1_1.jar /localpath/raw.pdf -xml -jpg
You don't tell it the name of the XML or jpg. You are supposed to use identical filenames and locations as the PDF or Epub.
Even though I figured all that out I still get an error when I run the thing:
There was an error with the Package Request
It means nothing to me and the logs don't help. Maybe someone reading this can kindly tell me what to do since the manual leaves me nowhere and I am still waiting for email support on this issue.
2. Re: Content Server 4 help(Brandon_Holly) Oct 16, 2008 10:17 AM (in response to (Brandon_Holly))I also got to that point with the ...MISSING_ELEMENT error, and got an email from support with the solution. You are missing the admin operator password from the command...
java -jar UploadTest-1_1.jar /localpath/raw.pdf -pass yourpassword -xml -jpg
This is the same password that gets you in to the admin console. Mine now has some books in it! Now I just need to figure out how to get the download URL built. I got the sample store PHP stuff running with my sample book list, but none of my download links want to work. I get a "E_LIC_WRONG_OPERATOR_KEY" error. I don't know anything about PHP so I don't know if it's the sample store not working with my system or if I have something set up wrong.
Also, I had an issue with getting an error when logging in on the operatorClient.jar after trying to create a second certificate. Support told me that the did find a bug, and emailed me a new version of operatorClient.jar.
After reading through the user guide several more times...you are right. It isn't as good and I had originally thought.
3. Re: Content Server 4 help(Andrew_Arnold) Oct 16, 2008 12:07 PM (in response to (Brandon_Holly))Wow. It actually worked. Thanks for the tip.
So not only do they NOT supply a crucial piece of helper software that gets a large amount of attention in the manual, they actually neglect to include a critical part of that software's instructions. There is no mention of the -pass flag at all. I'm not even surprised anymore.
And couldn't they have done just a little bit of error catching so that it says, "Needs password" or some such?
So, on to the store portion of the system. I am sure I will have as little luck as you but if I make any progress I will post it here.
4. Re: Content Server 4 help(Andrew_Arnold) Oct 20, 2008 8:44 AM (in response to (Brandon_Holly))Follow Up: As I predicted the store did not function and I got EXACTLY the same error message as you.
Guess why. Because the instructions were wrong, again.
I finally heard back from support about this problem. There is a line in the fulfillment config file that should look like this:
com.adobe.adept.serviceURL=
Instead the user's guide (section 4.5.5.) shows it this way:
com.adobe.adept.serviceURL =
Note the spaces between the equals sign. These are not allowed. Section 4.5.6 has an identical mistyped line for the packaging server, though this had no apparent effect. I wonder if the line is even necessary for the packaging server. You have to wonder why the status check worked, don't you?
Once I corrected this line I had a fully functioning ACS4 system.
5. Re: Content Server 4 help(Brandon_Holly) Oct 20, 2008 10:59 AM (in response to (Brandon_Holly))Thank you! That seems to have mine working now as well.
Now to get it integrated into our system and set up to go live. Hopefully it's easier the second time around. :)
6. Re: Content Server 4 help(Brandon_Holly) Oct 21, 2008 8:53 AM (in response to (Brandon_Holly))I came up with another question regarding the UploadTest deal. The books sent through this tool go into the Content Server under the Operator Inventory. I had to move the books into a Distributor's invetory in order to serve them. Is there a flag for the UploadTest that will specify a Distributor, or do they need to be moved by hand? Having to pick through the All Items for new files would be a pain.
7. Re: Content Server 4 help(Brandon_Holly) Oct 22, 2008 10:50 AM (in response to (Brandon_Holly))Apparently the whole GBLink generating the download link thing in the user guide is something that ACS3 did. ACS4 doesn't do that (even though the user guide says it does)...we have to "study and copy the PHP code in the store to provide this functionality".
I've been trying to do this with C#.NET. My signature-creating functions seem to be OK (they passed my tests), but my links always give me the error...
error data="E_ADEPT_URL_SIGNATURE_ERR GBLINK_AUTH_SIGNATURE_NOMATCH"/
...so there is something not right in my string somewhere.
I found another issue when switching from MySQL to MS SQL Server. When the user guide says you can use 'MS SQL Server', it really means 'MS SQL Server 2005 or higher'. One of the tables has a column that is type varbinary(max), which SQL Server 2000 can't do.
8. Re: Content Server 4 help(karnesh) Oct 23, 2008 8:18 AM (in response to (Brandon_Holly))Hi,<br /><br />I like Brandon, have also been tasked with setting up a test environment for acs4. Using the user manual as a guide, I have run into numerous obstacles and am having difficulties in getting this up and running.<br /><br />I have amended the config.xml page for the sample store so that it contains details got from the Admin Console e.g. the id & shared secret. when i click on download in the sample store, i too get the following error message:<br /> <error xmlns="" data="E_ADEPT_URL_SIGNATURE_ERR GBLINK_AUTH_SIGNATURE_NOMATCH" /> <br /><br />Can I ask you both......<br />1. How you got the UploadTest-1_1.zip working? <br />2. Where do you run the command: java -jar UploadTest-1_1.jar /localpath/raw.pdf -pass yourpassword -xml -jpg , is it in cygwin tool? <br />3. How do you get a list of books/items displaying in the Admin Console?<br /><br />Any help would be great.<br /><br />thanks,<br /><br />karnesh
9. Re: Content Server 4 help(Brandon_Holly) Oct 23, 2008 9:52 AM (in response to (Brandon_Holly))For UploadTest-1_1, I first had to get a file from support called build.xml, then download some program from Apache called Ant which used build.xml to actually build the UploadTest-1_1.jar file. It also builds a couple other .jar files that I haven't really checked out yet.
The command can be run from the command prompt as you have it above, with all of the proper details of course.
NOTE: Make sure you go through the ACS4 configuration files and remove any spaces around the = signs, as Andrew pointed out in a previous post.
At this point if you have everything set up right, the UploadTest-1_1.jar command should load the books into ACS4 and you will be able to see them in the Admin console.
Here you must also create a new Distributor in the admin console, and move the books to that distributor's inventory in order to serve them. (I'm still looking for a more efficient way to do this as books are being uploaded.)
Once you have an inventory for a Distributor, you can export a books.xml and config.xml file from within the console (the "Export Sample Site" button). Replace the two files with these names in the sample store with the new ones you exported. If all goes well you should now be able to download the books you 'buy' from the sample store.
My problem with the GBLINK_AUTH_SIGNATURE_NOMATCH error was because I wasn't decoding the shared secret in my program, which the sample store does, I just missed that part the first time I looked through it. The shared secret is Base64-encoded, and must be decoded before it can be used in the hmac-sha1 encryption.
Some of this stuff is very new to me so hopefully I am at least making a little sense and not missing some critical detail. :)
10. Re: Content Server 4 help(karnesh) Oct 24, 2008 4:41 AM (in response to (Brandon_Holly))Hi Brandon,
Thanks for the information, I have emailed Adobe for the build.xml file so that I can make use of the UploadTest-1_1.zip tool and finally get the sample store working.
It took us a while also to indentify that spaces in config files caused problems.
Again thanks for updating this forum, is proving to be a valuable source of information for us here in the UK.
11. Re: Content Server 4 help(Brandon_Holly) Oct 27, 2008 8:51 AM (in response to (Brandon_Holly))I think I found an inconsistency in the DistribTool.java file.
Towards the beginning it is talking about the XML file that this tool needs and says...
*
- serverURL = Contains the full URL to the fulfillment server. The
* server URL must point to the ManageDistributionRights API.
* ()
*
This wouldn't work for me...I kept getting an error. I looked just a little farther down in the DistribTool.java file and found the example that contains the line...
* <serverURL></serverURL>
So, it appears that the ManageDistributionRights API is part of the admin webapp, NOT fulfillment. Unless I was reading that wrong. But it works fine for me now using the 2nd example.
On a site note, I did discover that the UploadTool.java file DOES mention the -pass flag that the user guide failed to mention.
12. Re: Content Server 4 help(Kemet) Oct 30, 2008 2:02 AM (in response to (Brandon_Holly))well guys thanks for the useful information .. but anyone try to re-code GBLink generation in C# i need some help with that
13. Re: Content Server 4 help(Steve_Gibbings) Nov 3, 2008 4:16 AM (in response to (Brandon_Holly))Brandon,
Would you be willing to post or make your .NET fulfilment link code available?
You say "apparently" using GBLink was the old ACS3 way of generating links (and I'd hate to use an ActiveX) but where did that information come from? I'm at the distributor end of things and it's hard to get technical details on best practices, just the "most common" architectre in the documentation. However nowhere can I find details of alternative approaches.
Thanks,
Steve.
14. Re: Content Server 4 help(Brandon_Holly) Nov 3, 2008 8:23 AM (in response to (Brandon_Holly))This is how I understand it. The user guide says that there are certain values in the download link that are generated by the Adobe GBLink service. I am new to Content Server so I don't know how ACS3 works. What I wasn't clear about was that the user guide meant that the ACS3 GBLink did this, but ACS4 does not. It does recognize the use of the same format just so existing systems can be upgraded to ACS4 and still work. Support told me that I had to study the sample store php code to figure out how to do it myself.
I whipped up a quick little C# program to figure out what data I needed and what I needed to do with that data to generate the link just so I could test our server. Most of the link is pretty straightforward, it was coming up with the proper signature to attach to the end of the link that was a little tricky for me. PHP did this with ease in the sample store, but C# needed a little more to get it right.
I picked apart one of the links generated by the sample store, Googled a couple of the PHP functions used, and Googled how to do the whole HMAC-SHA1 thing and I eventually got it working. Just remember that the Distributor's shared secret is Base64-encoded. Once I realized I needed to decode that then everything was better.
15. Re: Content Server 4 help(Steve_Gibbings) Nov 3, 2008 9:23 AM (in response to (Brandon_Holly))You don't fancy sharing your code then? No pressure if you don't want to (I'd check it all anyway) but it would certainly short cut things for us here.
In any event thanks for the reply.
Steve
16. Re: Content Server 4 help(Yasser_Salama) Nov 4, 2008 12:11 AM (in response to (Brandon_Holly))Steve,
I'm using .NET too to generate the GBLink... but it's not quite easy as in PHP code.. anyway I've no luck to make it work but anyway I'm pasting my code maybe it would help you to start implementing yours..
first this is the function that decode the shared secret of the distributor...I've notice that there's extra char of the result of the decoding in .NET code than the result generated in PHP code so i made an extra code to remove this char.. you can remove this part to test it.. i've tested both ways but no luck :(
public static););
// This part removes extra encoded char from the result those extra char not containing
// in the result of the shared secret decoded in the PHP code. you can remove this part
// and return "result" instead of return "x" and test if ot's works with you
StringBuilder x = new StringBuilder(result);
int removeCount = 0;
for (int i = 0; i < result.ToCharArray().Length;i++ )
{
if (Convert.ToInt32(result.ToCharArray()[i]) == 65533)
{
x.Remove(i-removeCount, 1);
removeCount++;
}
}
return x.ToString();
}
catch (Exception e)
{
throw new Exception("Error in base64Decode" + e.Message);
}
}
other two functions that signs the query string of the GBLink "all the stuff after the '?' mark except the '&auth=' itself" with the decoded key generated by the first function, the other function below convert the hashing result to HEX STRING LOWERCASE format as required:
public static string HashingHmacSha1(string message, string key)
{
System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
byte[] keyByte = encoding.GetBytes(key);
HMACSHA1 hmacsha1 = new HMACSHA1(keyByte);
byte[] messageBytes = encoding.GetBytes(message);
byte[] hashmessage = hmacsha1.ComputeHash(messageBytes);
return ByteToString(hashmessage);
}
public static string ByteToString(byte[] buff)
{
string sbinary = "";
for (int i = 0; i < buff.Length; i++)
{
sbinary += buff[i].ToString("X2"); // hex format
}
return (sbinary.ToLower());
}
good luck & have a nice day
17. Re: Content Server 4 help(Steve_Gibbings) Nov 4, 2008 1:12 AM (in response to (Brandon_Holly))Thanks Yasser, exactly what problems and errors are you receiving. It might help debug. I'll reply with whatever I discover and a working solution (assuming I get there!).
Steve.
18. Re: Content Server 4 help(Yasser_Salama) Nov 4, 2008 1:40 AM (in response to (Brandon_Holly))Steve,
I followed the PHP code and came with same decoded shared secret and the problem is when using that decoded result to hash the query string it gives me different signature than the PHP code gives me so the fulfillment service always return with signature no match error...
in other way to figure out wheres my problem i've passed simple string as a key and the same query string for both signing functions (PHP & .NET) and give me the same signature so i think the problem in the decoding function.. hope this could help you
Yasser
19. Re: Content Server 4 help(Brandon_Holly) Nov 4, 2008 6:47 AM (in response to (Brandon_Holly))To decode the shared secret I just used...
byte[] sharedSecretBytes = Convert.FromBase64String(sharedSecret);
...then passed sharedSecretBytes into my hashing function instead of a string. I think I was passing a string and then getting the bytes in the function originally and it wouldn't work for me that way.
I also used ASCIIEncoding instead of UTF8Encoding.
20. Re: Content Server 4 help(Yasser_Salama) Nov 4, 2008 11:26 PM (in response to (Brandon_Holly))Thanks alot Brandon you've saved my day :) it's works this way
Yasser
21. Re: Content Server 4 help(karnesh) Nov 21, 2008 4:03 AM (in response to (Brandon_Holly))Hi,
Has anyone run into issues with uploading large epub files using the uploadtest tool?
thanks,
karnesh
22. Re: Content Server 4 help(robert_x) Jan 8, 2009 4:06 AM (in response to (Brandon_Holly))hi <br /><br />I am trying to set adobe content server on my pc to try it , so when i try to pack a pdf using uploadtest-1_1.jar ,I get the following error : Java.io.FileNotFoundException :<br /> at sim/reflect.NativeconstructorAccessorImpl.newInstance0<native Method>...<br /><br />Any help would be great. <br /><br />thanks, <br /><br />Robert
23. Re: Content Server 4 help(karnesh) Jan 14, 2009 5:55 AM (in response to (Brandon_Holly))Hi Robert,
Can you tell what command you are running?
The help guide is not very clear. The command i would run to upload a single book is:
java -jar -Xmx1024M UploadTest-1_1.jar c:/books/abook.epub -pass myACS4password
Or for many books in a folder
java -Xmx1024M -jar UploadTest-1_1.jar c:/myBookFolder -pass myACS4password
Hope this helps. If you contact Adobe support and ask them to send you some updated documentation. They released a quick start guide which has more info on this.
thanks,
karnesh
24. Re: Content Server 4 help(robert_x) Jan 14, 2009 6:40 AM (in response to (Brandon_Holly))Dear karnesh , thanks alot for ur response<br /><br />I followed what is written in the quick start guid<br /><br />here is the command that i use:<br />C:\>java -Xmx1024M -jar UploadTest-1_1.jar c:\\srcbooks\lepetitprince.pdf -pass myACS4password <br /><br />the result is:<br /><br />Creating package request for: c:\\srcbooks\lepetitprince.pdf<br />Creating connection to Packaging Server:<br /><br />Sending Package Request<br />java.io.FileNotFoundException:<br /> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)<br /><br /> at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)<br /><br /> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)<br /> at java.lang.reflect.Constructor.newInstance(Unknown Source)<br /> at sun.net.(Unknown Source)<br /> at java.security.AccessController.doPrivileged(Native Method)<br /> at sun.net.(Unknown Source)<br /> at sun.net.(Unknown Source)<br /> at com.adobe.adept.upload.UploadTest.sendContent(Unknown Source)<br /> at com.adobe.adept.upload.UploadTest.<init>(Unknown Source)<br /> at com.adobe.adept.upload.UploadTest.main(Unknown Source)<br />Caused by: java.io.FileNotFoundException:<br /> at sun.net.(Unknown Source)<br /> at java.net.HttpURLConnection.getResponseCode(Unknown Source)<br /> at com.adobe.adept.upload.UploadTest.sendContent(Unknown Source)<br /> ... 2 more<br /><br />Finished!<br />Successful packages created: 0<br />Unsuccessful package attempts:1<br />Here are the files that failed to package:<br /><br />-------------------------------------<br />I am totaly new to java <br />should i have a folder "package" in the packaging folder on apache tomcat server where i deployed the packaging.war?<br />the return all true<br />I use windows xp pro with sp3 ? do I need windows server for asc4?<br /><br />Thanks in advanced<br /><br />Robert
25. Re: Content Server 4 help(karnesh) Jan 14, 2009 9:23 AM (in response to (Brandon_Holly))Hi Robert,
I created a folder called "Package" in C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\packaging\ , then placed the UploadTest-1_1.jar into the folder. If you have to also create this, remember to restart the apache service.
Also i noticed that you have a double \\ in c:\\srcbooks\lepetitprince.pdf .
thanks,
karnesh
26. Re: Content Server 4 help(adobedevuser) Mar 3, 2009 4:08 PM (in response to (Brandon_Holly))When I run this I get the following error: Unable to access jarfile UploadTest-1.2.jar
C:\>java -Xmx1024M -jar UploadTest-1.2.jar
package c:\acs4\srcbooks\MonteCristo.epub -pass park123
I tried creating a package folder under packaging and I stopped and restarted my service but same message.
27. Re: Content Server 4 help(robert_x) Mar 3, 2009 11:34 PM (in response to (Brandon_Holly))does the uploadTest-1.2.jar exist on c:\ ?
28. Re: Content Server 4 help(adobedevuser) Mar 4, 2009 10:15 AM (in response to (Brandon_Holly))I had two mistakes. One it didn't exist on the C drive. And the second one was that uploadTest-1_2.jar was spelled incorrectly. -
29. Re: Content Server 4 help(robert_x) Mar 12, 2009 4:27 AM (in response to (Brandon_Holly))Dear all,
I am a litle bit confused about the display permissions on ebooks
what does "allowed on any device" means ? does that means ,unlimited downloads of the ebook using same fulfillment link?
and what "allowed on a single device" means?
and adobe mentioned that the user can download the ebook on up to 6 devices (on what permission, will that work?
Regards
30. Re: Content Server 4 help(Agyemang) Mar 29, 2009 7:52 AM (in response to (Brandon_Holly))I get an error with uploadTest-1.2.jar. It says it unable to read file. Contact administrator
Regards,
Agyemang Fellowes
Director of Operations
OPTiC BURST COMMUNICATIONS
Office: 1 (209) 885-4211
31. Re: Content Server 4 helpFrancesco Boccaletti Jul 2, 2009 2:13 AM (in response to (Agyemang))
Hi!
32. Re: Content Server 4 helpJim Lester Jul 4, 2009 7:37 AM (in response to Francesco Boccaletti).
33. Re: Content Server 4 helpiwishubien Aug 1, 2009 2:33 PM (in response to (Brandon_Holly))
34. Re: Content Server 4 helpDavid Adone Aug 5, 2009 7:50 AM (in response to (Brandon_Holly)) ?
35. Re: Content Server 4 helpJim Lester Aug 10, 2009 7:41 AM (in response to (Brandon_Holly)).
36. Re: Content Server 4 helprarunachalam May 18, 2010 11:40 AM (in response to Jim Lester)
Has anyone done this in java, I have decoded the shared secret and still get GBLINK_AUTH_SIGNATURE_NOMATCH, appreciate your help.
thanks
revathi | https://forums.adobe.com/thread/312868 | CC-MAIN-2017-17 | refinedweb | 4,208 | 65.62 |
New Relic's Java agent provides several options for custom instrumentation. One of those options is adding the Java agent API's
@trace annotations to your application code. This document describes how to use annotations..
To detect custom traces:
- Make sure that
newrelic-api.jarappears in your classpath.
Add
@Traceannotations to your code. In each class containing a method you want to instrument, call:
import com.newrelic.api.agent.Trace;
- Place the
@Traceannotation on each target method.
The
annotation com.newrelic.api.agent.Trace is located in the newrelic-api.jar.:
@Trace protected.
dispatcher
If
true, the agent starts a transaction when it reaches this
@Traceannotation, even if a transaction is not already in progress. If a transaction is in progress, then that transaction will continue.
If
false(default), no metrics will be recorded if the agent has not started a transaction before the
@Traceannotation is reached. For example:
@Trace(dispatcher=true)
async
If
true, this method is marked as asynchronous and the agent will trace this method if it linked to an existing transaction. For example:
@Trace(async=true)
If
false(default), the method is not marked as asynchronous. If other
@Traceannotations are present and the method is not executing asynchronously, it will still be traced.
metricName
This property affects transaction traces and error reporting. By default, the metric name will include the class name followed by the method name. If you do not want class followed by method, then you can use this property to change the metric name.
If you set the
metricNamein addition to the dispatcher, as in
@Trace(metricName="YourMessageHere", dispatcher=true), then the time spent in this method will appear as YourMessageHere in any transaction trace.
This will not set the overall transaction name as you see it in the APM Transactions page. To set the transaction name, see NewRelic.setTransactionName. Here is an example:
@Trace(metricName="YourMetricName", dispatcher=true)
Do not use brackets
[suffix]at the end of your transaction name. New Relic automatically strips brackets from the name. Instead, use parentheses
(suffix)or other symbols if needed.
skipTransactionTrace
If
true, the agent will not collect a trace for this transaction. If the method with this annotation is reached in the middle of a transaction, then the transaction trace will still be captured, but this method will be excluded from the call chart.
This property overrides a dispatcher property of
true. The agent still reports rollup metrics for the method, so you will continue to see data in the APM Transactions page. Here is an example:
@Trace(skipTransactionTrace=true)
excludeFromTransactionTrace
If
true, the method will be excluded from the call chart. The agent will still collect transaction traces and individual metrics for the method. Here is an example:
@Trace(excludeFromTransactionTrace=true)
leaf
A leaf tracer has no child tracers. This is useful when you want all time attributed to the tracer, even if other trace points are encountered the tracer's execution.
Database tracers often act as a leaf so that all time is attributed to database activity, even if instrumented external calls are made. Here is an example:
@Trace(leaf=true)
If a leaf tracer does not participate in transaction traces, the agent can create a tracer with lower overhead. Here is an example:
@Trace(excludeFromTransactionTrace=true, leaf=true)
More API functions
For more about the Java agent API and its functionality, see the Java agent API introduction. | https://docs.newrelic.com/docs/agents/java-agent/custom-instrumentation/java-instrumentation-annotation | CC-MAIN-2017-34 | refinedweb | 566 | 56.66 |
CodePlexProject Hosting for Open Source Software
What's a good approach to complete map/level xml serialization?
I currently have two problems with world serialization:
firstly my farseer bodies belong to various actors in the level which also need to be serialized as does their relationship to the farseer bodies, as their body references will obviously be invalid on deserialize
secondly I am using userdata as a reference to said actors so that its easy to tell which fixtures and bodies belong to what actor. As userdata gets serialized, I'm effectively serializing actors with physics bodies many times over, depending on how many bodies
they deal with.
My only idea at this stage is to modify farseer serialization to allow individual bodies to be serialized on their own and then serialize them when their actors are serialized, but this still leaves joints unhandled.
Has anyone dealt with a similar scenario? I have a feeling that there's a much better way to do this.
The World serialization outputs the body userData with no issues that I have found.
Reading it in has been a bit of a problem. Right or wrong, here is what I did.
The object that I save in the body.userdata has a key that uniquely identifies this object. Since this object is my custom class and is not part of the Farseer assembly, ReadSimpleType can not resolve custom class. So I modified
ReadSimpleType in the serialization class to raise an event and I deserialize it in my game code and store it in a Dictionary using the object’s Key. ReadSimpleType returns the Key so what ever is calling it can save this
key in the usserdata.
Once the world has been deserialized, I just iterate through the body list and swap out the keys for the real object from the Dictionary.
Also, I found that deserialization was a bit slow on the phone. I converted that aspect to use Linq. Was planning to share at some point.
What are the keys and values for this dictionary?
Here is a simple sample of the dictionary and data class that I use.
public Dictionary<int, EntityData> entityDataList;
public class EntityData {
public
int key { get; set; }
public
float drawDepth { get;
set; }
public Body body {
get; set; }
public Vector2 origin {
get; set; }
// need texture name to hook up the texture after deserialization
public
string textureName { get;
set; }
// texture will not be serialized
public Texture2D texture {
get; set; }
}
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://farseerphysics.codeplex.com/discussions/256812 | CC-MAIN-2017-22 | refinedweb | 446 | 61.36 |
Exchange Migration Planning: What will happen with clients when changing namespace?
Hello, We are planning to upgrade our EX 2010 environment to EX 2013. Sadly our namespace in EX2010 is right now owa2.domain.com. The name owa2 is coming from last migration between EX 2003 and EX 2010. I'd like to remove this minor flaw if possible. But what will happen with all the clients and...
Removing Exchange Server 2007 from the Active Directory
Hello - our organization had an Exchange Server 2007 that got wiped. We have a new Exchange Server 2007 up and it is running fine however the active directory still thinks the old 2007 server is around and so the new Exchange Server 2007 has it showing up in the server configuration panel in the...
Self-signed Cert about to Expire
Hi, Just realized the self-signed cert is about to expire within a week. Is there any way I can renew my self-signed cert without removing the existing one and losing any uptime of Exchange 2k3? I have an internal CA, but just not sure the exact procedure on how I can renew the cert. Any advice is...
What is the market value/salary for a Exchange system administrator and MCSA on messaging?
What is the market value for a system administrator in Exchange and how much salary a MCSA on messaging can get in Delhi NCR?
Move an Exchange 5.5 on Windows Server 2000 to a different server that has Windows Server 2003 standard edition
How would you move an Exchange 5.5 on Windows Server 2000 to a different server that has Windows Server 2003 standard edition?
Our exchange server 2003 crashed and has been rebuilt from start.
Our exchange server 2003 crashed and has been rebuilt from start. Email works ok but I need to allow users to access email via internet explorer from home. It did work till the server crash and I'm stuck on how to configure Exchange.
Need to install Service packs on active/passive Exchange cluster
Need to install server 2003 SP2 and Exchange 2003 SP2 on my active/passive back end cluster, is there anything I need to worry about? Figure I'll update server SP2 first then Exchange SP2. On front end servers first then backend cluster. Sound right?
Migrating from Exchange 5.5 to Exchange 2003
Moving a few users at a time. How to I forward mail from Exchange 2003 to Exchange 5.5 for users that have yet to be migrated? Would appreciate any help.
Outlook 2003 loosing connectivity from Small Business Server Exchange 2003
Hi all, This is a problem that's been plaguing my network for quite some time now. This is the first time I thought I'd ask for help. We have a Small Business Server 2003 based domain with 20 PC's running Windows 2000 and Windows XP. Originally all the PC's were running Office 2000, which didn't...
Removing an Exchange 2003 admininistrative group
While I have removed the first server in an Exchange 2003 administrative group, I have never removed an entire administrative group which only has one server. Do I first need to manually delete the routing groups on servers in other admininistrative sites that use this server as a remote bridgehead...
You do not have sufficient permission to perform this operation on this object
All, I have an exchange 5.5 site, and the users at that particular site recieve this error when attempting to use the Address Book to find recipients in thier site. When selecting the TO button and using the drop down arrow to select the show names from the selection, they select thier site, and....
Replacing Exchange Server 2003 Cluster Nodes
How do I replace an individual node in an Exchange Server 2003 cluster?
Need to change my Outlook Web Access password but get error.
We have just added an exchange 2003 front end server and an Exchange 2003 Server back end. Everything seems to be working fine but now when users try to access their emails using Outlook Web Access. They will frequently see http/1.1 503 Service Unavailable in different parts ofte web page Pressing...
(0x8004010F): The operation failed. An object could not be found.”
Some of our users who use Outlook 2003 have the same problem. Every couple of minutes, they get this message: "Task 'Microsoft Exchange Server' reported error (0x8004010F): The operation failed. An object could not be found." E-mail does come in and go out without any problems, but this error keeps...
Hello, I have a question about Outlook. All of our users who use Outlook versions newer than 2000 (XP and 2003) all have the same problem. They get this message "Task 'Microsoft Exchange Server' reported error (0x8004010F) : 'The operation failed. An object could not be found" every couple of...
2003 server R2 additional DC to master DC
Hi, I am using 2003 server SP2 with DC, AD and exchange 5.5 insatlled. Now I have a new server to replace the oldDC. I have created different machine name to joined the oldDC domain and transferred AD users and exchange mail boxes to my new machine. The problem is I cannot demote the oldDC server....
How can I configure Exchange 2003 for NDR message
How can I configure Exchange,for users who no longer works with my company, so that those who send emails to them can get a NDR with information in it informing them who to contact instead?
Exchange Server 2003 running on a physical Server want to add an Exchange cluster for failover
I have a MS Exchange Server 2003 running on a physical Server and would like to add an Exchange cluster server for failover. Is that possible? | http://itknowledgeexchange.techtarget.com/itanswers/tagdirectory/exchange/page/8/ | CC-MAIN-2016-22 | refinedweb | 965 | 72.76 |
ARGetSupportFile
Note
You can continue to use C APIs to customize your application, but C APIs are not enhanced to support new capabilities provided by Java APIs and REST APIs.
Description
Retrieves a file supporting external reports.
Privileges
Any user who has access to the specified object.
Synopsis
#include "ar.h" #include "arerrno.h" #include "arextern.h" #include "arstruct.h" int ARGetSupportFile( ARControlStruct *control, unsigned int fileType, ARNameType name, ARInternalId id2, ARInternalId fileId, FILE *filePtr, ARTimestamp *timeStamp, ARStatusList *status)
Input arguments
control
The control record for the operation. It contains information about the user requesting the operation, where that operation is to be performed, and which session is used to perform it. The user and server fields are required.
fileType
The numerical value for the type of file, and the type of object the file is associated with. Specify
1 (
AR_SUPPORT_FILE_EXTERNAL_REPORT) for an external report file associated with an active link.
name
The name of the object the file is associated with, usually a form.
id2
The ID of the field or VUI, if the object is a form. If the object is not a form, set this parameter to
0.
fileId
The unique identifier of a file within its object.
filePtr
A pointer to a file from which the support file contents are retrieved. Specify
NULL for this parameter if you do not want to retrieve the file contents. If you are using Windows, you must open the file in binary mode.
Return values
timeStamp
A time stamp that specifies the last change to the field. Specify
NULL for this parameter if you do not want to retrieve this value., ARSetActiveLink, ARSetSupportFile. | https://docs.bmc.com/docs/ars91/en/argetsupportfile-609071659.html | CC-MAIN-2020-10 | refinedweb | 273 | 50.02 |
tabnanny – Indentation validator¶
Consistent use of indentation is important in a langauge like Python, where white-space is significant. The tabnanny module provides a scanner to report on “ambiguous” use of indentation.
Running from the Command Line¶
The simplest way to use tabnanny is to run it from the command line, passing the names of files to check. If you pass directory names, the directories are scanned recursively to find .py files to check.
When I ran tabnanny across the PyMOTW source code, I found one old module with tabs instead of spaces:
$ python -m tabnanny . ./PyMOTW/Queue/fetch_podcasts.py 78 "\t\tfor enclosure in entry.get('enclosures', []):\n"
Sure enough, line 78 of fetch_podcasts.py had two tabs instead of 8 spaces. I didn’t see this by looking at it in my editor because I have my tabstops set to 4 spaces, so visually there was no difference.
for enclosure in entry.get('enclosures', []): print 'Queuing:', enclosure['url'] enclosure_queue.put(enclosure['url'])
Correcting line 78 and running tabnanny again showed another error on line 79. One last problem showed up on line 80.
If you want to scan files, but not see the details about the error, you can use the -q option to suppress all information except the filename.
$ python -m tabnanny -q . ./PyMOTW/Queue/fetch_podcasts.py
To see more information about the files being scanned, use the -v option.
$ python -m tabnanny -v ./PyMOTW/Queue './PyMOTW/Queue': listing directory './PyMOTW/Queue/__init__.py': Clean bill of health. './PyMOTW/Queue/feedparser.py': Clean bill of health. './PyMOTW/Queue/fetch_podcasts.py': *** Line 78: trouble in tab city! *** offending line: "\t\tfor enclosure in entry.get('enclosures', []):\n" indent not greater e.g. at tab sizes 1, 2
Using within Your Program¶
As soon as I discovered the mistake in my Queue example, I decided I needed to add an automatic check to my PyMOTW build process. I created a tabcheck task in my pavement.py build script so I could run paver tabcheck and scan the code I’m working on for PyMOTW. This is possible because tabnanny exposes its check() function as a public API.
Here’s an example of using tabnanny that doesn’t require understanding Paver’s task definition decorators.
import sys import tabnanny # Turn on verbose mode tabnanny.verbose = 1 for dirname in sys.argv[1:]: tabnanny.check(dirname)
And in action:
$ python tabnanny_check.py ../Queue '../Queue': listing directory '../Queue/__init__.py': Clean bill of health. '../Queue/feedparser.py': Clean bill of health. '../Queue/fetch_podcasts.py': *** Line 78: trouble in tab city! *** offending line: "\t\tfor enclosure in entry.get('enclosures', []):\n" indent not greater e.g. at tab sizes 1, 2
Note
If you run these examples against the PyMOTW code it won’t report the same errors, since I have fixed my code in this release. | https://pymotw.com/2/tabnanny/index.html | CC-MAIN-2017-26 | refinedweb | 474 | 60.61 |
dependencies: flutter: sdk: flutter flutter_shapes: } }
A great article about animation with Flutter.
It helped me to write example codes.
See all changes_shapes: ^0.2.0
You can install packages from the command line:
with Flutter:
$ flutter pub get
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:flutter_shapes/flutter_shapes.dart';
We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Document public APIs. (-1 points)
42 out of 42. | https://pub.dev/packages/flutter_shapes | CC-MAIN-2019-30 | refinedweb | 110 | 60.82 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Support Forum » Python error using urllib2 - need some help
I have built a 5 segment ledarray powered by the usb port and am experiencing a Python script problem that I need help with.
My sign follows a user on twitter and the functionality is split between two pc's. One get the tweets and sends to a website and the other (small fit pc 1 running linux embedded in the led array box via MIFI on Verizon network) gets the text file from the website and pulls down to feed the sign. Problem is that Python throws an error intermittently and I don’t know how to resolve it. Sometimes it will run for 30 min, sometimes an hour or more. I think there may be a timeout with the urllib2 process. I recently added the “ket.setdefaulttimeout(5)” line but it did not help.
Any help would be appreciated. I am opened to any other ways of pulling down this text file as well. I have cobbled this code together from other samples and may have done a bad job of it. I am opened to any changes.
Thanks.
Guy
Here is the error..
Traceback (most recent call last):
File "Y:\guy.py", line 15, in <module>
f = urllib2.urlopen(myurl) #define url to variable name
File "C:\Python26\lib\urllib2.py", line 124, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python26\lib\urllib2.py", line 395, in open
response = meth(req, response)
File "C:\Python26\lib\urllib2.py", line 508, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python26\lib\urllib2.py", line 433, in error
return self._call_chain(*args)
File "C:\Python26\lib\urllib2.py", line 367, in _call_chain
result = func(*args)
File "C:\Python26\lib\urllib2.py", line 516, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 404: Not Found
Here is my code
/usr/lib/ python
import serial
import time
import urllib2,socket
import os
serial = serial.Serial("com2", 115200) #testing on my windows box vs linux for now.
myurl = ''
message = "startup led array message"
socket.setdefaulttimeout(5)
loopvar = 1
while loopvar == 1: #never present a false statement so run forver
f = urllib2.urlopen(myurl) #define url to variable name
#while True:
print message
message = f.read() #get url to variable
mystring = str(message) + " \n" #place contents of message to mystring and add trailing spaces
serial.write('z')
serial.write('b')
#Send message to LED array
for i in range(1, 2): #not sure what this does....
for data in mystring:
if not data == '\n':
serial.write(data)
#Debug line follows
#print "Sent " + data + " at " + time.strftime("%H:%M")
response = serial.read()
#Debug line follows
#print "Received " + response
if not response == 'n':
#print "Received: " + response
#This is a debug line, since I'm a bit fuzzy on some details
# break
serial.write('a')
#time.sleep(delay)
pass
I have done a bit more research, looks like I may be opening too many ports that are not being closed. Will try httplib instead.
Hi gplouffe,
That is likely the issue. Have you tried calling f.close() after you do the f.read(), it might be enough to fix this issue.
Humberto
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1698/ | CC-MAIN-2020-29 | refinedweb | 558 | 77.13 |
BBC micro:bit
Driving A Motor
Introduction
Circuit
In this circuit you have,
- Motor
- 470 Ohm Resistor
- NPN transistor
- Diode 1N4148
Program
We'll turn the motor on when the A button is pressed, otherwise we'll switch it off.
from microbit import * while True: if button_a.is_pressed(): pin0.write_digital(1) else: pin0.write_digital(0) sleep(50)
Challenge
Stick to the single motor and go easy when using this much power from the micro:bit. Make a spinning circle out of card and place it on the axle of the motor. Use this in a game, as a pretty colour wheel or as a nice little fan. | http://www.multiwingspan.co.uk/micro.php?page=motor | CC-MAIN-2019-09 | refinedweb | 107 | 71.95 |
Introduction
If you’re new to Python and have stumbled upon this question, then I invite you to read on as I discuss how to call a function from another file. You have most likely used some of Python’s built-in functions already like
print() and
len(). But what if you’ve defined your own function, saved it in a file, and would like to call it in another file?
Import it!
If you’ve ever imported something like random, NumPy, or math then it is really as simple as that! If you haven’t, then here’s a quick look at how it’s done.
As an example, let’s use the math module to find the square root of a number.
First, we import it.
>>> import math >>>
To see the available functions and attributes for a module use the built-in function
dir():
>>> dir(math) ['_', 'perm', 'pi', 'pow', 'prod', 'radians', 'remainder', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'tau', 'trunc']
The function to calculate square root is called ‘
sqrt’. And we’ll use the dot notation to call it:
>>> math.sqrt(64) 8.0 >>>
Alternatively, you can use the keyword “
from” followed by the module name and “
import” followed by the attribute or function. This way we no longer have to use the dot notation when calling the square root function.
>>> from math import sqrt >>> sqrt(81) 9.0
And as expected, attempting to access the other functions or attributes still requires the dot notation:
>>> pi Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'pi' is not defined >>> math.pi 3.141592653589793
User-Defined Functions
As you progress in your Python coding, you will eventually create your own functions and will implement them in other programs. As an example, we will illustrate this with a simple tip calculator. I invite you to follow along.
Open your favorite python editor. I’m currently using Linux so I’ll just use vi for this example. I’ll call my file “
myfunctions.py”.
Here’s the function definition:
def calcTip(b): # Tip will be 20% of the bill return (b * .2)
Save the file.
Now to call a function from another file in Python, we simply use “import” followed by the filename of your
.py file:
>>> import myfunctions >>> totalBill = 100.00 >>> tip = myfunctions.calcTip(totalBill) >>> print(tip) 20.0
If you have multiple functions in your file and would like to see them, don’t forget to use the
dir function. In our case, it only shows the
calcTip function:
>>> dir(myfunctions) ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'calcTip'] >>>
Also, don’t forget we can use the alternate method if you would like to skip the dot notation:
>>> from myfunctions import calcTip >>> totalBill = 250.00 >>> print(calcTip(totalBill)) 50.0 >>>
Things to watch out for
Note in my example, when I ran the Python interpreter it was within the same working directory as the
myfunctions.py file. If you’re not familiar with Linux, the dollar sign is the command prompt.
- pwd = print working directory
- The current directory is “/home/pete/Videos/Python”
- ls -l = list directory contents
- The file “myfunctions.py” is located here
- python3 = invoke the python interpreter
- When entering the “import myfunctions” line, there is no error.
The below screenshot shows I change the working directory to home (~) and run
pwd to show the current directory path. I then run the python interpreter and attempt to import the
myfunctions file. Now it shows “
ModuleNotFoundError” because the file is not within the current directory.
If you’re using an IDE, then make sure your file is in your project. The below screenshot shows the Spyder IDE with a Project called “Function Example”. Screenshot of “
myfunctions.py” file with the function definition of
calcTip:
Screenshot of the “
main.py” file. This particular IDE is really great because since the “
myfunctions.py” file is within our Project, then the autocomplete detects it when I import it.
Here’s the console output when running the
main.py file, passing variable “
bill” into the
calcTip function:
And that’s how to call a function from another file in Python. I hope you found this article useful and look forward to writing more! See you soon! | https://blog.finxter.com/how-to-call-a-function-from-another-file-in-python/ | CC-MAIN-2022-21 | refinedweb | 709 | 73.58 |
May 04 2018
ESP & OLEDfor long time i not play with this stuff, even i have a still packed board here
ESP 8266 plus OLED
(i burned too much time at the raspberry pi forum, the questions there can give you some good ideas what to play with, but as i see, my own things got stuck, and my new Raspberry Pi 3B+ will come not for another month.)
i needed to check what
from LAZADA
and that must be its homepage
But the "WEMOS" text on the board and the ESP-8266 housing makes it a fake?copy?
first i need to get into this again and do a updated setup on my old desktop PC.
+ + arduino 1.8.5 or the nightly
see Installing with Boards Manager
info this release 2.4.1
fire it up:
connect via USB cable to PC and see some nice graph demo
white on black ( 128 * 64 pix)
i think it is this demo
a check with the mobile WIFI shows there seems to be no WIFI AP pre installed.
while my PC complains again some driver NOT successfully installed, but i think that serial chip i used already, and device manager tells me:
also the board info button works, so i have connection.
but actually i am lost, what board i have to select?
as he say it should be a ESP-8266-12F , while on his picture i read
MODEL: ESP8266MOD
VENDOR: AI-THINKER
and my board just say WEMOS ESP-8266
check on WEMOS and their shop i can not find that board.
but in the tutorial i see i correctly selected
NodeMCU 1.0 ESP-12E module
add:
download from esp8266-oled-ssd1306-master.zip and copy to
/arduino-1.8.5/libraries/esp8266-oled-ssd1306-master/
!!that does not fit with board info SH1106 but see his tutorial:
actually i like that guy: Travis Lin about and i am already sorry i buy a copy product, but i did not know.
now my first steps:
-1- play with the OLED DEMO
try take the example SSD1306SimpleDemo
and change to:
#include "SH1106Wire.h"
SH1106 display(0x3c, D1, D2);
get error 'SH1106' does not name a type
again:
#include <'Wire.h'>
#include "SH1106.h"
#include "images.h"
SH1106 display(0x3c, D1, D2);
did compile:
Sketch uses 277612 bytes (26%) of program storage space. Maximum is 1044464 bytes.
Global variables use 32856 bytes (40%) of dynamic memory, leaving 49064 bytes for local variables. Maximum is 81920 bytes.
as you see in the settings, i did a full upload, no need to press any buttons ( the board has "reset" "flash" PB ) and while upload see a blue LED blinking. ( the old demo stopped until board booted anew )
also started about the clock example ( what arduino lib i need? ) and get a running digital clock ( starts with 0:00:00 at boot ) useless until i am online and can use a internet NTP time service.
-2- add a WIFI server in my LAN
i do webserver with ESP8266 already here, but as i use now a new library i think it is better to start new and copy over some custom details later.
start with the Advanced Web Server Example
-1- change ssid and psk
-2- upload
-3- monitor 115200
now, as much i love SVG, that random numbers chart is a piece of art, but i disabled it.
and included that tricky fix IP again.
and include the OLED ( try to show connection info there too ( not only on serial.print / monitor ))
this was only my first 2 steps with this board, now this looks much smarter!
my board is the with the 1.3" OLED, it has the buttons on the back
"RST" //RST pin// makes a RESET
"FLASH" works in my example as button_pin=0
( to toggle the relay ) but is it: // arduino 0 // GPIO0 //D3 :or: // arduino 16// GPIO16//D0 // WAKE
-3- make it a MQTT device
info: here and here and here
ok, try that igrr example and need add libs in arduino IDE:
[sketch][include library][manage libraries][search "pubsubclient"]
see homepage and spec:
[sketch][include library][manage libraries][search "bounce2"]
set ssid and password
and alread start thinking about that device and ?channel? naming
also need that fix IP in there first.
compared to the wifi webserver example above, this time i
cleaned the OLED part into a other TAB "OLED" and just call in main:
void setup() {
OLED_setup();
...}
void loop() {
show_OLED();
...}
OLED_setup();
...}
void loop() {
show_OLED();
...}
make a new page for a 6 line message buffer for mqtt topics
and make the ESP send back a AKN when it gets a command
currently i switch between the 2 BLOG // here // RPI part //
knowing that would be difficult to read.
anyhow, there i just was thinking that it would be good to implement a sensor / data collection feature.
well, a ESP8266 has only one little Ain ( and that is questionable // and the ESP32 with more inputs i remember as same questionable.) but sensors could be linked to the ESP...
but all that is not the point here, now i just need a "analog" signal i can deal with in ESP, transport to broker, show in a web page, or better record to a database and later show the graph in a webpage. so i just make one up, a noisy sinus.
and yes, i worry correctly, float to string to char array to MQTT... pain in the ass
so make the data format JSON and connect command back to sinus generation,
using the inverted hardware logic.
now that will be later used also from the FLASK WEBSERVER site
what also show the by a python3 / mtqq sub / sqlite3 sql service saved data
to test exactly that program and its automatic understanding of several measuring columns
in the (flat) JSON record i now here create some more data.
-a- a filtered signal of that noisy sinus
-b- a feedback of the LED status ( but use 0.0 OR 1.0 float for easy structure)
but with this i found out there is a init problem for sinus (open)
for the RPI part pls. see my other BLOG
and for the code, that you can find at git | http://kll.engineering-news.org/kllfusion01/articles.php?article_id=148 | CC-MAIN-2018-22 | refinedweb | 1,040 | 75.34 |
Started at end of business day April 1st. Coexistence 2010/2016. Thought I understood disjoint namespace but this particular configuration may be an issue (registered domain for AD domain.com). I need help and actually have looked at MS paid support but it seems to take me in circles. I'm in Canada, called Canadian support line and got nowhere. OAS site ends up saying to call support and the support number tells you to go to the OAS site - brilliant. Tried calling US site but back I go to Canadian phone support...
6 Replies
Run the following on Exchange 2010 EMS:
Get-Command Exsetup.exe | ForEach-Object {$_.FileVersionInfo}
Can you repost the results of the below script? run in 2016 EMS, replace real server FQDN with mail.domain.com and attach log0.txt to your reply. (I suggest you remove the other file for now)
Start-Transcript log0.txt Get-OabVirtualDirectory | fl server, Name, *URL*, *auth* Get-WebServicesVirtualDirectory | fl server, Name, *URL*, *auth*, MRSProxyEnabled name, *admin*, static* Stop-Transcript
For the record, I started with the best practice of all external to begin with. Users (probably around 10% of total users so far) then started having these popups (graphic on bottom of initial post) then altered the broken pieces to internal but mixed results not cured. I'll play some more with it over the weekend. I'd really like to get moving on the Exchange 2019 phase but this needs to get fixed so if I have to postpone another week not a huge problem.
Based on the above snapshot, it seems that your client is connecting to office 365. Do you get any office 365 urls if you perform a Test E-mail AutoConfiguration in Outlook?
If there is any O365 url, Outlook seems be detecting if your acount is an Office365 one(even though the Autodiscover is not pointed to Office365). Try
adding the following DWORD key in the registry(How Do I Disable Autodiscover For On-Premises Exchange?), reboot your PC and start Outlook, and see if there is any difference:
(DWORD)Name: ExcludeExplicitO365Endpoint Value: 1 HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\x.0\Outlook\AutoDiscover HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\x.0\Outlook\AutoDiscover
Thanks Ivan but I think the autodiscover registry entries have ben exhausted.
After I stopped the Outlook clients that were behind a managed firewall from calling home Mother 365 i began to see other issues.
There were a lot of moving pieces and I missed cleaning up DNS, it would seem that even a secondary DNS server on the 2016 Exchange server (coexistence with 2019) was enough to screw things up.
I'm talking about an incoming test from my external work email getting through to a user in a specific mb database, then (logged in to her PC and using her Outlook) I reply back and it arrives back in my mailbox fairly quickly. The I reply to her reply and it gets caught up in the 2016 queue with a 451 4.4.0 error basically saying it cannot find the mailbox database on the 2019 server.
After removing the garbage dns server entries and also repointing everything with only the primary dns server pointing to itself as primary it seems to be good and I can reply back and forth.\
Bedtime and Cheers | https://community.spiceworks.com/topic/2313967-exchange-2010-2016-migration-client-access-issues-probably-disjoint-namespace?slot=slot_1&source=slot-1-default | CC-MAIN-2021-21 | refinedweb | 556 | 62.48 |
Join devRant
Pipeless API
From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More
Search - "bullshit"
-
-
- I fucking hate toxic positivity. Every fucking corporation pushes the notion that "lifE iS aWeSomE, wE cArE abOuT pEoPle" and other such bullshit, and when you point it out, they call you a bad, toxic person.
No, you don't care about your community, let alone the whole world. You're just trying to make people believe that spyware, wage slavery and being fired by a neural network is the norm. You're making money off of those who don't have a choice.
If you account all people, not just American white rich 1%, it turns out that for the vast majority of people life is either an uphill battle or straight up nightmare. People are working in shifts and have no time or emotional resource to spend on themselves. Most of the people can't afford a house or a flat. Even those who can still suffer from mental illnesses, to the point where there are more mentally challenged people than mentally healthy ones. The word "neurotypical" meaning "mentally healthy" is wrong.
You want nothing but to sell your stuff and earn more money off of Chinese and Indian factory workers who work 16-hour shifts. Maybe your life is great, but aggressively pushing this notion is a big, wet spit in the face of humanity.
Fuck you. Fuck your space rockets. Fuck your twitter accounts. Fuck your institutionalized exploitation of the weak. Fuck your products. Fuck your "open source". Fuck your "GDPR compliance". Fuck your offshores, your hedge funds and your tax evasion. Fuck your bailouts. Fuck your ships spilling tons of crude oil, fuck your factories, fuck your slave labor, fuck your anti-suicide nets in Chinese dormitories.
One day, because of you, our planet will become unlivable. You will hop into your fancy space rocket to go to that top-1% elite Mars colony. Nice job.
But I will pray for a solar flare to hit you and turn you and your fucking rocket into radioactive ash.22
- Fuck corporate bullshit. I do the job, you pay me, the rest is pointless crap.
You are not my family, I don’t base my whole self worth on working for you and mostly I really don’t care.
I’m really not cut out for this shit.9
- i was asked to start a new project, and another dev was brought onto the team shortly after. as soon as he joined, straight away he started an entirely new project and worked on it through the whole weekend, then came back on monday and just sort of pasted his files into/over the code i had already started and was working on, with no regard for folder structure or naming conventions or anything. his work was even split between 2 almost identically named namespaces (both of which were completely different to the existing project namespace) and his shit broke everything i did in the first place. the cherry on top is that none of his work was even functional, it was purely dummy/mockup web pages that weren't linked to any sort of backend.
when i asked him wtf he thought he was doing, he kept saying "i didnt touch your code" and refused to acknowledge that pasting a project over a different project can break stuff, then said it "wasn't his fault that i'm slow and not keeping up". and just kept saying vague bullshit about how i have to do it his way because he "has more experience"
he had no idea what my previous experience was, he had never asked and i had never told him, he just decided that he had more experience than me.
i dug through the shit and found out that he didn't just break my work, he had actually purposely deleted it when he realised it was getting in the way of his spaghetti. i showed him the commit and confronted him with it and all the cunt said was "well the good news is, you know the fix" and kept trying to dismiss me in the most disrespectful ways he could think of. i eventually snapped at him (long overdue at this point) and told him that any experienced developer would not commit code that didn't even fucking compile, especially when they're the one who broke it, and that he needs to grow up. of course he then complained that i was being unprofessional.
our manager decided we should go with fuckfaces """code""" without even looking at the work either of us had done, purely because fuckface is older than me and that's how the world works.
in the end i just told my manager that i refuse to work with the guy and he could either take him or me off the project (guess who he picked) or i quit.
after a few months of the guy failing to deliver any of even the basic functionality that was asked for, the entire project got scrapped, and the dude just quit once everyone realised he was literally just larping as an experienced dev but couldn't accomplish simple tasks.
i never received an apology from anybody involved.5
-
- FUCK WEB3, FUCK CRYPTO, FUCK NFTS, FUCK ALL THIS PONZI ASS BIG BULLSHIT!!!!!!!!!!! FUCK YOU WHOEVER MADE THJS!!!! THE ONLY ONES WHO PROFIT ARE NOT THE ONES WHO BUY CRYPTO OR JPEGS BUT THE ONES ON TOP OF THIS PYRAMID WHO CREATED IT!!!!! MIGHT AS WELL CODE MY OWN PYRAMID COIN/JPEG AND SELL IT TO SUCKERS!!!!!!!! FCKK YOU!!!!!!!!!!!!11
- I don't argue with managers. I believe the music is the universal language of the world, so I just sing!
twinkle twinkle little star
*pulls out glock*
your bullshit has gone too far.5
- Its so weird working in this company. No onboarding, no micromanaging, noone to track your progress or performance. U can basically do what u want and ask what u want and requests will be fulfilled.
Initially was assigned to a random team and started fixing stuff. I hated the scope so after 2 months in requested to switch teams, request approved.
3 months in realized I lowballed myself during the interview and actually am doing better than half of the team, so I asked for a 43% bump, request approved.
4 months in I realized that I did atleast 100hrs overtime in a month during crunchtime, burned out. Asked for a paid week off to recover, request approved.
5 months in realized that we have many MR's piling up in the team and I could help with approving some of them, but they grant MR approval rights only when u work here for a year or are a decent dev from the get go. Requested for MR approval rights, request approved.
Again it feels so weird working on a big product with 6-7 scrum teams. Its like there is no bullshit, just ask what you need you will get what you asked so you can continue working.
On the other hand its kinda weird to keep asking everything, in other companies a good teamlead/manager shows more initiative takes care of stuff like this without even asking.8
-
- sooo shit started hitting the fan
after another useless discussion where PM tried to hardcore micromanage me and then bullshit his way out, i fucking tilted and started swearing.
after this discussion, he invited to a meeting next week to talk about "miscommunication".
no need bruh, i'll tell my boss on Monday i want to switch to another team.8
-.16
-
- sprint retros with PM are a fucking farce, it cannot possibly get any more grotesque.
they are held like this:
- in the meeting, PM asks each team member directly what they found good and bad
- only half of the team gives real negative feedback directed towards the PM or the process, because they are intimidated or just not that confrontative
- when they state a bad point, he explains them that their opinion is just wrong or they just need to learn more about the scrum process, in any case he didn't do anything wrong and he is always right
- when people stand up against this behavior, he bullshits his way out, e.g. using platitudes like "it's a learning process for the whole team", switching the topic, or solely repeating what he had just said, acting like everybody agreed on this topic, and then continue talking
- he writes down everything invisible for the team
- after the meeting he mostly remembers sending a mail to the team which "summarizes" the retro. it contains funny points like "good: living the agile approach" (something he must have obviously hallucinated during the meeting)
- for each bad point from team members, he adds a long explanation why this is wrong and he is doing everything right and it's the team's fault
- after that happens the second part of the retro, where colleagues from the team start arguing with him via mail that they don't feel understood or strongly disagree with his summary. of course he can parry all their criticism again, with his perfectly valid arguments, causing even longer debates
- repeated criticism of colleagues about poor retro quality and that we might want to use a retro tool, are also parried by him using arguments such as "obviously you still have to learn a lot about the scrum process, the agile manifesto states 'individuals and interactions over processes and tools', so using a tool won't improve our sprint retros" and "having anonymous feedback violates the principles of scrum"
- when people continue arguing with him, he writes them privately that they are not allowed to criticize or confront him.
i must say, there is one thing that i really like about PM's retro approach:
you get an excellent papertrail about our poor retro quality and how PM tries to enforce his idiocratic PM dictatorship on the team with his manipulative bullshit.
independently from each other, me and my colleague decided to send this papertrail to our boss, and he is veeeery interested.
so shit is hitting the fan, and the fan accelerates. stay tuned シ16
- Everyone that says you can't get viruses in Linux because only .exe compiled programs can contain malicious code or some bullshit like this is a fucking retarded
Sorry I had to say it12
-
- You know what really grinds my gears?
When a manager writes up some bullshit "this doesn't work".
Then you waste your time following up, and they say, "oh yeah, this so and so pop up came up with validation error X".
YEAH? AND I'M SUPPOSED TO KNOW THAT WHEN YOU WRITE ABSOLUTELY NO STEPS TO REPRODUCE, JUST COMING TO ME WITH "HEY, X IS BROKEN" GOD JUST GET FUCKING 1% TECHNICALLY LITERATE THATS ALL I ASK FOR I'M SO SICK OF YOUR SHIT2
- Not a coworker, but this guy who I went to uni with and was a real life saver when I was really down. (we played minecraft together)
... So, he is a real genius. One of those guys who I legit couldn't keep up with. His brain works, he doesn't bullshit his way through, he's not pretentious, he is legit a down to earth rare genius. Yet, he doesn't use his talents enough, he likes to work or go home to play minecraft. And he doesn't politically care enough, so I am almost sure that he will end up getting stuck in the defence force.
We're still friends. And I try my hardest to not be nosy and nag at him that he can do better. I mean, he is happy the way he is, and he is not ambitious. But the memory of him is a reminder that not everyone who gets somewhere is the best and brightest.41
-
-
- Manager: Can you stay late as fuck today? One of our bitchiest vendors is gonna update their piece of crap and I'm pretty damn sure shit is gonna hit the fan
Dev (inner voice): no fucking way, I have kids to watch and chores to do!
Dev (outer voice): can't we just check everything in the morning?
Manager: No fucking way! If there is some fucking "challenge" when our "people" try to log onto their shit, I'm gonna look like a chump!
Let's talk silvers, I will sign on that bloody commie bullshit for your hours tonight.
Dev (outer voice): Fine. Until how late?
Dev (inner voice): Wait, I was supposed to do it without getting overtime bonus?6
- I get really defiant when i repeatedly get micromanaged with bullshit instructions, such as asking me to have my just started c++ library poc which also involves a lot of learning and will earliest be usable in a few months, "ready for our customer devs" in 2 weeks from now.
just no, you fucking retard.
also, the lib alone wouldn't make any sense, since the code parts working with it don't yet exist at all.
and then getting instructed to ask customers if they can provide you with c++ code that solves the task for them in their own software, which of course will somehow magically fit in my existing codebase. even if it existed (which it fortunately doesn't because they do everything in C#), i don't think i'm going to be faster trying to somehow solder in their code into my library, of which i'm still brainstorming about the general architecture.
if you have so fucking unrealistic expectations, maybe stop sniffing glue all day and don't make this my fucking problem
- I'm gonna fail my now-online uni course. I'm not understanding jackshit.
Fuck this covid bullshit.
Thank you for listening.17
- so i had the "miscommunication" meeting with PM today. he criticized me for "not following his orders", allegedly having worked on stuff during this sprint that did not help fulfill his sprint goal, and that i should have aligned my work with him. i didn't even realize this exact goal existed specifically for my user story (even though it was at least mentioned with one single word in story description, must have read over it). however, during the whole fucking sprint, he never mentioned a single time i should align with him. every daily i'm explaining what i'm going to do, every day he sees subtasks that i created for this story, and he never disagreed or mentioned this topic, so i assumed i'm on track. and now suddenly, when sprint is over, he blames me for the misalignment?
he also criticized me for having said something rude to him during a team meeting, but he couldn't rephrase or specify what i had said, he couldn't give any details at all, and also i couldn't understand or remember what he meant. what shall i respond to that?🤷♀️
also, aligning my work with that of a colleague and brainstorming with him about how our API could look like for our stakeholders was "not on track / following his orders" for him, even though i had announced it in the daily and he hadn't disagreed.
either this guy has alzheimer's or he has a down on me, dunno what to make out of all that.
and then he mentions i appear "somewhat aggressive" to him.
hmm weird, why should someone become aggressive when they have to deal with this bullshit all the time 🤦♀️12
-
- I'm pretty sure that the technical tests for FAANG are just to prove that you'll bust your ass doing trivial bullshit for them / and that you're a sucker -- instead of actual meaningful skill checks. Is this guy a total sucker who will drink our Koolaid when it's time? Are they wearing Nike? Yes. This is going to be a good investment.
I was down and out once and got a job a Micheal's Art and Crafts store. The application was clearly a mindfuck test. It asked, "If your boss was stealing - would you report them?" BTW - the answer is "No." You only report people below you. I answered in the way I knew the computer wanted me to - and I got the job. Same shit.
Are you subordinate? You're hired.2
-
- That slow realization that you're hitting burnout due to the toxicity of those in non-technical roles above you is wild. Also, Jira, Scrum, Sprints and all the extra bullshit can fuck right off as well.3
- -
- dear female devs / haecksen, how many other female devs do you have in your team?
if not so many, how do you feel about it?
and do you get a lot of sexist bullshit or not so much?
would be great to hear your experiences.
the female quota among our devs is < 3% 😅
most of the time i don't think about it and just do my job and it's fine, but sometimes i think, it's a bit weird. also, there is this fear that people might not have trust in my skills. it can be good and bad to be "special"... anyway, having more female rolemodels / mentors / colleagues to have technical discussions with would be awesome.59
- Dear customer, disregarding the bullshit your agency has dumped into Figma, I hereby deliver a clean, minimalist, and usable website without carousel sliders, chatbots, call-to-action teasers for newsletter signup, and muted auto-play videos consuming your end users' bandwidth.
One day you will understand and be grateful, too!3
-
- Yeah well fuck right off then. I'm just going to build a bot to auto signup for every possible username combination left in the latin alphabet. Then after the media bullshit dies down they'll be changing this policy.
👺23
-
- Doing e-learning for a job
One of the examples provided:
"You could be late for work (fail to meet your objective of being on time) because you're hit by a car whilst crossing the road"
Are you fucking kidding me, I think being late to work would be the least of my worries. Fuck corporate bullshit.18
-
- I am a person who never lies. And when I see/hear others lie, be it for the benefit of mine or not, it gets my blood boiling. I disrespect liers with passion.
And I particularly hate magic fixes at work. You know the ones, when smth is not working for a few weeks, you involve 3 other teams responsible for their tiers, and then one day suddenly everything starts working. When you ask all the 3 tiers what has been done - everyone says "nothing".
If you do this bullshit to me, just know that everytime I remember you, before remembering your name/face/role I very vividly visualize pissing on your toothbrush right before you wake up.
Or did I do that for real..? Idk, it's too vivid to distinguish2
- God damnit Quora!
I stumbled upon some article or post or whatever they are called on quora.
And I really wanted to read the comments on it. It wouldn’t let me unless I log in.
I normally don’t do that but I thought I’ll make an exception because I really wanted to read the comments.
So I clicked on that comments button and logged in (via google). First it presented me some modal dialog to pick 5 things that interest me. And it was mandatory. Fine… I picked those 5 things.
Finally it presents me the list of articles or whatever. But not the same list that I have seen before I was logged in. Scrolling, the article of my interest is not there. God damnit! Just show me my comments for fucks sake.
I go back to that tab where I was not logged in to somehow copy the link of that article or the link to the comments section. But it doesn’t let me. Some bullshit pseudo smart layer of crap is preventing me from doing anything.
Then I abuse the fucking share link to visit it in my logged in tab to finally see the comments that I came for.
And the comments weren’t even worth it. God! What a waste of time! And how can one fuck up a fucking forum so much?
It will be a lesson for me not to visit Quora ever again.4
-
- The MS Teams SDK is bullshit. It's so half baked and comes with instructions like "you'll probably want a better implementation for production, good luck cause you'll have to write it yourself."
Oh and don't forget to cache your installations in a file called "notifications.json"
Deploying will create 2 app registrations (OIDC) and about 6 resources in Azure... But "you'll probably want to log to app insights in production"... So I hope you're very familiar with Bicep cause you'll have to figure out how to add that to your template properly and there are about 7 Bicep files to decipher and it doesn't create an app insights out of the box.
Probably written by an intern.2
- Fuck you ios,storyboad,xibs,xcode. FUCK OFF!! YOU FUCKING ASSHOLES. Literally giving me migrane with your fucking ass constraints!! Fuck you xcode for not having a terminal. Ios is utterly bullshit. Has fucking all kind of devices that I have to set constraint. Fuck you macos. You are slower than a snail. How on earth do you take so much time to build!!
Width, height, constraints, my ass! What is this fucking logic bro. Fuck you apple for making so many device of different sizes and then hiring us to set constraints. Warning warning warning oh what a load of crap!
I would rather die than set your fucking ass constraints.6
- Central team: No, your team must be doing something wrong. Our pipeline is super-configurable and works for any situation! You just have to read the docs!
Me: Where are the docs?
Central team: Uhh, well, umm... we'll hook you up with a CI/CD coach!
Me: Okay, cool. In the mean time, can you point me at the repo where all the base scripts are?
Central team: Sure, it's here.
Me, some weeks later: Yeah, uhh, the coach can't seem to figure out how to make our Prod deployment work either.
Central team: That's impossible! It's so easy and completely configurable!
Me: Well, okay... but, here's the thing: your pipeline IS pretty "configurable", in the sense that you look for A LOT of variables...
Central team: See! We told you!
Me: ...none of which are actually documented, so they're just about useless to me...
Central team: But, but the coach...
Me: ...couldn't make heads or taisl of it either despite him literally being ON YOUR TEAM...
Central team: Then your project must just be architected wrong!
Me: Well, we're not perfect, so could be...
Central team: Right!
Me: ...but I think it's far more likely that the scripts... you know, the ACTUAL Python scripts the pipeline executes... while it took me DAYS to get through all your levels of abstraction and indirection and, well, BULLSHIT... it turns out they are incredibly NOT flexible. They do one thing, all the time, basically disregarding any flexibility in the pipeline. So, yeah, I'm thinking this is probably one of this "it's you, not me" deals.
Central team: Waaaaahhhhhhhh!!!!!
- “You know what is not fungible, scarce and valuable to me? My time! So if you wish to persuade me NFT are a good thing, you should pay me a fair amount of real money to make me listen your bullshit”
From now on this will be my standard reply to NFT harassment.
Feel free to use, edit and share with others.5
-
- Hiring a third party to help us with something...
Third party: yeah okay, we know what we need. Can we get access to your git repo
Me: sure, I'll make sure you'll get it
(To the admins): hey can you get them access to our git server?
Admins: did they sign the personal data processing contract?
Me: oh they won't work with any personal data. It's a dev server and they only need access to the source code. And the usual contracts and NDAs are already done
Admins: well we still need the other one.
... Sure. Why not. Just delays the start of the process for... Like a week and a half until that useless bit of paper has passed through all the necessary departments. Not like time's an issue. Right
- So my google assistant keeps bugging, my amazon app keeps glitching, my mac's display config get shuffled every time the computer wakes...
And we're supposed to do 10 rounds of bullshit whiteboarding interviews to work with these morons?1
- I’ve never seen a rich person give a step by step guide that led them to where they are, it’s always some beat around the bush bullshit20
- i don't have the energy to argue against this bullshit any longer
when you're trying so hard to build a piece of crap that nobody needs, fine
- Headhunter called about a rejection for an assignment I did:
Assignment had malformed data examples
Assignment had unrealistic timespan for completion
Assignment used item stocks for a shop setup
Assignment didn't use any prices just item stocks
Who builds a webshop without prices in the first place?
So done with this job hunting assessment bullshit
- [CMS Of Doom™]
Ah, yes, their built-in bullshit newsletter module just sent the n-th user n emails. Wonderful considering n=368.
The culprit? Better don't ask...
OK, anyway: So the mailer is running as a CRONjob, but nah, not as a console script call but by a public HTTP GET URL call, fucking obviously (it's the CMS Of Doom for a reason).
So these fucking imbeciles "implemented" an ob_start() callback where HTML links are - for whatever fucking reason - modified by some regex (obviously everybody knows parsing HTML by Regex is trivial). In this case the link was somehow modified to recall the mailer Cronjob...
This must have upset the pngoing mailing process thus spamming mails. Whyyyy
And I've thought I've seen it all after 6 months in this legacy hell...
This is why you don't run a company consisting of only beginners in PHP (in cluding their "CEO")!
-
- You know when you work with an incompetent team or organization? No one
knows what they are doing and there is no competent leadership to set high standards and
people are making their own bullshit up? Yeah? That's my current workplace.3
- The overuse of something is designed to demoralize and discourage the very thing.
Vapid christmas jingles, meaningless consumerism, blackfriday shopping, keeping up with the joneses, decorations and tinsel bullshit so overloaded on homes and trees that it looks like a gaudy airstrip display, holiday-town-esque themes and festivals so frequent, overcooked and overcommercialized that its like you've stepped into a 40-year-old sterile suburban house-wives braindrain internal fantasy reruns of regurgitated hallmark christmas romance movies.
Alls fair love and christmas.
In other news, some strapping young and intrepid adventurer *lit a public christmas tree on fire*.
Its a shame really, when we can't just enjoy the simple things without some dickhead going and spoiling it. But also I can't help but ask
"ARE YOU NOT ENTERTAINED?!"13
-
- Woke up, worked out, went back to bed. (?? Yeah I'm surprised too) Slept for an hour, woke up again, worked tirelessly and finished the slides. (Not as easy as you think. Had to drag out and undust a few jupyter notebooks again, plus realized that the stupid past me has deleted a bunch of notebooks because of lack of space, and I had to remake one again.)
Now I have to figure out why google slides doesn't like to play my videos, and write my script (don't give me the "don't practice too much" bullshit or "don't need a script". That's for losers. You gotta practice enough that you can cite your presentation even if you got a concussion in the middle of the presentation. Plus, you can modify content in the middle of presentation based on the crowd vibe but you can't do that without knowing your script by heart, can you?) Aaaaaand what was I saying... I forgot... Geez ... Well, wish me luck. This week is gonna be tough. And next week. And probably the week after. Ew.6
-
- Brothers and sisters I have ascended
From my early chilidhood I was taught by my parents & society that I should put effort into doing things that I "MUST", be kind and polite to others
Tis' all bullshit; never lift a finger if you do not feel like it; never help people free of charge; if you dislike a particular undertaking then it is not worth even an ounce of effort.
We live in a society.12
- Previous department director. I loved working with the dude.
He had a no bullshit attitude and would always back up and defend his people, he would tell us that whenever he sticks his neck out for us we better be in the right because he would go full ballistic and did not wanted to make a fool of himself or the department. Dude was fucking amazing.
He was happy when I accepted the promotion but told me that he wanted me to shadow him to learn more about proper management techniques. It was a clear mentor trainee relationship, but he had 100% full trust in my ability and knowledge.
He retired about a year ago, got a new director, dude ain't thaaaat bad but he has a lot of cons, as a person I like the new boss, as a boss I am not convinced entirely since he has not been around for long, but it does feel that he does not listen, goes in one ear and out through the other kind of person.
-
- Working on another SaaS product, and now I've run into a "fun" conundrum that is hard to determine cleanly in an automated fashion.
I'm certain it's stupid bullshit opinionated conventions like this as to why so many devs are driven to burnout and bitterness...3
- "Hey can you make this excel report for me real quick? Here are the columns, you gotta get them from this table in the database. Shouldn't take long."
Alright, sounds easy enough wait where is the data. I have to join how many tables? What is this bullshit data? I want to strangle the guy who modeled this piece of garbage.5
-
- So with the advent of Docker Desktop going premium we thought we'd buy a couple licenses... What did the HR team say?
"No, you're fine - we can just keep using it - how will they know?"
WHAT??!!! I will NOT be the one who gets brought into a multi-million dollar lawsuit because HR are a bunch of nitwits. I will fight this with everything I have so that when ouch time comes, i can say i didnt participate in the shady bullshit these people are recommending.17
-
- I think I've reached some kind of job nirvana. My coworkers and I all complain about our work. We're overworked, underappreciated, underpaid, and and have to deal with all sorts of bullshit all the time. Pretty much everyone who has been on the team longer than a year is talking about quitting.
But I started at this company as a level 1 tech support phone technician before I transferred into the DevOps side of things, and that tech support job was SO much worse. Way more stressful, way less pay, mandatory overtime, horrible scheduling, being forced to remain calm while people hurl insults at you over the phone, and it was a dead-end job with a high turnover rate and almost no opportunities for advancement of any kind.
And every time I think back on that job, I realize that what I have now is actually pretty great. I'm paid well (still underpaid for the job I do, but catching up really fast due to my current boss giving me several big raises to keep me from quitting lol). I deal only with other tech people like developers and data scientists so no more listening to salesmen insult me on the phone. I'm not in any sort of customer service role so I can call people on their bullshit as long as I'm professional about it. I'm salaried so they can't make me work horrible shifts. 99% of my days are a normal 9-5 workday. I actually have a reliable schedule to plan around.
People treat me like the adult that I am.
I'd get a similar experience at other, better-paying companies, for sure, but what I have now is still pretty great.
I'm sure I'll be back in a few days to rant about more nonsensical bullshit and stress, but for now I'm feeling the zen.
-22
- Imagine seeing words like developer:ess, member:esses or user:ess in artices on the web becoming more and more popular.
Pretty dumb, yes?
That’s what happening right now with the German language with something called gender-language.
It hurts my eyes reading Entwickler:innen, Mitglieder:innen and Benutzer:innen.
People argue that words like Entwickler are excluding woman by using the male form by default. But it’s just a matter of perspective. Why not just define this as the neutral form just like in english? Developer is neither male nor female. Everybody is fine with that.
Yet the Germans are messing around with this gender shit and making text unreadable for no reason at all.
It’s just bullshit!21
- ..
- Well here it goes,
I started out in customer support (A lot of stuff to tell here).
1.
One of my colleagues would come to work drunk, like every day he would smell of boze (the hard stuff 80%+). When a customer got on his nerves he endet the call and threw his Keyboard across the room. He worked in the company 3+ Years after I left.
2. Another colleague would connect to his Personal Computer at his home and play WoW while at work ( Allthough the man was a genius with a lot of free time, until a new task was assigned to him)
3. My Boss at the time did some really shitty things. I worked 17 hour days (while I was 18) for a week, and at the end of the week he shredded the accrued overtime with some Bullshit Explanation. (I did not stay long after this shitshow happened).
4. A dispatcher who sent our technicians out scheduled their tasks so that they were on the road for weeks and did not see their families. This led to a very strong turnover among technicians.
And yes, this company still operates today.1
- How to manage when you start something good for you, start taking decisions for your good and people start spreading hate about you. It obviously will effects your mental health right?
How you guys manage it? I mean how?
Today I'm feeling of getting bullied and getting bullied again from the same person. I'm correct but can't show the correctness just because there's no proof I've in-hand.
I'm literally tired of people now
- Just keep getting the dumbest tickets from a client as a Frontend dev.
I told them I am a backend and even my contract says backend but I made the mistake to help them with some themes.
So fucking ready to take other interviews where I don't have to deal with bullshit colors and fonts anymore.2
-
- Worst was getting head hunted into my current role at this terrific company.
Three months later I’m done with it.
It’s not shit shitty codebase, or the lack of direction that self governing teams have. It’s not the megalomaniac company owner. It’s the bullshit team mobbing and 8 hours of video calls a day.
The best part.
Come he’ll or high water I’m getting myself out before the end of the year.
I’d rather be busy and have f’k all chance of promotion than any more of this. At least the day will fly by.
Just hope I don’t make the same mistake twice, that’s become my biggest worry now.
-
- When the team lead has no idea what the problem is but took too many public speaking courses so he’s really well-versed at feeding people bullshit…1
- 🍻
- Retarded WSL crashed which means I lost all my fucking logs
How do people even work on this retarded OS? Is it just a scam for parasites to pretend they work and leeche on society using their precious little intellectual property bullshit12
-
-
- Reminder: if you were tasked with breaking down a work item/story, and your breakdown involved so much incorrect, outdated, and downright incomprehensible gibberish that, when you were approached by another dev, you had to rewrite the whole thing -- after rewriting it into a form that includes almost none of the original and still contains errors and omissions, you do not get to announce to everyone that you were 'helping' said dev to 'understand'. If you do this you are not some machevellian linguistic genius, you are just an asshole who is going to get found out for your bullshit sooner or later.7
- Play Store's $25 registration fee - for getting PWA listed in their shitty catalogue? Who in the right mind would even jump in this clusterfuck of store to find a *web* app? For all you know, Google, there is such thing as QR codes - and customers can just scan the code (or type in that sweet address). Voila! Boom!!! Ching-ching!
Hello-hello, monopolistic cashgrabage! I came to inform you that your TWA bullshit is unneeded in ETHICAL space. The only ones who would benefit from this thing are permission-hungry publishers. And I'm already sick of this culture where people are put into store bubbles. You can't hide the fact that this data and features you provide, with "native" layer, may be misused in a jiffy - and by big players, no less. Of course, as a vile dumpster that you are, you don't mind it.
Don't even bring up a battery consumption that comes with PWA and browser. This doesn't matter if you use an app for some 2 minutes to tick your mental checkboxes! I'm just sick of app stores and native apps that collect the data without normal warning, and dare to take more than 1 second to fucking load the cached data. Take a lesson or two from PWAs that collect (probably useful) cache, instead of my specs, and load almost instantly.11
- FUCK YOU PHP, FUCK YOU SYMFONY AND DEFINITELY FUCK YOU SHOPWARE.
Don't get me wrong, PHP has evolved a lot, but the stuff people are building with it is just the biggest load of fucking shit I have ever seen: Shopware. Shopware is the most ass-sucking abomination to extend. It's nearly impossible to develop anything beyond "use the standard features and shut the fuck up" that is more sophisticated than a fucking calculator.
The architecture of this pile of crap is the worst bullshit ever. A mix of OOP, randomly making use of non OOP concepts and features together with the unnecessarily HUGE amount of useless interfaces and classes. Sometimes I feel like it's 90% fucking shitty boilerplate shit.
And don't get me started with TWIG. It's a nice thought, but WHY THE BLOODY FUCK WOULD YOU NOT USE VUE IF YOU ARE ALREADY USING IT FOR A DIFFERENT PART OF SHOPWARE. This makes no fucking sense whatsoever and makes development of new features a huge pain in the ass. I can't comprehend how people actually like using this shit.
OH AND THE DATABASE. OH MY FUCKING GOD. This one is bad. Ever tried to figure anything out in a database where random strings (yes MySQL "relational" - you might think) that are stored as text in a JSON format make up some object or relations during runtime?? Why the fuck do you have foreign and primary keys if you don't use them properly??
Seriously you can't even figure out which data belongs to what because the architecture just sucks fucking ass. FUCK YOU Shopware wankers, you suck, your product sucks, your support sucks, your architecture sucks and you keep releasing new versions that regularly break shit even in minor versions.
I used to like PHP, but not in projects like these
- The install will take about one more minute…
*go make a cup of tea, pack for holiday, go on holiday, return from holiday*
Ah still installing for one more minute my old friend
- Wow i left this platform for almost a year because you guys were to right wing political and after 2 minutes of reading again i see some right wing conservative bullshit. You should just solve in reddit. I deinstall now.14
- My biggest challenge is not telling the people who wrote code I get to maintain that it is a big pile of shit. My fear is I will forget I wrote said code and proceed to complain about said code. Then someone will point it out that I wrote said code. So it is kind of a self preservation strategy.
Also, in meetings, when my boss calls something a "piece of software", I have to refrain from giggling.3
-
- what is your "dev sitting in a dumbfuck meeting being forced to waste their time listening to bullshit" spirit animal? mine is katsuki bakug
- Thanks google for creating the illusion of an option to change the shipping address for a repair order. You even mention the new address in your notification email, but when I click on UPS tracking, I can see that you sent the shipment to the old address, which is in a different city where I can't quickly go to pick up my repaired phone. After charging an extra 95,- Euros for additional damage supposedly not covered by my warranty. Lucky you that my old phone had connection problems with the shitty Vodafone station wi-fi router, which is one of the few reasons that I still even want to use a google hardware product. Thanks google for just being slightly less wretched and mediocre than your competitors, that might grant you some more years before you will be buried in history forever. Pixel phones are just like Macbooks: high quality product and good marketing, good enough to make your customer accept everything else being bullshit. Google search is even worse, but based on the same concept: just suck a little less than your competitors but don't waste any effort trying to actually be really good at anything
- Ok so I have
- a legal structure (or several of them actually)
- infrastructure
- website design is being made
- logo and visual identity
- one client and a pet project for portefolio
What I need:
- clients
How I do that:
- Buying leads and paying a dude to call them
- Paying a CM
- Networking (I guess)
- Printing posters and putting them all over Paris, LDN, Barcelona and Berlin.
Am I doing this right? I really don' wanna take another bullshit job just to pay the bills. I wanna go back to doing cool shit for radio stations and restaurants and stuff. Do the UX, get a design, create them a remix + headless strapi plus some random shit if needed. Get paid. Rince. Repeat3
-
- Any advice on how to find proper customer as a freelancer? Should I go on fiverrr and pay for coldcalling and an assistant?
Because honestly I'm sick of corporations employing me (my company) for the sake of not paying taxes but still expecting a 9 to 5 and all the corporate bullshit.
I just want to get customer, do UX, pay a designer, get figmas, implement, invoice, repeat. Not have 3 hours spring grooming calls stuck between a team meeting with management, a demo and a mid-spring alignment review. Is that too much to ask?7
- This jobhunting with recruiters is such bullshit. Haha. I'm gonna troll my way away from them fuckwits.1
-
- Getting super demotivated looking at job postings on both indeed and glassdoor - they all seem like the same generic bullshit of maintaining some website... does anyone have suggestions of how to find companies that are building exciting products that aren't dinosaurs?8
- MySQL Innodb easily get crashed, bullshit, I just restarted my server now all databases get corrupted. F*ck you OVH3
-
- I think you already know by now, but I have to say it. The update of the discord app is utter shit, brought only downgrades to me and they still refuse to fix bug that have been prevalent on their platform for years to force their shiny, new, untested bullshit down your throat
- you know they call me 'good' which in their speech is an insult. do they ever pay fucking attention or can they not conceive of what a real human is like ? especially one that is conflicted on various levels as a result of abuses, loneliness, depression, survival interests, years of bullshit, etc ? I'm not immune to temptation or corruption, I'm just extremely resistant.
-
-
- I implore ANYONE... please...
Have you EVER written a SINGLE Jest test that didn't have some sort of bullshit spewing stuff like this:
"ReferenceError: You are trying to `import` a file after the Jest environment has been torn down."
"Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: object. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports."
and yet running on a device, features work flawlessly and quite well, no errors or even warnings in sight logged
This is the most fragile pile of garbage I have ever seen.
I hate this.
inb4 your stupid ass todo boilerplate garbage you wrote tests for in freshman year. i'm talking about a REAL app with HUNDREDS of components.
where the grownup testing tools at? it's a question I've still not answered after a year of fucking around with this framework2
-
- I swear to god, getting Chumsky to do my bidding has almost taken longer than writing a parser by hand. I'm not looking for operator precedence, I'm not looking for complicated rules or anything, the main part of my language is literally just S-expressions, with some top level bells and whistles.
I don't even have a working lexer yet because I wanted to use this piece of shit library which usually matches the fewest possible characters to parse significant newlines but the Padded combinator takes as much whitespace at the end as it can find, and a host of other atomics don't actually adhere to the library's lazy principle in their procedural implementation. I've had enough. I'm going to bed, and tomorrow I'm writing tickets.
Actually, I'll probably also write PRs because I actually want the fixes to exist and not just complain about the problems, but I also really want to complain before I get started on that because I spent about two weeks just on this bullshit.3
- I have to participate in this retarded conference for 2 days and then I will have to join this fucking summer gathering on my weekend and that will take whole day. Fuck this fucking corporate bullshit. Better give me a fucking raise or better yet start fucking managing this scrum team because half of devs are not pulling their fucking weight.
Fucking BA too lazy to update issues with new details after grooming so each time I pick a new task I either have to somehow remember what we discussed weeks ago or I have to spam you with questions so you would run around like chicken without head while gathering answers to questions that were already discussed because you are too lazy of a fuck to compile notes. And even that is not enough, my merged MR's apparently dont cover all the use cases because your'e too incompetent to even figure out how our app works and define properly the task.
And then theres supposedly a techlead dev whos not taking a ticket when theres 3 days left till end of the sprint and he goes: "But a task spillover will happen!!!". Yeah so I guess just sit on your ass and wait for new sprint so you could pick yet again another low hanging fruit task and marinate it for weeks.
Motherfucker I checked your MR's in the last 6 weeks you did 1 week worth of work. You are a techlead but your only dev colleague is asking us for help daily because you dont even help him Fucking lazy and incompetent bastard.
- SAFe PI objective "business value" estimates are complete and utter bullshit. Every objective is a 10? Let's work on all of them in parallel! Fucking genius!2
- anyone made something that sits there and tells youtube to filter out content from other history lists and or certain subject material to allow something not inundated by their 'it' bullshit to shine through ?
also, this time, how about you all take pictures of seeming parents so their children's locations are accounted for at all times and the child traffickers working on camera here can get caught ?
they wouldn't be able to say kill their captives if they absence of the child was noted.
Top Tags | https://devrant.com/search?term=bullshit | CC-MAIN-2022-40 | refinedweb | 8,440 | 70.13 |
For a long time I resisted the QML wave. I had good reasons for doing so at the time. Essentially, compared to Python, there was not much desktop functionality that you could access without writing C++ code to wrap existing libraries and expose them to QML. I liked the idea of writing apps in javascript, but I really did not relish going back to writing C++ code. It seemed like a significant regression. C++ brings a weird set of bugs around memory management and rogue pointers. While manageable this type of coding is just not fun and easy.
However, things change, and so did QML. Now, I am convinced and am diving into QML.
- The base QML libraries have pretty much everything I need to write the kinds of apps that I want to write.
- The QtCreator IDE is "just right". It has an editor with syntax highlighting and an integrated debugger (90% of what people are looking for when they ask for an IDE) and it has an integrated build/run system.
- There are some nice re-factoring features thrown in, that make it easier to be pragmatic about good design as you are coding. I also like the automatic formatting features.
- The QML Documentation is not quite complete, but it is systematic. I am looking forward to more samples, though, that's for sure.
In my first few experiences with QML, I was a tiny bit thrown by the "declarative" nature of QML. However, after a while, I found that my normal Object Oriented thought processes transferred quite well. The rest of this post is about how I think about coding up classes and objects in QML.
In Python, C++, and most other languages that support OO, classes inherit from other classes. JavaScript is a bit different, objects inherit from objects. While QML is really more like javascript in this way, it's easy for me to think about creating classes instead.
I will use some code from a game that I am writing as an easy example. I have written games in Python using pygame, and it turned out that a lot of the structure of those programs worked well in QML. For example, having a base class to manage essential sprite behavior, then a sub class for the "guy" that the player controls, and various subclasses for enemies and powerups.
For me, what I call a QML "baseclass" (which is just a component, like everything else in QML) has the following parts:
- A section of Imports - This is a typical list of libraries that you want to use in yor code.
- A definition of it's "isa"/superclass/containing component - Every class is really a component, and a compnent is defined by declaring it, and nesting all of it's data and behaviors in curly brackets.
- Paramaterizable properties - QML does not have contructors. If you want to paraterize an object (that is configure it at run time) you do this by setting properties.
- Internal compotents - These are essentially private properties used within the component.
- Methods - These are methods that are used within the component, but are also callable from outside the component. Javascript does, actually, have syntax for supporting private methods, but I'll gloss over that for now.
In my CharacterSprite baseclass his looks like:
Imports
import QtQuick 2.0 import QtQuick.Particles 2.0 import QtMultimedia 5.0
Rectangle is a primative type in QML. It manages presentation on the QML surface. All the code except the imports exists within the curly braces for Rectangle.
Paramaterizable Properties
property int currentSprite: 0; property int moveDistance: 10 property string spritePrefix: ""; property string dieSoundSource: ""; property string explodeParticleSource: ""; property bool dead: false; property var killCallBack: null;
Internal Components
For readability, I removed the specifics.
Repeater { } Audio { } ParticleSystem { ImageParticle { } Emitter { } }
Methods
With implementation removed for readability.
function init() { //do some default behavior at start up } function takeTurn(target) { //move toward the target } function kill() { //hide self //do explosion effect //run a callback if it has been set }
CharacterSprite { spritePrefix: ""; dieSoundSource: "zombiedie.wav" explodeParticleSource: "droplet.png" Behavior on x { SmoothedAnimation{ velocity:20}} Behavior on y { SmoothedAnimation{ velocity:20}} height: 20 width: 20 }
I can similarly make a GuySprite for the sprite that the player controls. Note that
I added a function to Guy.qml becaues the guy can teleport, but other sprites can't.
I can call the kill() function in the collideWithZombie() function because it was inherited from the CharacterSprite baseclass. I could choose to override kill() instead by simply redefining it here.
CharacterSprite { id: guy Behavior on x { id: xbehavoir; SmoothedAnimation{ velocity:30}} Behavior on y { id: ybehavoir; SmoothedAnimation{ velocity:30}} spritePrefix: "guy"; dieSoundSource: "zombiedie.wav" explodeParticleSource: "droplet.png" moveDistance: 15 height: 20; width: 20; function teleportTo(x,y) { xbehavoir.enabled = false; ybehavoir.enabled = false; guy.visible = false; guy.x = x; guy.y = y; xbehavoir.enabled = true; ybehavoir.enabled = true; guy.visible = true; } function collideWithZombie() { kill(); } }
So now I can set up the guy easily in the main qml scene just by setting connecting up some top level properties:
Guy { id: guy; killCallBack: gameOver; x: root.width/2; y: root.height/2; }
Hi, great post! Cleared up a few things for me
Hi , I am facing a issue, where I am not able to see supported Platform as "Mobile". So if I create any Application I see only supported platform as "Desktop". My ubuntu os 12.04 LTS and QT creator 2.7 based on Qt 5.0.1. It will great if you can assists me on this. Thanks.
Really amazing and great post, thanks gor share,
funny pictures
this is very useful information for me
funny pictures
Very stunning and excellent post it's also useful information for us so thanks a lot for sharing us
clipping path service
Really it's an effective post. I learned a lot from here. Thank you so much for sharing.clipping path
clipping path service/Raster To vector | http://theravingrick.blogspot.com/2013/03/how-i-learned-to-love-qml-and.html | CC-MAIN-2018-30 | refinedweb | 989 | 66.23 |
Do you have an object and want to display all of its values at runtime in C#, without having to open specific debugging tools? In this article, I am going to explain the ways to be able to easily print out or display the values of an object along with all its nested objects.
Dumping objects is a great way to visualize values and types while enabling you to easily debug and detect problems at runtime. This is particularly important whenever you don’t have access to specific debugging tools.
If you are a PHP developer or at least know some PHP, you might already be familiar with a very simple commonly used function that prints out (or dumps) the full details of variables (or objects), including the value, the datatype and the length for string types, this function is the var_dump($someVariable).
string
var_dump($someVariable)
I don’t want to dig deep into PHP now, as it is outside the scope of this article, however I just want to quickly mention that due to the nature of PHP being a dynamic language, the dumping is done easily by its built-in function var_dump .
var_dump
In C#, you can still achieve the same result as PHP, but unluckily there is no built-in functionality to do so as PHP. So in C#, you must either use reflection or serialization to be able to dump the variable or object that you have.
Luckily though, there are many ways to do this in C#, and I am going to explain these methods to you in this article.
While there might be some other methods that I am unaware of to achieve a similar result, I will be explaining 3 ways, just to keep it short for you. I will be so happy if you share more ways that you know so all of us can learn from each other.
So as mentioned before, there are numerous ways to dump an object to display all of its details. Let’s get started.
ObjectDumper is a very handy assembly that can be found under Nuget Packages.
ObjectDumper
Once installed, the assembly provides a simple single static method (Dump) that takes a Generic type T, Dump name and a stream writer as parameters.
static
Dump
T
stream
The idea of the object dumper is that it recursively loops on the object’s properties through reflection and collects all data underlying that object.
Check the below code snippet to implement the ObjectDumper:
Item item = new Item
{
Name = "Chocolate",
Number = 12345,
CreatedDate = DateTime.Now,
Price = 36.7M,
Category = new Category(1, "Sweets")
};
using (var writer = new System.IO.StringWriter())
{
ObjectDumper.Dumper.Dump(item, "Object Dumper", writer);
Console.Write(writer.ToString());
}
The writer object will display the following output on the console:
Serializing an object means to transform or convert this object (of some type), to a string representation using a formatter. Now there are many formatters in C# that can be used to serialize objects.
The first and most commonly used types (nowadays) is the json formatter. ( You can still use other formatters like XML formatter, but I think json will be a better option due to its simplicity and better readability.)
To be able to serialize an object using the json serializer, you need to import the Newtonsoft nuget package, and then call the static method SerializeObject from class JsonConvert, passing to it the object you want to serialize as the first argument, and the second argument will be an enum of how to format the json string output, like indentation.
Newtonsoft
SerializeObject
JsonConvert
enum
See the below code snippet that represents a Dump function, it takes the object and dumps it to the console output. You can dump your object wherever you like (log, file, etc.)
private static void Dump(object o)
{
string json = JsonConvert.SerializeObject(o, Formatting.Indented);
Console.WriteLine(json);
}
Now that you have the basic concept of serializing the object to a json string and then dump it, why not we improve the above function to let it become an extension method? (Read my blog post about Extension Methods in .NET.
Having such function as an extension method on the project’s level comes in handy whenever you want to debug or visualize the details of your objects at runtime by just calling the dump method through the object’s reference.
dump
The below code is the same dump function mentioned previously, with a twist of an extension method:
static class ObjectHelper
{
public static void Dump<T>(this T x)
{
string json = JsonConvert.SerializeObject(x, Formatting.Indented);
Console.WriteLine(json);
}
}
Then you can simply call the method dump on our example’s item object (just make sure to import the namespace of the ObjectHelper in case you defined it under a different namespace).
item
namespace
ObjectHelper
item.Dump();
The output of the dump method that uses the json serializer is shown as below:
style="height: 167px; width: 433px" data-src="/KB/cs/1194980/json-dump.png" data-sizes="auto" data->
YAML stands for Ain’t Markup Language, according to yaml.org:
YAML is a human-friendly data serialization standard for all programming languages.
YAML is a human-friendly data serialization standard for all programming languages.
And in our context, YAML can also serve our need pretty well, in .NET, you can install the YamlDotNet nuget package, convert the object to a YAML format and then do the needed dumping.
YamlDotNet
The below function can be used to dump some object in YAML format:
private static void DumpAsYaml(object o)
{
var stringBuilder = new StringBuilder();
var serializer = new Serializer();
serializer.Serialize(new IndentedTextWriter(new StringWriter(stringBuilder)), o);
Console.WriteLine(stringBuilder);
}
Calling the above DumpAsYaml function on our item object will result in displaying the object’s details in YAML format, as shown in the below console output:
DumpAsYaml
It is very useful to be able to visualize your objects or collections at runtime, without having the need to open a particular debugging tool, like the debugger of Visual Studio.
In this article, I explained 3 ways to be able to dump an object at runtime so you can visualize the object values.
Let me know if this article was clear enough to explain this topic, and if you know more ways to dump objects, feel free to share them. | https://www.codeproject.com/Articles/1194980/How-to-Dump-Object-for-Debugging-Purposes-in-Cshar | CC-MAIN-2021-39 | refinedweb | 1,055 | 57.4 |
Request Content Checksums¶
Various pieces of code can consume the request data and preprocess it. For instance JSON data ends up on the request object already read and processed, form data ends up there as well but goes through a different code path. This seems inconvenient when you want to calculate the checksum of the incoming request data. This is necessary sometimes for some APIs.
Fortunately this is however very simple to change by wrapping the input stream.
The following example calculates the SHA1 checksum of the incoming data as it gets read and stores it in the WSGI environment:
import hashlib class ChecksumCalcStream(object): def __init__(self, stream): self._stream = stream self._hash = hashlib.sha1() def read(self, bytes): rv = self._stream.read(bytes) self._hash.update(rv) return rv def readline(self, size_hint): rv = self._stream.readline(size_hint) self._hash.update(rv) return rv def generate_checksum(request): env = request.environ stream = ChecksumCalcStream(env['wsgi.input']) env['wsgi.input'] = stream return stream._hash
To use this, all you need to do is to hook the calculating stream in
before the request starts consuming data. (Eg: be careful accessing
request.form or anything of that nature.
before_request_handlers
for instance should be careful not to access it).
Example usage:
@app.route('/special-api', methods=['POST']) def special_api(): hash = generate_checksum(request) # Accessing this parses the input stream files = request.files # At this point the hash is fully constructed. checksum = hash.hexdigest() return 'Hash was: %s' % checksum | http://flask.readthedocs.io/en/latest/patterns/requestchecksum/ | CC-MAIN-2017-39 | refinedweb | 244 | 52.36 |
0
Hi All,
Im attempting to make a simple java menu system. the psueo code is as follows
1. user is presented with menu
2. selection is made
3. that action is ran, i.e make a new user (i have this code seperate, which works)
4. the user is returned to the main menu.
what is actually happening is it keeps ending. I tried using a do while, but i cant seem to get my head around it.
/* * To change this template, choose Tools | Templates * and open the template in the editor. */ package week7; //import java.io.*; //import java.io.IOException; import java.util.Scanner; /** * * @author Administrator */ public class whileDoTest { @SuppressWarnings("empty-statement") public static void main (String[]arguments) { int menuSelect = 0; if (menuSelect ==0){ System.out.println("Please make your selection"); System.out.println(""); System.out.println("(1) - New book : (2) - New user : (3) - New Load"); Scanner scan = new Scanner(System.in); menuSelect = scan.nextInt(); }else{ switch (menuSelect) { case 1: if(menuSelect == 1) { System.out.println("Selection 1"); System.out.println("(1) - Main Menu"); break; }menuSelect = 0; case 2: if(menuSelect ==2) { System.out.println("Selection 2"); break; } case 3: if (menuSelect ==3){ System.out.println("Selection 3"); break; } } } } } | https://www.daniweb.com/programming/software-development/threads/235383/simple-non-gui-menu | CC-MAIN-2017-43 | refinedweb | 201 | 54.9 |
Journal tools |
Personal search form |
My account |
Bookmark |
Search:
...
adderall addictions
ambien dependency
the godfather love theme mp3
experience skiing vail
expression well window
...
florida assemblies of god
ama drug
106.7 miami romance
draw winnie the pooh
libertyville high school libertyville il
shamrock rovers
accounting call software
supply ...rental properties
las vegas tv episode 1 song
how to care for a kitten 3...
...(1959) [x] Snow White and the Seven Dwarfs (1937) god I hated her voice [x] Song of the South (1946) Disney's Dark Age ----------------- [x] The Aristocats (1970) [x] The...and the Hound (1981) [] The Great Mouse Detective (1986) [x] The Jungle Book (1967) [] The Many Adventures of Winnie the Pooh (1977) [x] Oliver and Company (1986) [x] Pete's Dragon (1977) [x] Rescuers, The (1977) I like the...
...
to clothing
small kitchen appliance mixer
song cats in the cradle
marjaree mason ... dead law money order
kindergarten winter theme activity
asian development bank philippine
merchants...gray information parrot pet
american idol theme
dog grooming schools quebec
difference between...lebanese women beautiful
christopher robbins winnie the pooh
richard burton 1977 tv role...
...
http genxtvland.simplenet.com school house rock song.hts hitunpack
low overhead antivirus
lafite rothschild ... school basketball results
short story of winnie the pooh
pcpoints.caescapewinter
import model search 2005
autohelm
big cat breeder
direct...car restoration texas
normandy beaches france
anime song theme
long haired orange calico enouricelh
fuxgolletoroz...
lapdance song
seville sts 1999
12 christmas day k relient
argatu ... spain
converted church sale
strawberry shortcake theme song
privacy defender v3.0 ...
conair hot brush
dog birthday theme
mental health insurance providers
1997 ... potted
korean tv programs
small song theme wonder
1998 dodge stratus ... marion county fair
bear history pooh winnie
bmw x5 2002 4.4...
... cloud fl real estate
romantic love song online
godzilla 2000 pics
lcd ... down lyrics
edward bear winnie the pooh
common sense product
i ... customer relations
16 daughter father song sweet
sylvia plath death of
... of msn messenger
casualty tv theme
health care union
canada general ...
onondaga community college syracuse
joker theme song
day earth kid project
...
... larry download demo
stonefire grill restaurant
the incredibles theme download
savannah bed and breakfast georgia
er tech ... 7.0
sonic foundry mp3 plug in activation code
poetry competitions - australian
winnie the pooh music downloads
bed and breakfast in ohio...movies download
older women with tattoos
anime opening song download
two roads diverge in a yellow
yandas...
...cell nokia phone ringtone
avanti evening gown
display pic pooh winnie
lacross field
granby system wrestling
baking store new york
hutchinson mn real estate
propylene ... web
exchange link video
minnesota renaissance initiative
plunge router edge guide
lyrics golden girls theme song
architect chief download full
john kohler paramount
...
...
ewan mcgregor film nick leeson
hole theme
jackson state university urban
how to ... trip long island
jackson lyric millie song
couros educationaltechnology.ca
greater london house ...hole institution oceanographic wood
michigan fight song sheet music
dog printables
silver jewellry ...manteca unified school district california
winnie the pooh games.com
manchester 24 hour...
... walls dcps
sheet metal duct nj
autumn theme weddings
dracut high school dracut ma
alexander...activation
butcher shop steakhouse orlando
rave master theme song lyrics
new york daily news photo...university
colorado tax liens
waxing poetic means
identify a phone number
winnie the pooh shops
texas motorcycle license requirements
chamonix apartment for rent
short ...
Download My Friends Tigger And Pooh Theme Song
Free Download Theme Song My Friends Tigger And Pooh
Mp3 Theme Song My Friends Tigger And Pooh
My Friends Tigger And Pooh Theme Song Mp3
Just Like We Dreamed It Mp3
Animation Winnie The Pooh
Baby Bedding
Winnie The Pooh Theme Song Lyric
Cartoons Winnie The Pooh
Download Winnie The Pooh Theme Song
Disney
Lyric To Winnie The Pooh Theme
Eeyore
Winnie The Pooh Theme
Mickey Mouse
The New Adventure Of Winnie The Pooh Theme
Peter Rabbit
Winnie The Pooh Song
Piglet
Bear Pooh Sing Song Tigger Winnie
Result Page:
1
2
3
4
5
6
7
8
9
for Winnie The Pooh Theme Song | http://www.ljseek.com/Winnie-The-Pooh-Theme-Song_s4Zp1.html | crawl-002 | refinedweb | 671 | 55.34 |
On Wed, Mar 18, 2009 at 04:57:16PM -0700, Adam Dershowitz, Ph.D., P.E. wrote: > > was trying to compile ffmpeg 0.5 on the Blackfin, with > >>>> uclinux. I > >>>> found a bug and then a fix. > >>>> I am using --enable-swscale and when it tries to link, I get a link > >>>> error: > >>>> libswscale.so undefined reference to > >>>> '_sws_ff_bfin_yuv2rgb_get_func_ptr' > >>>> Sure enough this function is used in libswscale/yuv2rgb.c: > >>>> > >>>> lines 461: > >>>> > >>>> #if ARCH_BFIN > >>>> if (c->flags & SWS_CPU_CAPS_BFIN) > >>>> t = sws_ff_bfin_yuv2rgb_get_func_ptr(c); > >>>> #endif > >>>> > >>>> But I greped through the code and I can't find that function > >>>> defined > >>>> anywhere. In fact, I only see it that single time in the code. > >>>> > >>>> I asked around some and was told that: > >>>> ff_bfin_yuv2rgb_get_func_ptr in yuv2rgb_bfin.c is the correct > >>>> function. > >>>> > >>>> So one of these needs to be changed so that they have the same > >>>> name. > >>>> I am not sure if this is the best place to report this bug and fix. > >>>> If not, please let me know where might be a better? I think it's the tracker that is having issues. > >> Given the above error it looks like no one has actually tested the > >> blackfin specific code. I had been running a much older version of > >> ffmpeg from blackfin.uclinux.com that does run. > > > > Marc Hoffman, the author of the blackfin code in FFmpeg, used to work > > for Analog Devices, I'm pretty sure the code must have worked at some > > point on some machine. > > I have also been talking to some people at blackfin.uclinux.org. That > site has a few people from AD who post (maybe support the site?). I > have not seen any posting from Hoffman in a while, so might not be > working on this any longer. Marc no longer works for AD. I know this for sure. > But the current release of Linux on that site includes ffmpeg- > svn-11114 and that version does compile and run. Can you try to find the revision that broke? Diego | http://ffmpeg.org/pipermail/ffmpeg-devel/2009-March/073496.html | CC-MAIN-2014-42 | refinedweb | 327 | 84.37 |
pipe, pipe2 - create pipe
Current Version:
Linux Kernel - 3.80
Synopsis
#include <unistd.h> int pipe(int pipefd[2]); #define _GNU_SOURCE /* See feature_test_macros(7) */ #include <fcntl.h> /* Obtain O_* constant definitions */ #include <unistd.h> int pipe2(int pipefd[2], int flags);.
Return Value
pipe2() was added to Linux in version 2.6.27; glibc support is available starting with version 2.9.
Conforming To
pipe(): POSIX.1-2001.
pipe2() is Linux-specific..
Program source
); } }
Colophon
This page is part of release 3.80 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
License & Copyright
(A few fragments remain from an earlier (1992) version by Drew Eckhardt .) %%-23 by Rik Faith Modified 1996-10-22 by Eric S. Raymond Modified 2004-06-17 by Michael Kerrisk Modified 2005, mtk: added an example program Modified 2008-01-09, mtk: rewrote DESCRIPTION; minor additions to EXAMPLE text. 2008-10-10, mtk: add description of pipe2() | https://community.spiceworks.com/linux/man/2/pipe2 | CC-MAIN-2018-34 | refinedweb | 169 | 70.8 |
Newsletter
Java Training
Performance Consulting
CustomerHub
Newsletter Archive
.
distribution within the buckets.
[Issue 234] Random Code Failures - Race Conditions Between Our Code and the JVM.
[Issue 233] Intersection Types to give Lambdas Multiple Personalities.
[Issue 232] ByteWatcher from JCrete.
[Issue 231] Why Crete?
In this newsletter, Heinz answers the question that he gets asked most: "Why Crete?" "Because I can" could be one answer, but the reality is a bit deeper than that.
[Issue 230] String Substring
Java 7 quietly changed the structure of String. Instead of an offset and a count, the String now only contained a char[]. This had some harmful effects for those expecting substring() would always share the underlying char[].
[Issue 229] Cleaning ThreadLocals.
[Issue 228] Extracting Real Task from FutureTask
ExecutorService allows us to submit either Callable or Runnable. Internally, this is converted to a FutureTask, without the possibility of extracting the original task. In this newsletter we look at how we can dig out the information using reflection.
[Issue 227] How Can I Become a Champion Programmer?
In this newsletter, Heinz talks about some characteristics that are useful if you want to become a successful champion Java programmer.
[Issue 226] Discovering Where Threads Are Being Constructed
How can you discover all the places in your program where threads are being constructed? In this newsletter we create our own little SecurityManager to keep an eye on thread creation.
[Issue 225] Hiding Interface Methods
Whenever a class implements an interface, all the implemented methods have to be public. In this newsletter we look at a trick that we can use to make them private.
[Issue 224] Book Review: Mastering Lambdas: Java Programming in a Multicore World
In his latest book, Maurice Naftalin takes us on a journey of discovery as we learn with him how Lambdas and Streams work in Java 8.
[Issue 223] ManagedBlocker
Blocking methods should not be called from within parallel streams in Java 8, otherwise the shared threads in the common ForkJoinPool will become inactive. In this newsletter we look at a technique to maintain a certain liveliness in the pool, even when some threads are blocked.
[Issue 222] Identity Crisis.
[Issue 221] Throwing Checked Exceptions From Lambdas
Lambdas in Java took a long time in coming, due to the considerable engineering effort put into incorporating them into the Java Language. Unfortunately checked exceptions are not managed as seamlessly as they should.
().
[Issue 219] Recent File List
The LinkedHashMap has a little known feature that can return elements in least recently accessed order, rather than insertion order. In this newsletter we use this to construct a "Recently Used File" list.
[Issue 218] Thread Confinement.
.
[Issue 214] CountingCompletionService
CompletionService queues finished tasks, making it easier to retrieve Futures in order of completion. But it lacks some basic functionality, such as a count of how many tasks have been submitted.
[Issue 213] Livelocks from wait/notify
When a thread is interrupted, we need to be careful to not create a livelock in our code by re-interrupting without returning from the method.
[Issue 212] Creating Sets from Maps
Maps and Sets in Java have some similarities. In this newsletter we show a nice little trick for converting a map class into a set.
[Issue 211] Unicode Redux (2 of 2)
We continue our discussion on Unicode by looking at how we can compare text that uses diacritical marks or special characters such as the German Umlaut.
[Issue 210] Calling Methods from a Constructor
In this newsletter we investigate what can go wrong when we call methods from constructors, showing examples from the JDK, Glassfish, Spring Framework and some other well known frameworks..
[Issue 209] Unicode Redux (1 of 2)
Unicode is the most important computing industry standard for representation and handling of text, no matter which of the world's writing systems is used. This newsletter discusses some selected features of Unicode, and how they might be dealt with in Java.
[Issue 208] Throwing Exceptions from Fields
How can you set a field at point of declaration if its constructor throws a checked exception?
[Issue 207] Final Parameters and Local Variables
The trend of marking parameters and local variables as "final" does not really enhance your code, nor does it make it more secure.
[Issue 206] Striped Executor Service
We present a new type of ExecutorService that allows users to "stripe" their execution in such a way that all tasks belonging to one stripe are executed in-order.
[Issue 205] How to Make Your Own Rules
Rule Based Programming, a declarative programming paradigm, is based on logical patterns to select data and associate it with processing instructions. This is a more indirect method than the sequential execution steps of an imperative programming language.
[Issue 204] Book Review: The Well-Grounded Java Developer.
[Issue 203] GOTO in Java
It is possible to use the break statement to jump out to the end of a labelled scope, resulting in some strange looking code, almost like the GOTO statement in C.
[Issue 202] Distributed Performance Tuning
In this newsletter, it is up to you to figure out how we improved the performance of our previous Fibonacci newsletter by 25%.
[Issue 201] Fork/Join With Fibonacci and Karatsuba.
[Issue 200] On Learning Concurrency
Every Java programmer I have met knows that they should know more about concurrency. But it is a topic that is quite hard to learn. In this newsletter I give some tips on how you can become proficient in concurrency.
[Issue 199] Hacking Java Surreptitiously
Surreptitious: stealthy, furtive, well hidden, covert. In this newsletter we will show two Java puzzles written by Wouter Coekaerts that require a surreptitious solution. You cannot do anything to upset the security manager.
[Issue 198].
[Issue 197] What is the Meaning of Life?
In this newsletter we try to calculate the meaning of life, with surprising results.
[Issue 196] Uncaught AWT Exceptions in Java 7
Java 7 removes the Swing Event Dispatch Thread (EDT) hack that allowed us to specify an uncaught exception handler for the EDT using a system property sun.awt.exception.handler.
[Issue 195] Performance Puzzler With a Stack Trace
In this newsletter, we present a little performance puzzler, written by Kirk Pepperdine. What is happening with this system? There is only one explanation and it can be discovered by just looking at the stack trace.
[Issue 194] trySynchronize
Did you know that it possible to "try" to synchronize a monitor? In this newsletter we demonstrate how this can be used to avoid deadlocks and to keep the wine coming.
[Issue 193] Memory Usage of Maps
In this newsletter we measure the memory requirements of various types of hash maps available in Java. Maps usually need to be threadsafe, but non-blocking is not always the most important requirement.
[Issue 192b] How Does "this" Escape?
A quick follow-up to the previous newsletter, to show how the ThisEscape class is compiled, causing the "this" pointer to leak.
[Issue 192] Implicit Escape of "this"
We should never allow references to our objects to escape before all the final fields have been set, otherwise we can cause race conditions. In this newsletter we explore how this is possible to do.
[Issue 191] Delaying Garbage Collection Costs
The garbage collector can cause our program to slow down at inappropriate times. In this newsletter we look at how we can delay the GC to never run and thus produce reliable results. Then, instead of actually collecting the dead objects, we simply shut down and start again.
[Issue 190b] Automatically Unlocking with Java 7 - Follow-up
In this newsletter we discuss why we unfortunately will not be able to use the try-with-resource mechanism to automatically unlock in Java 7.
[Issue 190] Automatically Unlocking with Java 7
In this newsletter we explore my favourite new Java 7 feature "try-with-resource" and see how we can use this mechanism to automatically unlock Java 5 locks.
[Issue 189] Fun and Games with Java Lego NXT 2.0
[Issue 188] Interlocker - Interleaving Threads
In this newsletter, we explore a question of how to call a method interleaved from two threads. We show the merits of lock-free busy wait, versus explicit locking. We also discuss an "unbreakable hard spin" that can cause the JVM to hang up.
[Issue 187] Cost of Causing Exceptions
Many years ago, when hordes of C++ programmers ventured to the greener pastures of Java, some strange myths were introduced into the language. It was said that a "try" was cheaper than an "if" - when nothing went wrong.
[Issue 186] Iterator Quiz
Most of the northern hemisphere is on holiday, so here is a quick quiz for those poor souls left behind manning the email desk. How can we prevent a ConcurrentModificationException in the iterator?
[Issue 185] Book Review: Java: The Good Parts
In his latest book, Jim Waldo describes several Java features that he believes make Java "good". A nice easy read, and I even learned a few new things from it.
[Issue 184] Deadlocks through Cyclic Dependencies unlikely, it has happened in production.
[Issue 183] Serialization Size of Lists
What has a larger serializable size, ArrayList or LinkedList? In this newsletter we examine what the difference is and also why Vector is a poor candidate for a list in a serializable class.
[Issue 182] Remote Screenshots
In this newsletter, we describe how we can generate remote screen shots as compressed, scaled JPGs to build a more efficient remote control mechanism.
[Issue 181] Generating Static Proxy Classes - 2/2
In this newsletter, we show how the Generator described in our previous issue can be used to create virtual proxy classes statically, that is, by generating code instead of using dynamic proxies.
[Issue 180] Generating Static Proxy Classes - 1/2
In this newsletter, we have a look at how we can create new classes in memory and then inject them into any class loader. This will form the basis of a system to generate virtual proxies statically.
[Issue 179] Escape Analysis
Escape analysis can make your code run 110 times faster - if you are a really really bad programmer to begin with :-) In this newsletter we look at some of the places where escape analysis can potentially help us.
[Issue 178b] WalkingCollection Generics
Generics can be used to further improve the WalkingCollection, shown in our previous newsletter.
[Issue 178] WalkingCollection
We look at how we could internalize the iteration into a collection by introducing a Processor interface that is applied to each element. This allows us to manage concurrency from within the collection.
[Issue 177] Logging Part 3 of 3
After almost nine years of silence, we come back to bring the logging series to an end, looking at best practices and what performance measurements to log.
[Issue 176] The Law of the Xerox Copier
Concurrency is easier when we work with immutable objects. In this newsletter, we define another concurrency law, The Law of the Xerox Copier, which explains how we can work with immutable objects by returning copies from methods that would ordinarily modify the state.
[Issue 175] Creating Objects Without Calling Constructors
De-Serialization creates objects without calling constructors. We can use the same mechanism to create objects at will, without ever calling their constructors.
[Issue 174] Java Memory Puzzle Explained
In this newsletter, we reveal the answer to the puzzle from last month and explain the reasons why the first class sometimes fails and why the second always succeeds. Remember this for your next job interview ...
[Issue 173] Java Memory Puzzle
In this newsletter we show you a puzzle, where a simple request causes memory to be released, that otherwise could not. Solution will be shown in the next newsletter.
[Issue 172] Wonky Dating
The DateFormat produces some seemingly unpredictable results parsing the date 2009-01-28-09:11:12 as "Sun Nov 30 22:07:51 CET 2008". In this newsletter we examine why and also show how DateFormat reacts to concurrent access.
[Issue 171] Throwing ConcurrentModificationException Early.
[Issue 170] Discovering Objects with Non-trivial Finalizers.
[Issue 169] Monitoring Sockets.
[Issue 168] The Delegator
In this newsletter we show the reflection plumbing needed for writing a socket monitor that sniffs all the bytes being sent or received over all the Java sockets. The Delegator is used to invoke corresponding methods through some elegant guesswork.
[Issue 167] Annotation Processing Tool
In this newsletter we answer the question: "How do we force all subclasses to contain a public no-args constructor?" The Annotation Processing Tool allows us to check conditions like this at compile time, rather than only at runtime.
[Issue 166] Serialization Cache
Java's serialization mechanism is optimized for immutable objects. Writing objects without resetting the stream causes a memory leak. Writing a changed object twice results in only the first state being written. However, resetting the stream also loses the optimization stored in the stream.
[Issue 165] Starvation with ReadWriteLocks
In this newsletter we examine what happens when a ReadWriteLock is swamped with too many reader or writer threads. If we are not careful, we can cause starvation of the minority in some versions of Java.
[Issue 164] Why 0x61c88647?.
[Issue 163] Book Review: Effective Java 2nd Edition.
[Issue 162] Exceptions in Java.
[Issue 161] Of Hacking Enums and Modifying "final static" Fields addition, we use a similar technique to modify static final fields, which we need to do if we want the switch statements to still work with our new enums.
[Issue 160] The Law of the Uneaten Lutefisk sometimes escape from such deadlocked situations in Java and learn why the stop() function should never ever ever be called.
[Issue 159] The Law of Sudden Riches
We all expect faster hardware to make our code execute faster and better. In this newsletter we examine why this is not always true. Sometimes the code breaks on faster servers or executes slower than on worse hardware.
[Issue 158] Polymorphism Performance Mysteries Explained.
[Issue 157] Polymorphism Performance Mysteries
Late binding is supposed to be a bottleneck in applications - this was one of the criticisms of early Java. The HotSpot Compiler is however rather good at inlining methods that are being called through polymorphism, provided that we do not have very many implementation subclasses.
[Issue 156] The Law of Cretan Driving to some essential reading for every Java Specialist.
[Issue 155] The Law of the Micromanager.
[Issue 154] ResubmittingScheduledPoolExecutor
Timers in Java have suffered from the typical Command Pattern characteristics. Exceptions could stop the timer altogether and even with the new ScheduledPoolExecutor, a task that fails is cancelled. In this newsletter we explore how we could reschedule periodic tasks automatically.
[Issue 153] Timeout on Console Input
In this newsletter, we look at how we can read from the console input stream, timing out if we do not get a response by some timeout.
[Issue 152] The Law of the Corrupt Politician
Corruption has a habit of creeping into system that do not have adequate controls over their threads. In this law, we look at how we can detect data races and some ideas to avoid and fix them.
[Issue 151] The Law of the Leaked Memo
In this fifth law of concurrency, we look at a deadly law where a field value is written early.
[Issue 150] The Law of the Blind Spot
In this fourth law of concurrency, we look at the problem with visibility of shared variable updates. Quite often, "clever" code that tries to avoid locking in order to remove contention, makes assumptions that may result in serious errors.
[Issue 149] The Law of the Overstocked Haberdashery not doing anything.
[Issue 148] Snappy JSliders in Swing.
[Issue 147] The Law of the Distracted Spearfisherman.
[Issue 146] The Secrets of Concurrency (Part 1)
Learn how to write correct concurrent code by understanding the Secrets of Concurrency. This is the first part of a series of laws that help explain how we should be writing concurrent code in Java.
[Issue 145] TristateCheckBox Revisited
The Tristate Checkbox is widely used to represent an undetermined state of a check box. In this newsletter, we present a new version of this popular control, retrofitted to Java 5 and 6.
[Issue 144] Book Review: Java Puzzlers
Experienced Java programmers will love the Java Puzzlers book by Josh Bloch and Neal Gafter, both well known Java personalities. In this newsletter, we look at two of the puzzles as a teazer for the book.
[Issue 143] Maths Tutor in GWT
Google Web Toolkit (GWT) allows ordinary Java Programmers to produce highly responsive web user interfaces, without needing to become experts in JavaScript. Here we demonstrate a little maths game for practicing your arithmetic. Included is an Easter egg.
[Issue 142] Instrumentation Memory Counter
Memory usage of Java objects has been a mystery for many years. In this newsletter, we use the new instrumentation API to predict more accurately how much memory an object uses. Based on earlier newsletters, but revised for Java 5 and 6.
[Issue 141] Hacking Enums
Enums are implemented as constant flyweights. You cannot construct them. You cannot clone them. You cannot make copies with serialization. But here is a way we can make new ones in Java 5.
[Issue 140] Book Review: Java Generics and Collections
Java Generics and Collections is the "companion book" to The Java Specialists' Newsletter. A well written book that explains generics really nicely, including some difficult concepts. In addition, they cover all the new collection classes up to Java 6 Mustang.
[Issue 139] Mustang ServiceLoader
Mustang introduced a ServiceLoader than can be used to load JDBC drivers (amongst others) simply by including a jar file in your classpath. In this newsletter, we look at how we can use this mechanism to define and load our own services.
[Issue 138] Better SQLExceptions in Java 6
Java 6 has support for JDBC 4, which, amongst other things, gives you better feedback of what went wrong with your database query. In this newsletter we demonstrate how this can be used.
[Issue 137] Creating Loggers DRY-ly how to solve that problem.
[Issue 136] Sneaking in JDBC Drivers length of database queries.
[Issue 135] Are you really Multi-Core? cores are giving you.
[Issue 134] DRY Performance been essential in an effort to improve performance.
[Issue 133] Safely and Quickly Converting EJB3 Collections
When we query the database using EJB3, the Query object returns an untyped collection. In this newsletter we look at several approaches for safely converting this to a typed collection.
[Issue 132] Thread Dump JSP in Java 5.
[Issue 131] Sending Emails from Java
In this newsletter, we show how simple it is to send emails from Java. This should obviously not be used for sending unsolicited emails, but will nevertheless illustrate why we are flooded with SPAM.
[Issue 130] Deadlock Detection with new Locks 1.6. Also, we have a look at what asynchronous exceptions are, and how you can post them to another thread.
[Issue 129] Fast Exceptions in RIFE this work faster.
[Issue 128] SuDoKu Madness
In this Java Specialists' Newsletter, we look at a simple Java program that solves SuDoKu puzzles.
[Issue 127] Casting like a Tiger
Java 5 adds a new way of casting that does not show compiler warnings or errors. Yet another way to shoot yourself in the foot?
[Issue 126] Proxy equals()
When we make proxies that wrap objects, we have to remember to write an appropriate equals() method. Instead of comparing on object level, we need to either compare on interface level or use a workaround to achieve the comparisons on the object level, described in this newsletter.
[Issue 125] Book Review: Java Concurrency in Practice.
[Issue 124] Copying Arrays Fast
In this newsletter we look at the difference in performance between cloning and copying an array of bytes. Beware of the Microbenchmark! We also show how misleading such a test can be, but explain why the cloning is so much slower for small arrays.
[Issue 123] Strategy Pattern with.
[Issue 122] Copying Files from the Internet.
[Issue 121] How Deep is Your Hierarchy? examine some existing classes.
[Issue 120] Exceptions From Constructors
What do you do when an object cannot be properly constructed? In this newsletter, we look at a few options that are used and discuss what would be best. Based on the experiences of writing the Sun Certified Programmer for Java 5 exam.
[Issue 119] Book Review: "Wicked Cool" Java
The book "Wicked Cool Java" contains a myriad of interesting libraries, both from the JDK and various open source projects. In this review, we look at two of these, the java.util.Scanner and javax.sql.WebRowSet classes.
[Issue 118] A Simple Database Viewer
A simple database viewer written in Java Swing that reads the metadata and shows you all the tables and contents of the tables, written in under 100 lines of Java code, including comments.
[Issue 117] Reflectively Calling Inner Class Methods
Sometimes frameworks use reflection to call methods. Depending how they find the correct method to call, we may end up with IllegalAccessExceptions. The naive approach of clazz.getMethod(name) is not correct when we send instances of non-public classes.
[Issue 116] Closing Database Statements
Don't Repeat Yourself. The mantra of the good Java programmer. But database code often leads to this antipattern. Here is a neat simple solution from the Jakarta Commons DbUtils project.
[Issue 115] Young vs. Old Generation GC
A few weeks ago, I tried to demonstrate the effects of old vs. new generation GC. The results surprised me and reemphasized how important GC is to your overall program performance.
[Issue 114] Compile-time String Constant Quiz
When we change libraries, we need to do a full recompile of our code, in case any constants were inlined by the compiler. Find out which constants are inlined in this latest newsletter.
[Issue 113] Enum Inversion Problem generics.
[Issue 112] Book Review: Head First Design Patterns.
[Issue 111] What is faster - LinkedList of ArrayList?
[Issue 110] Break to Labeled Statement
[Issue 109] Strategy Pattern of HashCode Equality
[Issue 108] Object Adapter based on Dynamic Proxy
[Issue 107] Making Enumerations Iterable
[Issue 106] Multi-line cells in JTable in JDK 1.4+
[Issue 105] Performance Surprises in Tiger
[Issue 104] EDT Lockup Detection
[Issue 103] New for/in loop gymnastics
[Issue 102] Mangling Integers
[Issue 101b] Causing Deadlocks in Swing Code (Follow-up)
[Issue 101] Causing Deadlocks in Swing Code
[Issue 100] Java Programmers aren't Born
[Issue 099] Orientating Components Right to Left
[Issue 098] References
[Issue 097] Mapping Objects to XML Files using Java 5 Annotations
[Issue 096] Java 5 - "final" is not final anymore
[Issue 095b] Follow-up: Self-reloading XML Property Files
[Issue 095] Self-reloading XML Property Files
[Issue 094] Java Field Initialisation
[Issue 093] Automatically Detecting Thread Deadlocks
[Issue 092] OutOfMemoryError Warning System
[Issue 091] Controlling Machines Remotely with Java
[Issue 090] Autoboxing Yourself in JDK 1.5
[Issue 089] Catching Uncaught Exceptions in JDK 1.5
[Issue 088] Resetting ObjectOutputStream
[Issue 087] sun.reflect.Reflection
[Issue 086b] Initialising Fields before Superconstructor call (Follow-up)
[Issue 086] Initialising Fields before Superconstructor call
[Issue 085] Book Review: Pragmatic Programmer pragmatism.
[Issue 084] Ego Tripping with Webservices
[Issue 083] End of Year Puzzle
[Issue 083b] End of Year Puzzle Follow-up
[Issue 082] TristateCheckBox based on the Swing JCheckBox
[Issue 081] Catching Exceptions in GUI Code
[Issue 080] Many Public Classes in One File
[Issue 079] Generic toString()
[Issue 078] MemoryCounter for Java 1.4
[Issue 077] "Wonderfully disgusting hack"
[Issue 076] Asserting Locks
[Issue 075] An Automatic Wait Cursor: WaitCursorEventQueue
[Issue 074] GoF Factory Method in writing GUIs
[Issue 073] LinkedHashMap is Actually Quite Useful
[Issue 072] Java and Dilbert
[Issue 071] Overloading considered Harmful
[Issue 070b] Multi-Dimensional Arrays - Creation Performance
[Issue 070] Too many dimensions are bad for you
[Issue 069b] Results of last survey
[Issue 069] Treating Types Equally - or - Life's Not Fair!
[Issue 068] Appending Strings
[Issue 067] BASIC Java
[Issue 066] Book Review: Java Performance Tuning by Jack Shirazi.
[Issue 065] Wait, Cursor, Wait!
[Issue 064] Disassembling Java Classes
[Issue 063] Revisiting Stack Trace Decoding
[Issue 062b] Follow-up and Happy New Year!
[Issue 062] The link to the outer class
[Issue 061] Double-checked locking
[Issue 060] Nulling variables and garbage collection
[Issue 059b] Follow-up to Loooong Strings
[Issue 059] When arguments get out of hand...
[Issue 058] Counting bytes on Sockets
[Issue 057] A Tribute to my Dad, Hans Rudolf Kabutz
[Issue 056] Shutting down threads cleanly
[Issue 055] Once upon an Oak ...
[Issue 054b] Follow-up to JDK 1.4 HashMap hashCode() mystery
[Issue 054] HashMap requires a better hashCode() - JDK 1.4 Part II
[Issue 053] Charting unknown waters in JDK 1.4 Part I
[Issue 052] J2EE Singleton
[Issue 051] Java Import Statement Cleanup
[Issue 050] Commenting out your code?
[Issue 049] Doclet for finding missing comments
[Issue 048] Review: The Secrets of Consulting
How much do your customers love you? How should you give and receive advice? In this excellent book, we learn why it is so important to understand your customer. I use the principles daily in my work with code reviews, performance tuning and dealing with customers or clients.
[Issue 047] Lack of Streaming leads to Screaming
[Issue 046] "The compiler team is writing useless code again ..."
[Issue 045] Multi-line cells in the JTable
[Issue 044] Review: Object-Oriented Implementation of Numerical Methods
In our first book review, we look at an interesting book that talks about implementing numerical methods in Java. Although not primarily a Java book, it gives us some insight as to the performance of Java versus other languages like C or Smalltalk.
[Issue 043] Arrgh, someone wants to kill me!
[Issue 042] Speed-kings of inverting booleans
[Issue 041] Placing components on each other
[Issue 040] Visiting your Collection's Elements
[Issue 039] Why I don't read your code comments ...
[Issue 038a] Counting Objects Clandestinely
[Issue 038b] Counting Objects Clandestinely - Follow-up
[Issue 037] Checking that your classpath is valid
[Issue 036] Using Unicode Variable Names
[Issue 035] Doclets Find Bad Code
[Issue 034] Generic Types with Dynamic Decorators
[Issue 033] Making Exceptions Unchecked
[Issue 032] Exceptional Constructors - Resurrecting the dead
[Issue 031] Hash, hash, away it goes!
When I first started using Java in 1997, I needed very large hash tables for matching records as quickly as possible. We ran into trouble when some of the keys were mutable and ended up disappearing from the table, and then reappearing again later.
[Issue 030] What do you Prefer?
[Issue 029] Determining Memory Usage in Java
[Issue 028] Multicasting in Java
[Issue 027] Circular Array List
[Issue 026] Package Versioning
[Issue 025] Final Newsletter
[Issue 024] Self-tuning FIFO Queues
[Issue 023] Socket Wheel to handle many clients
[Issue 022] Classloaders Revisited: "Hotdeploy"
[Issue 021] Non-virtual Methods in Java
[Issue 020] Serializing Objects Into Database
[Issue 019] Finding Lost Frames
[Issue 018] Class names don't identify a class
[Issue 017b] Follow-up
[Issue 017a] Switching on Object Handles
[Issue 016] Blocking Queue
[Issue 015] Implementing a SoftReference based HashMap
[Issue 014] Insane Strings
[Issue 013b] Follow-up
[Issue 013a] Serializing GUI Components Across Network
[Issue 012] Setting focus to second component of modal dialog
[Issue 011] Hooking into the shutdown call
[Issue 010] Writing GUI Layout Managers
[Issue 009] Depth-first Polymorphism
[Issue 008] boolean comparisons
[Issue 007] java.awt.EventQueue
In this newsletter, we learn how we can create our own EventQueue and then use that to intercept all the events that arrive in AWT/Swing.
[Issue 006] Implementation code inside interfaces
Did you know that it is possible to define inner classes inside interfaces? This little trick allows us to create method objects inside an interface.
[Issue 005] Dynamic Proxies - Short Tutorial
Dynamic proxies can help us reduce the amount of code we have to write. In this newsletter, our guest author, Dr Christoph Jung, gives us a short tutorial on how the dynamic proxies work.
[Issue 004] Logging part 2
We look at some tricks on how we can track where things are happening in our code, which can then be used for producing more detailed logging.
[Issue 003] Logging part 1.
[Issue 002] Anonymous Inner Classes
Anonymous inner classes allow us to call super class methods inside its initializer block. We can use this to add values to a collection at point of creation, for example: new Vector(3) {{ add("Heinz"); add("John"); add("Anton"); }});
[Issue 001] Deadlocks
Java deadlocks can be tricky to discover. In this very first Java Specialists' Newsletter, we look at how to diagnose such an error and some techniques for solving it.. | http://www.javaspecialists.co.za/archive/archive.html | CC-MAIN-2016-50 | refinedweb | 4,791 | 52.9 |
About the Mobile Backend Generator
If you haven’t heard already, there’s a brand-new way to create and deploy an OData service, using the Mobile Backend Generator. The Mobile Backend Generator is a set of tools included with SAP Web IDE that interacts with SAP Cloud Platform Mobile Services. These tools are now available to customers with the latest release of the Mobile Development Kit (MDK) on SAP Web IDE for trial or preview landscapes. You’ll need to create a trial account to use this new feature, if you don’t have one already.
This blog walks you through generating and deploying an OData service to the Neo or Cloud Foundry landscape. Even if you plan to deploy to Neo, you require a Cloud Foundry account as well. Cloud Foundry provides the builder service for the Mobile Backend Generator.
About the MDK
MDK is a feature of SAP Cloud Platform Mobile Services. If you’re new to MDK and want to learn more about developing apps without having to do much coding, you might want to begin by going through this Learning Journey and these helpful videos.
Setting Up Your Environment
Ensure that the SAP Web IDE Full-Stack is enabled.
Select the box and under Take Action, select “Go to Service”.
Select the gear in the left menu (1) and select “Features” (2). Make sure that “Mobile Development Kit Editor V master” is On (3).
Select “Cloud Foundry” (1), verify that you have a builder installed (2) and that it is up to date. When the installation or update is complete, select “Save” (3). I missed this and wondered why I couldn’t continue!
Generating an OData Service
Select the “</>” symbol (1) to enter your workspace. Right-click your workspace folder, select “New” (2) then “Project from Template.”(3)
Change the Category to “All categories” (1) and select “Mobile OData Service Project.” (2) Select “Next.” (3)
Provide a project name (1), then select “Next.” (2).
Provide a service app name (1) and the version (2). Here, for simplicity, I’m using an in-memory database (3), a deployment target of Neo (4) and “None” (5) as the authentication mode. Select “Finish” (6) when done. See Generating an OData Service for choices other than in-memory database.
Add a model to your project by importing a CSDL (Common Schema Definition Language) file that you’ve previously created. Right-click your project folder (1), select “Import” (2), then “File or Project.” (3)
When you see the import screen, browse to the location of your csdl.xml file (1), then select “OK”. (2)
Next, you’ll see a visual representation of your CSDL file. You can use our CSDL graphical modeling tool to create or modify an existing csdl.xml file. This tool is currently available in preview mode.
You can create your own csdl.xml file if you don’t already have one. Right-click your project folder, (1) and select “New” (2), then select “OData CSDL Document.” (3)
You can use either OData Version 2.0 or Version 4.0 (1). Please see Overview for OData versions to help determine what version is correct for you. Provide a schema namespace (2) and a namespace alias (3), then select “Create” (4). The file name is generated automatically from the schema namespace.
Double-click the csdl.xml file to use the graphic editor to graphically create your model. Remember to save your changes!
Right-click your csdl.xml file (1), select “Generate Mobile OData Service” (2) to generate the OData service.
The srv (service) folder now contains new files and folders. Right-click your srv folder (1),
select “Build”(2)(3) .
The target folder now contains an odata-service-version#.war file.
Deploying the Generated Service to Neo
Right-click your generated war file in the target folder (1) , select “Export.” (2) This saves the file locally.
Since you cannot deploy to Neo from Web IDE, move to the SAP Cloud Platform cockpit of your Neo landscape. Select “Tools” (1) from the top menu and select “SAP Cloud Platform Cockpit”. (2)
Once you’re in the cockpit, go to “Applications” (1) then “Java Applications,” (2) then select “Deploy Application”. (3)
Browse to the war file you exported (1) and enter an application name (2) (this is the name that will display in Neo). Select the runtime “Java EE 7 Web Profile TomEE 7.” (3) When done, select “Deploy.” (4)
When the deployment completes, select “Done”. Don’t start the service yet.
Select the service name.
Since this blog used the in-memory database and None as authentication method options, select “Start”. Otherwise, you’ll need to bind your database to your application.
Once the service starts, you see its URL in the Application URLs.
Select this link to access your OData service and make sure it works from a browser.
Deploying the Generated Service to Cloud Foundry
Even though I chose Neo as the deployment landscape, I’ve added the instructions to Cloud Foundry because they are simple!
You can deploy your application to Cloud Foundry from Web IDE. Right-click on your project folder (1) and select “Build” (2), then “Build.” (3)
Right-click on the srv folder (1), select “Run” (2), then “Run as Java Application” (3).
Once the app is deployed, you’ll see a URL just above the console. You can test and debug your service within Web IDE.
Select the link to access your OData service and make sure it works from a browser.
Now you are ready to use your newly generated OData service by building a mobile app with SAP Cloud Platform SDK for iOS, SAP Cloud Platform SDK for Android, MDK or SAP Mobile Cards!
I hope you enjoyed my first blog!
Hi,
I didn’t understand the below step. Where do i create my model or csdl.xml file? Is this blog have any parent blog in which i can refer this step? Please help me on this.
“Add a model to your project by importing a CSDL (Common Schema Definition Language) file that you’ve previously created”
Thank you,
Regards,
JK.
Good day JK,
This blog is not part of another blog. This step assumes that you have a pre-existing csdl.xml file.
The following step shows you how to create an csdl.xml file but doesn’t get into the detail. Sounds like something that we should blog about. This is something we have discussed. I will document the process.
Lana
Hi Lana, congrats with your first blog! I find the article very inspiring.
Does the method described allow to create oData services with creation/deletion functions and not only read?
Regards, Anton
Hi Anton,
The service we generate supports the full Create, Update, Delete operations without requiring any extra coding. If you want to implement actions (OData V4) or Function Imports (OData V2), we would generate the stubs, but then you would have to code the business logic.
Our generated code has extension points to add any custom logic outside of the generic CRUD service we generate.
thanks
Martin
Thanks for the answer, Martin! It definitely makes me even more interested in this tool. Meanwhile I have one more question if you don’t mind.
What’s the reason why you call it “Mobile” backend generator? Are there any limitations which don’t allow me to use this tool for my Fiori applications rather than mobile apps?
Regards, Anton
Is anyone here?
Thanks for the nice comments Anton. Makes me want to write more blogs! 🙂 Martin has provided the answer to your question. | https://blogs.sap.com/2018/10/07/guide-to-easily-create-and-deploy-an-odata-service-using-our-new-mobile-backend-generator/ | CC-MAIN-2018-43 | refinedweb | 1,267 | 67.45 |
You have several choices for how you handle events that originate from elements in your Microsoft Silverlight-based application:
You can write JavaScript handlers, which are interpreted by the Silverlight plug-in and passed to the hosting browser's script engine.
You can write handlers in a dynamic language that targets managed code, such as IronPython or Managed JScript. These handlers are not compiled until run time. Support for dynamic languages in managed code is provided by the dynamic language runtime (DLR). For more information, see Programming Silverlight with Dynamic Languages.
You can write handlers in a managed code language such as C# or Visual Basic. These handlers are written in a code-behind file that is compiled along with the XAML page that references the handlers.
This topic describes the third option. It explains how to write event handlers in C# or Visual Basic, reference the handlers from XAML, and then compile the XAML and its code-behind as part of an assembly.
Prerequisites (available from the Silverlight download site):
Silverlight version 2 Beta 1.
Microsoft Visual Studio 2008.
Silverlight Tools Beta 1 for Visual Studio 2008.
This topic also assumes that you have created a basic Silverlight project. (See Creating an Application for Silverlight for instructions.)
A project that is based on the Visual Studio Silverlight templates will choose a default class and namespace. x:Class will already be declared with these values.
Create a new application for Silverlight as described in Creating an Application for Silverlight. However, choose the Generate a test HTML page ... option for the hosting HTML page (do not create a Web site). You can choose either a C# or Visual Basic project (further procedures will account for either language choice).
Open Solution Explorer to view the project that you created from the default Silverlight template.
Open Page.xaml for editing.
If the root UserControl defines a Height and Width, delete those attributes.
Replace the Grid element with a StackPanel element, and give it a visible background. Paste the following over the existing Grid and it closing tag:
<StackPanel x:
</StackPanel>
Add a UI element. For this example, add a new Canvas element as a child element of the StackPanel. Paste the following XAML as child elements of the initially empty StackPanel opening and closing tags. Note that you are not declaring the event or its handler yet.
<Canvas x:
<TextBlock>Canvas1</TextBlock>
</Canvas>
Place the text cursor just after the last attribute closing quote of Canvas, and type one space. Now type MouseLeftButtonDown, or better yet, type as much as you need to and use the IntelliSense dropdown choices to complete the typing for you. Notice that the attribute is now defined empty: MouseLeftButtonDown="" and also note that there is an additional UI dropdown visible from IntelliSense.
Press TAB, as instructed by the tooltip over the IntelliSense dropdown. This will both reference and define your handler. (The name will be Canvas1_MouseLeftButtonDown; you can change this name later.) Your XAML should resemble the following:
<Canvas x:
<TextBlock>Canvas1</TextBlock>
</Canvas>
Do not compile yet. Your initial handler code is empty. You will write this code in the next procedure.
All event handlers for the XAML page must be defined in the class and assembly that is declared by x:Class in the XAML file. The code-behind and its XAML are joined by a partial class technique, and are eventually compiled together. By default, the templates generate XAML and its code-behind with matching names, and with the code-behind as a dependent file of its XAML in the project structure. For instance, you have been working with Page.xaml. Its code-behind file will be named either Page.xaml.cs or Page.xaml.vb depending on your language choice, and can be found "under" the XAML you just edited (click the + to the left of Page.xaml in Solution Explorer if you cannot see the code-behind file initially.)
Open Page.xaml.cs or Page1.xaml.vb for editing.
For C#, the file will already have a namespace and class defined based on the project templates. For Visual Basic, the class is defined, and the namespace is set by the default namespace of the project.
Find the existing and empty Canvas1_MouseLeftButtonDown definition as a member of the Page class. This empty handler was placed here when you pressed TAB through IntelliSense in the previous procedure. Write a handler that uses the sender parameter to obtain a reference to the element where the handler was attached, and then sets a property value on it (using a new SolidColorBrush to replace the original value that was defined in XAML). Your code should resemble the following:
private void Canvas1_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
Canvas c = sender as Canvas;
SolidColorBrush newColor = new SolidColorBrush(Color.FromArgb(255, 200, 77, 0));
c.Background = newColor;
}
Private Sub Canvas1_MouseLeftButtonDown(ByVal sender As System.Object, ByVal e As System.Windows.Input.MouseButtonEventArgs)
Dim c As Canvas = sender
Dim newColor As SolidColorBrush = New SolidColorBrush(Color.FromArgb(255, 200, 77, 0))
c.Background = newColor
End Sub
Compile the application and run it (the quick way: press F5 to compile and run under debug). When you load the HTML page and its hosted Silverlight content and click the area under Canvas1, you should see it change color.
Managed events in Silverlight generally use either EventHandler or RoutedEventHandler, unless there is particular event data that motivates a different handler and a class derived from EventArgs/RoutedEventArgs. The MouseLeftButtonDown event does have mouse-specific event data available, so it uses the MouseEventHandler delegate.
Generally, it is convenient to attach the event handlers as part of the XAML markup. But you can also use the appropriate common language runtime (CLR) syntax to add event handlers as part of your CLR code. For example, for the C# language, you attach handlers to instance events with +=. The following procedure uses the class constructor to attach a Loaded event handler. The Loaded event handler in turn attaches the existing Canvas1_MouseLeftButtonDown handler to the MouseLeftButtonDown event on another Canvas.
Edit Page.xaml. After the first Canvas "button", insert another similar Canvas that does not have a XAML event handler in it.
<Canvas x:
<TextBlock>Canvas2</TextBlock>
</Canvas>
Edit Page.xaml.cs. In the existing Page class constructor, attach a handler for the Loaded event, after the InitializeComponent call. (You will define the new handler in the next step. Also, in C# at least, this is another case where IntelliSense will offer options as soon as you type the +=, and can also generate the skeleton of Page_Loaded for you to start the next step.)
public Page()
{
InitializeComponent();
this.Loaded += new RoutedEventHandler(Page_Loaded);
}
Define the Page_Loaded handler, which references your new Canvas2 and its MouseLeftButtonDown event, and attaches the Canvas1_MouseLeftButtonDown handler there also.
Sub Page_Loaded(ByVal sender As Object, ByVal e As RoutedEventArgs)
AddHandler Canvas2.MouseLeftButtonDown, AddressOf Canvas1_MouseLeftButtonDown
End Sub
Save the files and compile and run the application. When you click the Canvas2 area, you should see that it also changes color.
Visual Basic has a different syntax for attaching handlers that relies on the keywords WithEvents and Handles. In the case of named elements that are created in XAML, you will not directly see the WithEvents keywords applied. In a Silverlight Visual Studio project, WithEvents keywords are added and field references (for accessing your named XAML elements in code) are created through generated code by default. Handles syntax will work only if you give the source element a unique x:Name; that name becomes your instance reference when you declare Handles.
Edit Page.xaml.vb. Add another handler that does essentially the same thing as the first one (copy Canvas1_MouseLeftButtonDown and rename it Canvas2_MouseLeftButtonDown). This time, use Handles to assign the handler to the specific MouseLeftButtonDown event on the Canvas2 instance.
Private Sub Canvas2_MouseLeftButtonDown(ByVal sender As System.Object, ByVal e As System.Windows.Input.MouseButtonEventArgs) Handles Canvas2.MouseLeftButtonDown
Dim c As Canvas = sender
Dim newColor As SolidColorBrush = New SolidColorBrush(Color.FromArgb(255, 200, 77, 0))
c.Background = newColor
End Sub
If you give the first Canvas the x:Name Canvas1, and the handler implementations are identical in signature as well as in function, you can designate more than one instance to be handled by the same handler, for example: … Handles Canvas1.MouseLeftButtonDown, Canvas2.MouseLeftButtonDown. If you do this, you will also want to remove the MouseLeftButtonDown event attribute from Canvas1, because now you are declaring the handling in code.
Note that Handles syntax and using the XAML attributes for event handlers are ultimately mutually exclusive per event. You should generally choose one or the other approach throughout your application for consistency. If you use Handles, be aware that Silverlight supports the concept of routed events, which will be shown next. The ramification of routed events on Handles is that an event handler might handle routed events where the source was actually a different object in the object tree.
So far, the aspects of event handling that have been shown are not that different from event handling in CLR managed code in general, with the exception of the XAML attribute technique for referencing the handler. Next, you will modify your handlers to illustrate two important concepts that are specific to Silverlight input events.
Edit Page.xaml. Add a handler for the MouseLeftButtonDown event to the StackPanel parent of your two named Canvas elements. Use (or reproduce) the IntelliSense naming for the handler, LayoutRoot_MouseLeftButtonDown.
After the two named Canvas elements, but still within the StackPanel, add the following element:
<TextBlock Name="statusText"/>
Edit Page.xaml.cs or Page.xaml.vb. Write code for the IntelliSense-generated LayoutRoot_MouseLeftButtonDown handler. Paste the following:()
Compile and run the application.
Try clicking on your Canvas elements. The handler that changes their color executes as before. But now the event is also handled by the parent StackPanel, which updates the TextBlock with information that comes from the mouse event data. This illustrates the concept of a routed event; the event is raised (sourced) by the Canvas element, but then routes upwards in the object tree to also invoke handlers on the StackPanel parent. If you had put a handler on the root UserControl, the event would have routed there too.
Not all Silverlight events have this routing behavior. In fact only a few input events route in this way. These are each events defined by the UIElement base class, and are events that report input that comes from the mouse or keyboard. Check the SDK reference documentation for the events defined by UIElement and check the Remarks of each to see which events specifically have the routing behavior.
Notice that your LayoutRoot_MouseLeftButtonDown is reporting the value of Source from the event data. This is the element that actually raises the event, and would have the first opportunity to handle the event in its route. Occasionally you might click somewhere that does not report the name of its Source. This is probably because you happened to click the text of the TextBlock that has the text "Canvas1" or "Canvas2". That is the source in this case (it just isn't displaying a name in the status, because you did not give these TextBlock elements a name). So you actually had routing behavior occurring even before you introduced a handler on the StackPanel.
This also illustrates the value and purpose of routed input events. When you are compositing a UI, either as part of writing user code that puts together existing controls, or when you are defining a custom control's compositing, it is not always certain whether you want event handling to happen on the composite parts or on some common parent. Event routing provides the opportunity to handle either case, or both cases.
Your LayoutRoot_MouseLeftButtonDown also exercises the most useful mouse-event specific event data: the X and Y coordinates of where the mouse event occurs. This is reported by GetPosition. GetPosition is a method rather than a property (or pair of properties) so that you can easily evaluate the coordinates in your choice of relative frames of reference. To get coordinates within the overall Silverlight content area, you simply pass null to GetPosition. Otherwise, you can pass any element that is connected to the Silverlight object tree (it does not need to be an object that is directly involved in the event route). The most common object to pass is the event Source, so that you can preserve the relative frame of reference even if the event is handled on a parent element, but many other techniques are also possible. For example, you might pass reference frame objects that correct for relative transforms.
Event routing is useful, but you do not always want every possible object on a route to invoke the handler it has for a particular input event. Fortunately, you can write event handlers that can report the fact that another handler has already been invoked at an earlier point in the route. To do this, you change the value of Handled in the event data, and then add checks for Handled to any other event handler that might be invoked along a route.
Edit Page.xaml.cs or Page.xaml.vb. Modify LayoutRoot_MouseLeftButtonDown handler, by adding a check for the value of Handled. if Handled is false previously, then the handler also sets Handled to true, so that any other handler along the route can use similar logic to determine if the event was already handled.
if (!e.Handled)
{
e.Handled = true;();
}
If (Not e.Handled) Then
e.Handled = True()
End If
Modify Canvas1_MouseLeftButtonDown (and Canvas2_MouseLeftButtonDown if you have it for the Visual Basic example) to set Handled to true. Also, have your handler blank out the status text in this case.
void Canvas1_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
e.Handled = true;
statusText.Text = "";
Canvas c = sender as Canvas;
SolidColorBrush newColor = new SolidColorBrush(Color.FromArgb(255, 200, 77, 0));
c.Background = newColor;
}
Try clicking on your Canvas elements. The handler that changes their color executes as before. The event is NOT handled by the parent StackPanel, because your color-changing event handlers set Handled to true in the event data that is then passed further along the route, and your StackPanel handler checks for Handled.
If you are familiar with Windows Presentation Foundation (WPF) routed events, there is an important difference here in the routed event behavior. In WPF, setting an event's data to Handled=true would prevent most handlers from even invoking; only specially registered "handledEventsToo" handlers would still be invoked. In Silverlight, changing the Handled value does not influence the event routing behavior or handler invocation at all. It is up to your handler implementation to check for Handled. Basically you are using the routed event data's Handled value as a sentinel for application-specific event behavior, and you can choose your own "protocol" for what Handled values mean to your application logic, and for when you consider any given routed event as handled.
Keyboard events are also routed events. Rather than reporting the X/Y coordinate of the mouse pointer position at the time of the event, a keyboard event's data reports the specifics of the key that was pressed or released to initiate the event.
Keyboard events also introduce the concept of modifier keys. The most common modifier keys are the SHIFT or CTRL (control) keys. Usually you are not interested in the modifier keys in isolation, but are interested in whether a modifier key is pressed at the same time as another key's event is received.
So far, the elements you have added are not inherently focusable. You will need to add a true control to the UI so that it can be a stop in the tab sequence and thus gain focus. A control must be focused in order to raise a keyboard event, but the key events themselves are defined deeper than Control in the hierarchy so that events might be raised by a control but still can be routed to and handled by a non-control that cannot gain focus (such as a panel).
Edit Page.xaml.cs or Page.xaml.vb. Add a handler for the KeyDown event to the StackPanel named LayoutRoot. Use (or reproduce) the IntelliSense naming for the handler, LayoutRoot_KeyDown. Paste in the following code:
private void LayoutRoot_KeyDown(object sender, KeyEventArgs e)
{
//check key value, we are looking for "G"
if (e.Key == Key.G)
{
//check modifiers for Ctrl
if ((Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control)
{
statusText.Text = "Beep!";
}
}
}
Sub LayoutRoot_KeyDown(ByVal sender As Object, ByVal e As KeyEventArgs)
'check key value, we are looking for "G"
If (e.Key = Key.G) Then
'check modifiers for Ctrl
If (Keyboard.Modifiers And ModifierKeys.Control) Then
statusText.Text = "Beep!"
End If
End If
End Sub
Edit Page.xaml. After the two Canvas elements, but before the TextBlock, paste the following:
<Button Content="Hello" />
Tab into the application. Once tabbed focus is within Silverlight content, the only focusable element is the button. If you press the key combination Ctrl+G while the "Hello" button has focus, the event routes to the StackPanel parent, and the handler there changes the status text.
Key only reports portable key codes. Nonportable key codes are possible; these occur when a key is pressed that only exists on a particular platform. Nonportable key codes are not discussed in this Quickstart; see Keyboard Support.
The following runs and shows sources for the final code from this QuickStart. | http://silverlight.net/Quickstarts/BuildUi/ControlEventHandlers.aspx | crawl-001 | refinedweb | 2,909 | 55.24 |
I'm using Visual C# Express 2010, on Windows XP SP3.
I took pain to define the var collBlock in the class directly, however, when I try to use it, it works once and then NOT.
Google searched all and tried a lot of solution. Error stil comes up.
The code below is partial, don't count the '{}', they are correct (VS2010 autocorrect won't allow even the most indifferent mistyping)
public class DMSecDoc { //...some declares... // private Hashtable collBlock = new Hashtable(); // this works... private int ReadAllBlocks(BinaryReader aReader, Int32 nNumOfBlock) { collBlock.Clear(); try { for (int i = 0; i < nNumOfBlock; i++) { aBlock = new DataBlock(); aBlock.Init(aReader); collBlock.Add(aBlock.GetName(), aBlock); } return 0; } catch { return 99; } } // this one does not. What gives ? ============= private string[] GetThatDataBlock(String name) { String[] arrData = null; //The name 'collBlock' does not exist in the current context DataBlock aBlock = (DataBlock)collBlock(name); arrData = aBlock.GetData(); return arrData; } | https://www.daniweb.com/programming/software-development/threads/380540/the-name-collblock-does-not-exist-in-the-current-context | CC-MAIN-2018-47 | refinedweb | 150 | 58.89 |
When; however, I’m going to focus on testing the SEO visibility of React applications, as I am currently working on a public-facing React web app.
Single-Page App SEO
The move toward single-page applications (e.g., React, Angular, Ember, etc.) has changed how content is delivered to users. Because of this, search engines have had to adjust how they crawl and index web content.
So what does this mean for single-page application SEO? There have been several great posts that attempt to investigate this. The general takeaway is that Google and other search engines can crawl and index these applications with pretty good competency. However, there can be caveats–so it’s really important to be able to test your site. This is where Fetch as Google comes in.
In Google’s own.
A Simple React App
To experiment with Fetch as Google, we’ll first need a website (React app for us) and a way to deploy it to a publicly accessible URL.
For this post, I’m going to use a simple “Hello, World!” React app, which I’ll deploy to Heroku for testing. Despite the app being simple, the concepts generalize well for more complicated React apps (in my experience).
Suppose our simple React app looks like this:
class App extends React.Component { render() { return ( <div> <h1>Hello, World!</h1> </div> ) } }
Using Fetch as Google
You can find the Fetch as Google tool under the Google Search Console. (You’ll need a Gmail account to have access.)
When you arrive at the Search Console, it will look something like this:
The Search Console first asks for a website. My Heroku app is hosted at. Enter your website URL and then press
Add a Property.
The Search Console will then ask you to verify that you own the URL that you would like to test.
The verification method will vary depending on how your website is hosted. For my site, I needed to copy the verification HTML file provided by Google to the root directory of my website, then access it in the browser.
After verifying your URL, you should see a menu like this:
Under the Crawl option, you should see Fetch as Google:
Fetch as Google allows you to test specific links by specifying them in the text box. For example, if we had a
/users page and wanted to test that, we could enter
/users in the text box. Leaving it blank tests the index page of the website.
You can test using two different modes: Fetch, and Fetch and Render. As described by Google, Fetch:
Fetches a specified URL in your site and displays the HTTP response. Does not request or run any associated resources (such as images or scripts) on the page.
Conversely, Fetch and Render:
Fetches a specified URL in your site, displays the HTTP response and also renders the page according to a specified platform (desktop or smartphone). This operation requests and runs all resources on the page (such as images and scripts). Use this to detect visual differences between how Googlebot sees your page and how a user sees your page.
Running a Fetch on our test React site yields:
This reflects the index.html page housing our React app. Note that this reflects the HTML when the page loads, before our React app is rendered inside of the app div.
Running Fetch and Render yields:
This provides a comparison of the site that the Googlebot is able to see with what a user of the site would see in their browser. For our example, they are exactly the same, which is good news for us!
There are several stories on the internet of folks running Fetch as Google on their React apps and observing a blank or different output for “This is how Googlebot saw the page.” That would be an indication that your React app is designed in a way that is preventing Google, and potentially other search engines, from being able to read/crawl it appropriately.
This could happen for a variety of reasons, one of which could be content that loads too slowly. If your content loads slowly, there is a chance that the crawler will not wait long enough to see it. This wasn’t a problem in our above example. I’ve also run Fetch as Google on a reasonably large React website that makes several async calls to fetch initial data, and it was able to see everything just fine.
So what’s the limit? I decided to run some naive experiments.
Experiments
Note: I’m not sure how Fetch as Google works under the hood. There are some posts that hint that it might be rendering your website using PhantomJS.
React apps usually rely on asynchronous calls to fetch their initial data. To reflect this, let’s update our sample React app to fetch some GitHub repositories and display a list of their names.
class App extends React.Component { constructor() { super(); this.state = { repoNames: [] }; } componentDidMount() { let self = this; fetch("", {method: 'get'}) .then((response) => { return response.json(); }) .then((repos) => { self.setState({ repoNames: repos.map((r) => { return r.name; })}); }); } render() { return ( <ol> {this.state.repoNames.map((r, i) => { return <li key={i}>{r}</li> })} </ol> ) } }
Running the above through Fetch as Google produces the following output:
Oh no! It wasn’t able to see any of the data from the async call to GitHub. I’ll be honest; this confused me for a bit. At first I thought it might be some strange cross-origin restriction. However, it turns out the Googlebot can process cross-origin requests just fine. After some digging, lots of trial and error, and a but of luck, I discovered that I needed to include an ES6 Promise polyfill. Apparently the browser that the Googlebot runs in doesn’t include an ES6 Promise implementation. After bringing in es6-promise, the Fetch as Google output looked like this.
Let’s pick at Fetch as Google a bit more. Suppose that your application has a slow call, or some async processing that it does–using things like
setTimeout or
setInterval. How long will Fetch as Google wait around for these types of async requests, and when will it capture its snapshot of your website?
Let’s modify our “Hello, World!” app from above to wait five seconds before displaying the “Hello, World!” text:
class App extends React.Component { constructor() { super(); this.state = { message: "" }; } componentDidMount() { setTimeout(() => { this.setState({ message: "Hello World!, after 5 seconds" }) }, 5000); } render() { return ( <div> <h1>{ this.state.message }</h1> </div> ) } }
Running Fetch as Google with the above code yields this:
Interestingly, it was still able to see the output of the component. Additionally, the Fetch as Google operation took significantly less than five wall clock seconds to run, which makes me think that the browser environment it’s running in must be fast-forwarding through delays or something. Interestingly, if we increase five seconds to 15, we observe this output:
I have no idea how Google treats
setTimeout. However, what the above test seems to indicate is that things that take too long to load will be ignored (not too surprising).
Now, let’s modify our component to call
setInterval every second, and update a counter that we print to the screen:
class App extends React.Component { constructor() { super(); this.state = { message: "", count: 0 }; this.update = this.update.bind(this); } update() { let count = this.state.count; this.setState({ count: this.state.count + 1 }); } componentDidMount() { setInterval(this.update, 1000); } render() { return ( <div> <h1>{ `Count is ${this.state.count}` }</h1> </div> ) } }
This produces the following output:
So, it captured the page render after waiting for five seconds. This aligns with the
setTimeout behavior above. I’m not sure exactly how much we can deduce from these experiments; however, they do seem to confirm that the Googlebot will wait around for some small amount of time before saving the rendering of your website.
Summary
In my experience, Google is able to crawl React sites pretty effectively, even if they do have substantial data loads up front. However, it’s probably a good idea to optimize your React app to load the most important data (that you would want to get crawled) as quickly as possible when your app loads. This can mean ordering API calls a certain way, preferring to load partial data first, or even rendering the initial page on the server to allow it to load immediately in the client’s browser.
Beyond load times, there are several other things that can cause SEO problems for your React app. We’ve seen that missing polyfills may be problematic. I have also seen the use of arrow functions to be problematic, so it may be worth targeting an older ECMAScript version.
By commenting below, you agree to the terms and conditions outlined in our (linked) Privacy Policy4 Comments
Great post! Thank you for sharing it.
My current system is also running on React. And also facing the SEO problem. After a long time to research and optimize, finally at Fetch as Google, Googlebot could see our site. But I’m also confusing that at Fetching tab, I didn’t the HTML DOM generated. I mean tag just included only. I wonder whether Googlebot had already indexed the content generated by Javascript or not?
Awesome, I really liked your post.
I’m thinking about doing my next side project using React and I was wondering about SEO.
Now I’m pretty sure that I can handle it, thank you very much.
This is what I wanted to know thanks. Seems like googlebot is getting smarter.
Thank U MATT NEDRICH….
Your post is really very helpful to me. But I have some queries. Please suggest.
After reading your post, I did the ‘Fetch as Google’ for two pages of my react SPA website and observed that the page content shown to user and the google bot are same. and I did indexed for those pages.
Now if I searched for my website, search results showing those two pages only. But the other pages are not showing in the google search results. why?
As you said, “fetch as Google” is a testing tool to test whether web page is really crawling by google search engine or not. Right?
I observed that, the pages that I tested in “fetch as Google” are indexed in google search engine, but the other pages are not getting indexed. Do I need to do “fetch as Google” tool option to all my urls in my website?
Please suggest… | https://spin.atomicobject.com/2017/12/04/react-fetch-as-google/ | CC-MAIN-2019-09 | refinedweb | 1,763 | 73.98 |
.NET Micro Framework Website
TonyBy Tony Pitman
Introduction
During the summer of 2007 Microsoft released service pack 1 for the .NET Micro Framework. Soon after the release of the service pack Microsoft posted a few new samples. One of these samples is a thermostat based on an SPI temperature sensor from Analog Devices.
Sample Application
The sample uses the .NET Micro Framework emulator component architecture to simulate the actual hardware. This is one of the many really cool features of the .NET Micro Framework development environment and architecture. Being able to not only simulate hardware exactly the way it will work on the real silicon, but do so using C# and manage classes is something the embedded industry has not seen before.
Figure 1
I spent some time looking at the sample emulator. I started to wonder how hard it would be to actually make the hardware for the temperature sensor, and hook it to one of the development boards that is available for the .NET Micro Framework.
Developer Boards
The .NET Micro Framework home page lists some companies that provide hardware to run the .NET Micro Framework. Look for the section on Developer Kits.
Since my company has already done some work with the Embedded Fusion development kit, I decided to try connecting the Analog Devices temperature sensor to the Embedded Fusion Tahoe board. You can also use the Freescale iMXS SideShow Development board for this project, but there are some small changes. I note these changes at the bottom of this article.
Figure 2
The Tools
The tools that you will need for this project are pretty standard. You will need a soldering iron, some solder, wire cutters and wire strippers. If you are really good with the wire cutters you can probably do without the wire strippers. I also like to have a pair of tweezers to place the small parts onto the breading board.
Figure 3
The Parts
There are not many parts that you will need to connect the temperature sensor to the Tahoe board. You will need the sensor chip, a breading board, a header connector and some wire.
Figure 4
Figure 5
I like to use the standard red and black for power and ground and blue for signal lines.
Digi-Key carries the parts that you will need. Here is a list of the part numbers with the last known web page for ordering them from Digi-Key:
SPI Temperature Sensor: AD7314ARMZ-NDBreading Board: 33108CA-NDHeader Connector: S5682-ND
Total cost to me with USPS shipping came to about $13. The items came in the mail in about 4 days.
Now that we have all the parts and tools that we need, let’s get started...
Soldering the Chip
The first thing I did was solder the temperature sensor onto the breading board. The pins are kind of small, but with a decent soldering iron shouldn’t be too difficult. I am mainly a software guy, so my solder jobs are not the best. If I can do it I am sure most people can.
If you prepare the pad on the breading board with a little extra solder it helps. Just don’t put too much or you will end up having bridges across the pins. A soldering jig would probably make things much easier, but since I don’t have one I like to use tweezers to hold the part in place while I solder the first pin.
Figure 6
If you have added just a little extra solder to the breading board before placing the part, you should be able to hold the part in place with the tweezers and heat one of the corner pads just enough to hold the part in place. Once you have done that it is pretty easy to heat each of the other pins carefully to complete the job.
This puts pin 1 to the far left of the breading board when the pins of the breading board are pointing down.
Since the AD7314 uses the SPI bus to communicate, we will be connecting the pins to the SPI port of the Tahoe board. Using SPI requires a chip enable pin. This is usually a standard GPIO pin on the host. For our project we will use GPIO pin 5. This pin is labeled PA7 on the Tahoe board.
Besides connecting power to a 3.3 volt source, and ground to a ground pin, we just have to connect the SPI pins for SCLK (serial clock), SDI (serial data in) and SDO (serial data out). Serial data in and out on the AD7314 are from the perspective of the AD7314 chip.
I chose to solder the wires to the breading board first. As you can see in Figure 9; I soldered the blue wires to the signal pins, and the red and black wires to power and ground respectively.
Figure 9
Starting from pin 1 (the left most pin) you can see that the wires match the pin out in Figure 8.
The Header Connector
At this point I have to note that the types of headers I got were not the best. At the time I placed my order the only type that I could find from Digi-Key were surface mount headers. This means the pins that I solder to are horizontal instead of vertical.
This made solder to them a little more difficult, but not impossible. I would recommend getting some headers with the pins coming vertically out the bottom of the headers. Either type will work fine. You may just find that the vertical pins are easier to solder to.
You could also choose to purchase a single row type header connector. I chose to get the 2 row connectors so that I could use them to also connect to the Freescale developer board which requires 2 rows to hit all the pins that are used.
Now we move on to soldering the wires to the header connector. The header connector on the Tahoe board that contains the SPI bus is J6. Embedded Fusion has done a great job of labeling the connectors and what functions each of the pins performs.
If you look at J6 on the Tahoe board you can see that the SPI bus pins are all grouped together in a section conveniently labeled SPI as shown in Figure 10. This contains what Embedded Fusion has labeled MISO (master in, slave out), MOSI (master out, slave in) and SCLK (serial clock). These labels are a little different than what the AD7314 called them, but it is easy to match them up.
Figure 10
Since the Tahoe board is the master we can match them by connecting the master in, slave out (MISO) pin on the Tahoe board to the slave serial out (SDO) pin. MOSI on the Tahoe goes to SDI on the AD7314. SCLK is easy as it goes from SCLK on the Tahoe to SCLK on the AD7314.
Embedded Fusion made connecting devices to their Tahoe board extra easy by placing 5v, 3.3v and ground (what they call 0.0v) pins all over the place on each header. You can see these pins just to the right of the SPI section in a section labeled PWR.
We connect our red wire (pin 8 on the breading board) to the +3.3v pin on the Tahoe and our black wire (pin 4 on the breading board) to the 0.0v (GND) pin on the Tahoe.
Now the only pin left to connect is the GPIO for chip enable. This is pin 2 on the breading board and as mentioned above I have connected it to GPIO pin 5. This is a little more to the right on the same row of pins on the Tahoe board and is labeled PA7.
If you are wondering what the PA7 stands for it is actually Port A, pin 7 on the Freescale iMXS processor chip that the Tahoe board uses as its main processor. Internally Embedded Fusion has mapped GPIO pin 5 to Port A, pin 7 on the processor.
That is all there is to hooking up the AD7314 SPI temperature sensor to the Embedded Fusion Tahoe board. Now we can move on to the software changes.
The Sample Application
When Microsoft created the .NET Micro Framework they must have at least a few people with embedded experience on the team. I say this because they really did think of many of the things that embedded developers would have to deal with. One of those things is being able to simulate hardware without having to actually build the hardware.
The .NET Micro Framework has an emulator system. This article is not meant to be an explanation of the emulator, but I did want to point out one very important feature. Any developer can write what is called an emulator component.
Emulator components are software that acts like hardware. You can create devices that connect up to the emulator system in such a way that the managed drivers that you write can’t tell the difference between talking to an emulated component or the real thing.
The temperature sensor sample application has such an emulated component. The source code is all included and I recommend taking a look at it. The emulated AD7314 component “connects” to the emulator’s SPI bus and responds in exactly the same way that the real AD7314 responds. This allows the developer to write their AD7314 managed driver using the emulator and know that the same device driver will work with the real AD7314 hardware.
Differences Between Platforms
Even if we were not dealing with going from running our application on the emulator versus the hardware each hardware platform usually requires its own libraries to access it.
The Embedded Fusion Tahoe board has a set of libraries that ship with the board. Not only do these libraries give access to the specific hardware on the Tahoe, but also add a lot of great functionality.
Embedded Fusion Tahoe SDK
If you have not done so already, you will need to install the Embedded Fusion Tahoe SDK that came with your Tahoe board. I always recommend checking the Embedded Fusion web site to download any new versions of the SDK.
Copy the Project
At this point I would recommend that the reader copy the TemperatureSensor sample application to a new folder. I called my new folder TemperatureSensor – Hardware. Make sure that you don’t have the solution open in Visual Studio when you copy it. Once you have copied the project, open it in Visual Studio.
Embedded Fusion Tahoe References
The first modification that we will have to make to the sample application is to add the references to the Embedded Fusion Tahoe libraries.
Figure 11
Find the Solution Explorer in Visual Studio. One of the items under the project is labeled References. Right click on References and choose Add New Reference. A dialog appears that shows the available libraries that you can add as shown in Figure 12.
Figure 12
If you properly installed the Embedded Fusion Tahoe SDK you will see 3 components in the list that start with EmbeddedFusion. Highlight all 3 of them and click the OK button. You will now be able to use and reference EmbeddedFusion specific functionality in your project.
There are 2 places where we need to use Tahoe specific functionality. The first place is in the button implementation. If you come from a PC programming background, then you are probably used to having input devices available in a generic way. When you enter the truly embedded world you will often find that input comes in the form of interrupts and GPIO pins. This is going on under the hood on a PC, but the OS deals with it for you.
The .NET Micro Framework allows the developer the flexibility to go all the way to the hardware without much fuss. There are a couple of helper methods, however, that can make button input easier and provide a more “OS” like feel.
Button Input
Open the GPIOButtonInputProvider.cs source file by double clicking on it. This class is actually provided any time you create a new Window Application for the .NET Micro Framework using the New Project Wizard in Visual Studio.
In order to reference the correct GPIO pins on the Tahoe board we have to add a using statement to the top of the GPIOButtonInputProvider.cs file. You can see that I did this in Figure 13.
Figure 13
Next we need to change the button pin mappings that point to the emulator button pins to the Tahoe board pins for the same buttons. Figure 14 shows the lines that are in the emulator sample and Figure 15 shows the changed lines. Notice that the only change is the pins that the buttons are mapped to. The pins for the emulator are generic hardware pins and the pins on the Tahoe are specific to the Tahoe board.
Figure 14
Figure 15
SPI Chip Select
The only other thing that is different between the emulator and the actual hardware device is which pin to use for the SPI chip select. All of the code for the managed SPI driver is contained in the SpiTemperatureSensor.cs file source file. Open that file now.
At the top of the file you will see using statements similar to the button source file. Again we need to add the reference to the Embedded Fusion libraries. You can see that I added them in Figure 16.
Figure 16
Now we are ready to make the change to the chip select pin that will be used to talk to the AD7314 over the SPI bus. In Figure 17 you can see the original code and in Figure 18 you can see that simple change that I made.
/// <summary> /// Standard public constructor /// </summary> public SpiTemperatureSensor() { // Get a new SPI object that is connected to the temperature sensor _spi = new SPI(new SPI.Configuration((Cpu.Pin)5, true, 0, 0, false, false, 4000, SPI.SPI_module.SPI1)); }
Figure 17
class SpiTemperatureSensor{ /// <summary> /// Keep this private member around as the SPI object /// </summary> private SPI _spi;
/// <summary> /// Standard public constructor /// </summary> public SpiTemperatureSensor() { // Get a new SPI object that is connected to the temperature sensor _spi = new SPI(new SPI.Configuration(Meridian.Pins.GPIO13, true, 0, 0, false, false, 4000, SPI.SPI_module.SPI1)); }
All of the rest of the parameters and functionality of communicating with the SPI AD7314 temperature sensor are exactly the same. In fact, it would even be possible and easy to change the emulator component to exactly match the settings that we just changed, so that you could simulate the exact hardware that you are running on.
Building and Downloading
All we have left to do is to build and download the application to the Embedded Fusion Tahoe board. We will need to change the settings for the project in order to do this, so right click on the project in the solution explorer window and choose properties from the pop up menu as shown in Figure 19.
Figure 19
You should see a new dialog similar to the one in Figure 20. If you have already opened this dialog before for this project and clicked on one of the other tabs then the dialog will look a little different that what you see in Figure 20. No matter what tab the dialog is currently on, you should still see the tabs on the left side of the dialog. One of these tabs should be labeled Micro Framework.
Figure 20
Select the Micro Framework tab and you should see the same window as shown in Figure 21.
Figure 21
If you have built the Temperature Emulator project sample and already changed this sample device to point to it then the Device: drop down should already say Temperature Emulator. This doesn’t really matter to what we are doing, however.
Make sure that you have connected the Embedded Fusion Tahoe board to your PC and installed and configured all of the drivers successfully. If you have then connect the Tahoe board at this time if it is not already connected.
Once you have connected the Tahoe board you can change the Transport selection to USB. When you do this the Device drop down should automatically populate with Meridian_xxxxxx where the x’s are replaced with an ID number that represents your Tahoe board. If you have trouble with the drivers or seeing the Tahoe board in the drop down list I am sure the great folks at Embedded Fusion will be happy to help you. You can visit their web site at.
If everything has gone according to plan and you have the Tahoe board listed under the Device drop down, we are ready to build and deploy the project to the Tahoe board.
You can simply click on the play button on the Visual Studio tool bar as shown in Figure 22.
Figure 22
This will build the application and deploy it to the Tahoe board. Sometimes this can take many seconds, so be patient. If you get a build error, double check all of the changes that you made. Try disconnecting and reconnecting the Tahoe board if you get a deployment error. If you continue to have deployment problems you should contact Embedded Fusion for help.
If everything works you should see a display similar to Figure 23.
Figure 23
Freescale Developer Board
Hardware
There are a couple of differences between using the Tahoe board and the Freescale board. The button pins and GPIO chip select pins are different in the software and the pin connections to the board are different.
Figure 24
Figure 24 shows the top of the Freescale board. You can see a couple of the pins are labeled with SPI designations, but not all of the pins we will need to use are labeled. You will also notice there are several jumper wires soldered onto the board. The board I am using is version 1.3. If you have a newer board the jumpers may not be present and the pins may have changed. If that is the case I would recommend contacting Freescale to get the details of the changes.
Figure 25
Figure 25 shows the pins on the back of the Freescale board. This is where you will have to actually connect the header connectors. As I mentioned above the breading board can be connected to the Embedded Fusion board by a single row header, but the Freescale board requires a 2 row header.
Figure 26
Using the same 2 row header connectors show above for the Embedded Fusion board you can connect the breading board to the Freescale board as shows in Figure 26.
As you look at the back of the Freescale board the same way as shown in Figure 26, notice that the connector on the left (labeled on the Freescale board as P5 on the front side) is plugged in so that it is all the way to the right of that connector. Also notice that the connector on the right (labeled on the Freescale board as P6 on the front side) is plugged in so that it is all the way to the left of that connector.
If you wanted to get smaller headers, you could do that as well. You really only need two 6 pin / 2 row header connectors. Using the larger connectors does make for a more robust connection and it is easer to line up on the correct pins as noted above.
Counting from the left, as shown in Figure 26, the connections are as follows (the following numbers are counted on the header and don't include the 2 left most pins that are not connected to the header):
Left Header Connector (P5):
Right Header Connector (P6):
Software
For the software you will change the assemblies that you include a reference to. Instead of the 3 EmbeddedFusion.SPOT references shown in Figure 12, you only include a single reference to Microsoft.SPOT.Hardware.FreescaleMXSDemo. The using lines shows in Figures 13 and 16 are also different. They should both be:
using Microsoft.SPOT.Hardware.FreescaleMXSDemo;
You will need to change the same lines of code detailed in Figures 15 and 18 above, but with different values. Here is a cross reference of the Tahoe pins versus Freescale pins:
For Figure 15:
For Figure 18:
Conclusion
That’s really all there is to it. I hope you can see how easy it is to connect hardware to a .NET Micro Framework development board and write software for it. One thing to know about this sample and project is that with not a lot of extra effort this could be turned into a commercial product. You could easily create a high end thermostat device that could hook into a smart home network. What makes this even better is the fact that you could share .NET C# code between this application and an application running on a PC also written in Visual Studio.
The power and ease that Microsoft has brought to the embedded world is really cool. The possibilities are endless. Where do you want to go today? (I just had to say that).
real men don't use surface mount adapters for 8 pin packages
I don't know if you were just joking or what, but I got a good laugh out of your comment.
:-)).
I appreciate your effort for all of us who want to measure the temperature.
From last weekend, I've trying to read a ADT7301 temp sensor which is a little different from the AD7314 sensor.
If you please, let me know how to configure the sensor? It has the following characteristics.
-Data is clocked out on the falling edge of SCLK.
-Device is selected when CS is low.
-Requires 5ns minimum for each clock setup, hold time.
Thank you.
A new Comment has been posted to Building a Thermostat with the Microsoft
.NET Micro Framework.
Author: Jongbum Park
Status: Not Published
Spam Score: 1
From last weekend, I've trying to read a ADT7301 temp sensor which is a
little different from the AD7314 sensor.
If you please, let me know how to configure the sensor? It has the following
characteristics..
If you would like to receive an email when updates are made to this post, please register here
RSS
Trademarks |
Privacy Statement | http://blogs.msdn.com/netmfteam/archive/2007/11/12/building-a-thermostat-with-the-microsoft-net-micro-framework.aspx | crawl-002 | refinedweb | 3,752 | 71.04 |
In python, how would I go about making a http request but not waiting for a response. I don't care about getting any data back, I just need to server to register a page request.
Right now I use this code:
urllib2.urlopen("COOL WEBSITE")
What you want here is called Threading or Asynchronous.
Threading:
urllib2.urlopen()in a
threading.Thread()
Example:
from threading import Thread def open_website(url): return urllib2.urlopen(url) Thread(target=open_website, args=[""]).start()
Asynchronous:
Use the requests library which has this support.
Example:
from requests import async async.get("")
There is also a 3rd option using the restclient library which has builtin (has for some time) Asynchronous support:
from restclient import GET res = GET("", async=True, resp=True) | https://codedump.io/share/46C7XipFhwFz/1/python-amp-urllib2---request-webpage-but-don39t-wait-for-response | CC-MAIN-2017-39 | refinedweb | 124 | 68.87 |
Barcode Core Engine
The Core Layer is the lower layer of the ConnectCode Barcode Library. It returns a string of Black and White characters in the format "bwwwbwwbbww". The DLL for this layer is ConnectCodeBarcodeCoreLibrary.dll
The DLL exposes a class Barcodes, and the main method for this class is encode(), which will the return a string of black and white characters in the property EncodedData.
Because the results returned is a text string, it is not tied to any graphics engine. Advanced users who require customizations for their projects will be able to use this DLL to create barcodes for thier special needs. For example, if you are writing a laser engraving product that engrave barcodes and requires the barcodes to be returned as a text string, this layer will allow you to do just that.
The Core Layer requires only a small footprint. It uses only the following namespaces in .Net 2.0
using System;
using System.Collections.Generic;
using System.Text;
If you intend to develop a custom control, such as WPF Control, Win CE Control or SmartPhone Class Library, you can leverage on this Core Layer to do most of the work. With this design, you can be assured that your project will be empowered with a huge amount of scalability and flexibility.
Interpreting the output of the Barcode Core EngineThe EncodedData property of the core engine returns a string depending on the barcode type you have selected. ConnectCode classify the barcode types into four categories A , B , C and D.
Type A Barcodes
b - Black Stroke
w - White Stroke
For example, "bbwbwwbbbwwbwbbwwbbbwwbwwbbwwbbbwbbwwwbbbwbwbb" is interpreted as follow
t - Thin Stroke.
Type B Barcodes
w - Wide Stroke. The Wide Stroke is 3 times the width of the Thin Stroke. The barcode specifications allows the Wide Stroke to be 2-3 times of the Thin Stroke.
The barcode is generated with alternating Black and White Strokes starting with a Black Stroke.
For example, "twttwtwtttwttwttttwtttwwttttwttwttwtwttt" is interpreted as follow
b - Black Stroke
Type C Barcodes
i - Short Black Stroke
s - White Stroke. The White Stroke is 1.3 times the width of the Black Stroke.
For example, "bsisisisbsbsisisbsisbsisisbsbsisisbsisisbsisbsisbsisisbsisbsisb" is interpreted as follow
A - Black Stroke (Width * 1).
Type D Barcodes
B - Black Stroke (Width * 2).
C - Black Stroke (Width * 3).
D - Black Stroke (Width * 4).
E - Black Stroke (Width * 5).
F - Black Stroke (Width * 6).
G - Black Stroke (Width * 7).
H - Black Stroke (Width * 8).
I - Black Stroke (Width * 9).
a - White Stroke (Width * 1).
b - White Stroke (Width * 2).
c - White Stroke (Width * 3).
d - White Stroke (Width * 4).
e - White Stroke (Width * 5).
f - White Stroke (Width * 6).
g - White Stroke (Width * 7).
h - White Stroke (Width * 8).
i - White Stroke (Width * 9).
For example, "aAaAaAbAhAbGdAaGbAaAaAaAaBdAdAaAaHcBaDcAaAaDaA" is interpreted as follow
| http://www.barcoderesource.com/coreEngine.shtml | CC-MAIN-2017-13 | refinedweb | 463 | 68.87 |
From here, the real coding part begins. Gear up to travel to the next part of coding. This will be a lot of fun.
Many times, we need to first see the condition and then make a decision.
As an example, consider that a chocolate costs 10 rupees. So, if you have 10 rupees or more, you can buy the chocolate.
But how do we represent this scenario in C?
This type of decision taking is done by using if statement in C.
if statement
Let's consider an example.
#include <stdio.h> int main() { int a = 5; int b = 5; if ( a == b ) { printf ( "a and b are equal\n" ); } return 0; }
if (a==b) →
a==b is the condition. Conditions are written inside '
()'. Here, condition is true (will return 1). Since the condition is true, statements inside
if will be executed.
{} after
if represents the body of
if. Whatever is written inside '
{}' is part of
if.
So, the flow is that first the condition of
if is checked and if it is true then statement(s) inside
if are executed.
#include <stdio.h> int main() { if ( condition ) { statement; statement; statement; .... } return 0; }
if...else statement
Now, consider that a chocolate costs 10 rupees and a candy costs 5 rupees. So, if you have at least 10 rupees, you can buy a chocolate, otherwise, you have to buy candy.
In the world of programming, this is done by using if...else statement in C.
Now, let's see the same example as above but with the if...else statement.
#include <stdio.h> int main() { int a = 5; int b = 8; if (a == b) { printf ("a and b are equal\n"); } else { printf ("a and b are not equal\n"); } return 0; }
In the same way as above, firstly, the condition in
if will be checked and since, it is false (5 and 8 are not equal), so statements in
else are executed. It is that simple.
#include <stdio.h> int main() { if(condition) { statement; statement; .... } else { statement; statement; .... } return 0; }
If the condition is true, statements in
if are executed. Otherwise, statements in
else are executed.
But what is its use?
It has multiple applications in various fields like robotics and many other. For now, let's consider an example that a person is eligible to vote in an election only if his age is more than 18 years.
The C program for that will be:
#include <stdio.h> int main() { int age ; printf ( " Enter your age " ); scanf ( "%d" ,&age) ; if ( age >= 18 ) { printf ( " Your age is 18+.\n" ); printf ( " Eligible to vote\n" ); } else { printf ( " Your age is not yet 18.\n" ); printf ( " not eligible to vote\n" ); } return 0; }
Your age is 18+.
Eligible to vote
The above example can be understood easily. The condition in
if is
age >= 18. This means that if 'age' is more than or equal to 18 then it will print statements inside
if, otherwise statements inside
else will be printed.
else if statements
Many times we fall in situations when 'if' and 'else' are not sufficient. For example, if you have 5 rupees then you will buy candy, or if you have 10 rupees, then chocolate and if more than 100, then cake. Thanks to C, because it provides another tool 'else if' to get this thing done.
Consider this example:
#include <stdio.h> int main() { int a = 10; if(a==30) { printf("It is 30\n"); } else if(a==10) { printf("It is 10\n"); } else if(a==5) { printf("It is 5\n"); } return 0; }
This is a very simple code only to make you understand the use of
else if. There can be any number of
else if between
if and
else.
#include <stdio.h> int main() { if(condition) { statement statement ... } else if(condition) { statement statement ... } else if(condition) { statement statement ... } else { statement statement .... } return 0; }
If there is a single statement inside
if,
else or
else if, we can skip the braces ({}). Let's look at an example.
#include <stdio.h> int main() { int a = 10; if(a==30) printf("It is 30\n"); else if(a==10) printf("It is 10\n"); else if(a==5) printf("It is 5\n"); return 0; }
As you can see in the above example that a single statement is considered as a part of
if,
else and
else if respectively without any braces ({}).
Nested if/else
We can also use
if or
else inside another
if or
else. See the example to print whether a number is greatest or not to understand it.
#include <stdio.h> int main() { int a = 8 ; int b = 4 ; int c = 10 ; if ( a > b ) { if ( a > c ) { printf ( "a is the greatest number.\n" ) ; } } return 0; }
In the above example, the first expression i.e.,
(a>b) is true. So the statements enclosed in curly brackets {} of the first
if condition are executed.
Within the curly brackets, the first statement i.e.,
if(a>c) will be executed first. Since this condition is false, the statement
printf(" a is the greatest number."); within the curly brackets of this if condition will not be executed.
So, first we checked if 'a' is greater than 'b' or not, if it is, then we compared it with 'c'.
Isn't that simple?
See this example of finding greatest number by taking input by the user.
#include <stdio.h> int main() { int a = 8 ; int b = 4 ; int c = 10 ; printf("Enter three numbers\n"); scanf("%d %d %d",&a,&b,&c); if(a>b && a>c) { printf("%d is greatest\n",a); } else if(b>a && b>c) { printf("%d is greatest\n",b); } else if(c>a && c>b) { printf("%d is greatest\n",c); } return 0; }
4
8
2
8 is greatest
In the above example, we have three numbers a, b and c and we have to find the greatest among them. For that, we will first compare the first number with other numbers i.e., 'a' with 'b' and 'c' both. Now, if the condition (
a>b && a>c) is true (which means that a is the greatest), then the statements enclosed in the curly brackets {} of the first
if condition will be executed. If not so, then it will come to
else if and check for
(b>a && b>c). If this condition is true, then corresponding statements will be executed otherwise it will check the condition given in the last
else if.
Another form of if else - Ternary Operator
We can also judge the condition using ternary operator. Ternary operator checks whether a given condition is true and then evaluates the expressions accordingly. It works as follows.
condition ? expression1 : expression2;
If the condition is true, then expression1 gets evaluated, otherwise expression2 gets evaluated.
#include <stdio.h> int main() { int age; printf("Enter age"); scanf("%d", &age); (age > 18) ? printf("eligible to vote\n") : printf("not eligible to vote\n"); return 0; }
10
not eligible to vote
Here, if the condition
(age > 18) is true, then expression1 i.e.
printf("eligible to vote\n") will get evaluated, otherwise expression2 i.e.
printf("not eligible to vote\n") will get evaluated. Since the value of age that we entered (10) is less than 18, expression2 got evaluated and "not eligible to vote" got printed.
Let's see another example in which we want to find the greater among two numbers.
#include <stdio.h> int main() { int num1 = 4, num2 = 5, num; num = (num2 > num1) ? num2 : num1; printf("The greater number is %d\n", num); return 0; }
In this example, if the condition
(num2 > num1) is true, then 'num2' i.e. 5 will get assigned to 'num', otherwise 'num1' i.e. 4 will get assigned to 'num'. In our case, the values of 'num1' and 'num2' are 4 and 5 respectively. Thus, the condition 'num2 > num1' is true and the value of num2 i.e. 5 got assigned to num and so 5 got printed. | https://www.codesdope.com/c-decide-ifelse/ | CC-MAIN-2020-24 | refinedweb | 1,332 | 74.29 |
Java control flow
last modified July 6, 2020
In this part of the Java tutorial, we will talk about program flow control. We will use several keywords that enable us to control the flow of a Java program.
Java control flow statements
In Java language there are several keywords that are used
to alter the flow of the program. Statements can be executed multiple
times or only under a specific condition. The
if,
else,
and
switch statements are used for testing conditions, the
while and
for statements to create cycles, and the
break and
continue statements to alter a loop.
When the program is run, the statements are executed from the top of the source file to the bottom. One by one.
Java a block. A block is code
enclosed by curly brackets. The brackets are optional if we have only
one statement in the body.
package com.zetcode; import java.util.Random; public class IfStatement { public static void main(String[] args) { Random r = new Random(); int num = r.nextInt(); if (num > 0) { System.out.println("The number is positive"); } } }
A random number is generated. If the number is greater than zero, we print a message to the terminal.
Random r = new Random(); int num = r.nextInt();
These two lines generate a random integer. The number can be positive or negative.
if (num > 0) { System.out.println("The number number is positive" is printed to the terminal. If the
random value is negative, nothing is done. The curly brackets are optional if we have
only one expression.
Java else keyword
We can use the
else keyword to create a simple branch.
If the expression inside the square brackets following the
if
keyword evaluates to false, the statement following the
else
keyword is automatically executed.
package com.zetcode; import java.util.Random; public class Branch { public static void main(String[] args) { Random r = new Random(); int num = r.nextInt(); if (num > 0) { System.out.println("The number is positive"); } else { System.out.println("The number is negative"); } } }
Either the block following the
if keyword or the block
following the
else keyword is executed.
if (num > 0) { System.out.println("The number is positive"); } else { System.out.println("The number is negative"); }
The
else keyword follows the right curly bracket of the
if block. It has its own block enclosed by a pair of
curly brackets.
$ java com.zetcode.Branch The number is positive $ java com.zetcode.Branch The number is negative $ java com.zetcode.Branch The number is negative
We run the example three times. This is a sample output.
Multiple branches with if else
We can create multiple branches using the
else if keyword.
The
else if keyword tests for another condition if and only if
the previous condition was not met. Note that we can use multiple
else if keywords in our tests.
The previous program had a slight issue. Zero was given to negative values. The following program will fix this.
package com.zetcode; import java.util.Scanner; public class MultipleBranches { public static void main(String[] args) { System.out.print("Enter an integer:"); Scanner sc = new Scanner(System.in); int num = sc.nextInt(); if (num < 0) { System.out.println("The integer is negative"); } else if (num == 0) { System.out.println("The integer equals to zero"); } else { System.out.println("The integer is positive"); } } }
We receive a value from the user test it if it is a negative number or positive, or if it equals to zero.
System.out.print("Enter an integer:");
A prompt to enter an integer is written to the standard output.
Scanner sc = new Scanner(System.in); int num = sc.nextInt();
Using the
Scanner class of the
java.util package,
we read an integer value from the standard input.
if (num < 0) { System.out.println("The integer is negative"); } else if (num == 0) { System.out.println("The integer equals to zero"); } else { System.out.println("The integer.
$ java com.zetcode.MultipleBranches Enter an integer:4 The integer is positive $ java com.zetcode.MultipleBranches Enter an integer:0 The integer equals to zero $ java com.zetcode.MultipleBranches Enter an integer:-3 The integer is negative
We run the example three times so that all conditions are tested. The zero is correctly handled.
Java switch statement
The
switch statement is a selection control flow statement.
It allows the value of a variable or expression to control the flow of a program
execution via a multi-way branch. It creates multiple branches in a simpler way
than using the combination of
if and
else if
statements. Each branch is ended with the
break keyword.
We use.
package com.zetcode; import java.util.Scanner; public class SwitchStatement { public static void main(String[] args) { System.out.print("Enter a domain:"); Scanner sc = new Scanner(System.in); String domain = sc.nextLine(); domain = domain.trim().toLowerCase(); switch (domain) { case "us": System.out.println("United States"); break; case "de": System.out.println("Germany"); break; case "sk": System.out.println("Slovakia"); break; case "hu": System.out.println("Hungary"); break; default: System.out.println(.
Scanner sc = new Scanner(System.in); String domain = sc.nextLine();
The input from the user is read from the console.
domain = domain.trim().toLowerCase();
The
trim() method strips the variable from potential leading and trailing
white spaces. The
toLowerCase() converts the characters to lowercase.
Now the "us", "US", or . Inside the body, we can place multiple
case options.
Each option is ended with the
break keyword.
case "us": System.out.println(: System.out.println("Unknown"); break;
The
default keyword is optional. If none of the
case
options is evaluated, then the
default section is executed.
$ java com.zetcode.SwitchStatement Enter a domain:us United States
This is a sample output.
Java switch expression
Java switch expression simpplyfies the original switch statement. It was introduced in Java 12 and enhanced in Java 13.
Java switch expressions support using multiple case labels. They
can return values via
yield keyword.
package com.zetcode; import java.util.Scanner; public class SwitchExpression { public static void main(String[] args) { System.out.print("Enter a domain:"); Scanner sc = new Scanner(System.in); String domain = sc.nextLine(); domain = domain.trim().toLowerCase(); switch (domain) { case "us" -> System.out.println("United States"); case "de" -> System.out.println("Germany"); case "sk" -> System.out.println("Slovakia"); case "hu" -> System.out.println("Hungary"); default -> System.out.println("Unknown"); } } }
The previous switch statement example is rewritten using switch expression.
Java.
package com.zetcode; public class WhileStatement { public static void main(String[] args) { int i = 0; int sum = 0; while (i < 10) { i++; sum += i; } System.out.println++;
The last phase of the
while loop is the updating. We increment
the counter. Note that improper handling of the
while loops may lead
to endless cycles.
$ java com.zetcode.WhileStatement 55
The program calculated the sum of 0, 1, ..., 9 values.
There is a modified version of the
while statement. It is the
do while statement. It is guaranteed that the statements
inside the block are run at least once, even if the condition is not met.
package com.zetcode; public class DoWhile { public static void main(String[] args) { int count = 0; do { System.out.println(count); } while (count != 0); } }
First the block is executed and then the truth expression is evaluated.
In our case, the condition is not met and the
do while statement
terminates.
Java for statement
When the number of cycles is know before the loop is initiated,
we can use the
for statement. In this construct
we declare a counter variable, which is automatically increased
or decreased in value during each repetition of the loop.
package com.zetcode; public class ForStatement { public static void main(String[] args) { for (int i = 0; i < 10; i++) { System.out.println(i); } } }
In this example, we print numbers 0..9 to the console.
for (int i = 0; i < 10; i++) { System.out.println(i); }
There are three phases in a for loop. First, we initiate the counter
i to zero. This phase is done only once. Next comes the condition. If the
condition is met, the statement inside the for block is executed.
Then comes the third phase: the counter is increased. Now we repeat
2 and 3 phases until the condition is not met and the for loop
is terminated. In our case, when the counter
i is equal to 10, the for loop
stops executing.
A for loop can be used for easy traversal of an array. From the
length property of the array, we know the size of the array.
package com.zetcode; public class ForStatement2 { public static void main(String[] args) { String[] planets = {"Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Pluto"}; for (int i = 0; i < planets.length; i++) { System.out.println(planets[i]); } System.out.println("In reverse:"); for (int i = planets.length - 1; i >= 0; i--) { System.out.println(planets[i]); } } }
We have an array holding the names of planets in our Solar System. Using two for loops, we print the values in ascending and descending orders.
for (int i = 0; i < planets.length; i++) { System.out.println--) { System.out.println.
package com.zetcode; import java.util.Arrays; import java.util.Random; public class ForStatement3 { public static void main(String[] args) { Random r = new Random(); int[] values = new int[10]; int num; int sum=0; for (int i = 0; i < 10; i++, sum += num) { num = r.nextInt(10); values[i] = num; } System.out.println(Arrays.toString(values)); System.out.println("The sum of the values is " + sum); } }
In our example, we create an array of ten random numbers. A sum of the numbers is calculated.
for (int i = 0; i < 10; i++, sum += num) { num = r.nextInt(10); values[i] = num; }
In the third part of the for loop, we have two expressions separated by
a comma character. The
i counter is incremented and the current
number is added to the
sum variable.
$ java com.zetcode.ForStatement3 [1, 9, 2, 9, 0, 9, 8, 5, 5, 3] The sum of the values is 51
This is sample execution of the program.
Java enhanced for statement
The enhanced
for statement simplifies traversing over
collections of data. It has no explicit counter. The statement goes through
an array or a collection one by one and the current value is copied to a variable
defined in the construct.
package com.zetcode; public class EnhancedFor { public static void main(String[] args) { String[] planets = { "Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Pluto" }; for (String planet : planets) { System.out.println(planet); } } }
In this example, we use the enhanced
for statement to go
through an array of planets.
for (String planet : planets) { System.out.println(planet); }
The usage of the
for statement is straightforward.
The planets is the array that we iterate through. A
planet
is the temporary variable that has the current value from the array.
The
for statement goes through all the planets
and prints them to the console.
$ java com.zetcode.EnhancedFor Mercury Venus Earth Mars Jupiter Saturn Uranus Pluto
Running the above Java program gives this output.
Java break statement
The
break statement can be used to terminate
a block defined by
while,
for,
or
switch statements.
package com.zetcode; import java.util.Random; public class BreakStatement { public static void main(String[] args) { Random random = new Random(); while (true) { int num = random.nextInt(30); System.out.print(num + " "); if (num == 22) { break; } } System.out.print('\n'); } }
We define an endless
while loop. We use the
break
statement to get out of this loop. We choose a random value from 1 to 30 and print it.
If the value equals to 22, we finish the endless while loop.
while (true) { ... }
Placing true in the brackets of the while statement creates an endless loop. We must terminate the loop ourselves. Note that such code is error-prone. We should be careful using such loops.
if (num == 22) { break; }
When the randomly chosen value is equal to 22, the
break
statement is executed and the
while loop is terminated.
$ java com.zetcode.BreakStatement 23 12 0 4 13 16 6 12 11 9 24 23 23 19 15 26 3 3 27 28 25 3 3 25 6 22 $ java com.zetcode.BreakStatement 23 19 29 27 3 28 2 2 26 0 0 24 17 4 7 12 8 20 22 $ java com.zetcode.BreakStatement 15 20 10 25 2 19 26 4 13 21 15 21 21 24 3 22
Here we see three sample executions of the program.
Java.
package com.zetcode; public class ContinueStatement { public static void main(String[] args) { int num = 0; while (num < 100) { num++; if ((num % 2) == 0) { continue; } System.out.print(num + " "); } System.out.print('\n'); } }
We iterate through numbers 1..99 Java tutorial, we were talking about control
flow structures. We have covered
if,
if else,
else,
while,
switch,
for,
break,
continue statements. | https://zetcode.com/lang/java/flow/ | CC-MAIN-2021-21 | refinedweb | 2,135 | 60.92 |
Difference between revisions of "StarlingX/StarlingX Packet.com iPXE Installation"
Revision as of 16:21, 13 April 2019
Contents
Install AIO Simplex into Packet.com via iPXE
Packet.com is a baremetal public cloud, and they have donated some resources to the StarlingX project. The way that a custom operating system is installed into Packet.com is via iPXE. These instructions show a basic method for initial installation of a StarlingX ISO on Packet.com.
Configure a Web Server to serve ISO and iPXE Confguration
This assumes an Ubuntu 16.04 instance, but any Apache web server should do. It must be available publicly, ie. have a public IP address that is available from the Packet.com data center that the instance is being deployed to. (Typically this would be an instance that is running in the same Packet.com datacenter, but it doesn't have to be.)
Install Apache.
apt install apache -y
Download an ISO from the CENGN StarlingX build archive.
Mount that ISO where it will be available to the webserver process.
mkdir /var/www/html/stx mount -o loop ~/bootimage.iso /var/www/html/stx
Create an iPXE configuration file that is available from the web server. Replace the "webserver_public_ip" with the webservers public IP address. The configuration below will install a AIO Simplex node via the kickstart file indicated in the kernel line. This configuration needs to be available on a public webserver, and would usually be installed on the same webserver as the ISO was mounted on.
NOTE: This is a configuration that is currently working, it may not be the perfect setup. Please feel free to make it better.
set base-url http://<webserver_public_ip>/stx kernel ${base-url}/vmlinuz console=ttyS1,115200n8 root=live:${base-url}/LiveOS/squashfs.img ip=dhcp ks=${base-url}/smallsystem_ks.cfg boot_device=sda rootfs_device=sda inst.text inst.repo=${base-url} security_profile=standard user_namespace.enable=1 initrd ${base-url}/initrd.img imgstat boot
Using a Compute Type with nvme drives
Add the device name to the kernel line to the iPXE configuration file.
Example entry below:
rootfs_device=nvme0n1
Create an Instance in Packet.com
Use "Custom iPXE" for the operating system choice and point it to the iPXE configuration URL that was setup in the webserver configuration step.
The currently used node type is c1.small.x86.
Then access the Packet.com instance via it's "Out of Band Console." Once the instance is available and you can connect to the out of band console you should be able to see the instance booting from the iPXE configuration.
Once the installation has completed the node will reboot and will not boot by PXE again unless requested. At this point it seems the public IP that is provided by Packet.com is available.on the first interface and is accessible from the Internet via SSH.
Run config_controller
TBD | https://wiki.openstack.org/w/index.php?title=StarlingX/StarlingX_Packet.com_iPXE_Installation&oldid=169420&diff=prev | CC-MAIN-2019-43 | refinedweb | 479 | 60.72 |
I change the GDB test suite to use this inputrc: $if version >= 8.0 set enable-bracketed-paste off $endif However, to my surprise, this did not work. I believe this is a bug in bind.c:parser_if. This function computes 'op' as 'OP_GE' (ok so far), but then does: case OP_GE: _rl_parsing_conditionalized_out = rlversion >= versionarg; break; This sets _rl_parsing_conditionalized_out to 1, but it should be 0. I think all of these results should be inverted. See the appended patch for what I mean. Tom diff --git a/bind.c b/bind.c index 87596dc..03aa03d 100644 --- a/bind.c +++ b/bind.c @@ -1312,6 +1312,7 @@ parser_if (char *args) _rl_parsing_conditionalized_out = rlversion <= versionarg; break; } + _rl_parsing_conditionalized_out = !_rl_parsing_conditionalized_out; } /* Check to see if the first word in ARGS is the same as the value stored in rl_readline_name. */ | https://lists.gnu.org/archive/html/bug-readline/2021-01/msg00009.html | CC-MAIN-2022-21 | refinedweb | 133 | 60.72 |
Fl
#include <FL/Fl.H>
The Fl class is the FLTK global (static) class containing state information and global methods for the current application.:
bool state_changed; // anything that changes the display turns this on void callback(void*) { if (!state_changed) return; state_changed = false; do_expensive_calculation(); widget->redraw(); } main() { Fl::add_check(callback); return Fl::run(); }
Add.(); }.
FLTK provides an entirely optional command-line switch parser. You don't have to call it if you don't like them!()).
Returns the height offset for the given boxtype. See box_dy.
Returns the width offset for the given boxtype. See box_dy.
Returns the X offset for the given boxtype. See box_dy.():
int X = yourwidget->x() + Fl::box_dx(yourwidget->box()); int Y = yourwidget->y() + Fl::box_dy(yourwidget->box()); int W = yourwidget->w() - Fl::box_dw(yourwidget->box()); int H = yourwidget->h() - Fl::box_dh(yourwidget-: sets the window that is returned by first_window. The window is removed from wherever it is in the list and inserted at the top. This is not done if Fl::modal() is on or if the window is not shown(). Because the first window is used to set the "parent" of modal windows, this is often useful.
Causes all the windows that need it to be redrawn and graphics forced out through the pipes. This is what wait() does before looking for events.
Get or set())..
This is used when pop-up menu systems are active. Send all events to the passed window no matter where the pointer or focus is (including in other programs). The window does not have to be shown() , this lets the handle() method of a "dummy" window override all event handling and allows you to map and unmap a complex set of windows (under both X and WIN32 some window must be mapped because the system interface needs a window id).
If grab() is on it will also affect show() of windows by doing system-specific operations (on X it turns on override-redirect). These are designed to make menus popup reliably and faster on the system.
To turn off grabbing do Fl::grab(0).
Be careful that your program does not enter an infinite loop while grab() is on. On X this will lock up your screen! To avoid this potential lockup, all newer operating systems seem to limit mouse pointer grabbing to the time during which a mouse button is held down. Some OS's may not support grabbing at all.
Returns the()..
Set things up so the receiver widget will be called with an FL_PASTE event some time in the future for the specified clipboard..
Get or set()).
All Fl_Widgets that don't have a callback defined use a default callback that puts a pointer to the widget in this queue, and this method reads the oldest widget out of this queue.())..).
Returns the width of the screen in pixels. the first form is non-zero if there are any visible windows - this may change in future versions of FLTK.
The second form).. | http://www.fltk.org/doc-1.1/Fl.html | crawl-001 | refinedweb | 497 | 73.68 |
Question
Why should I learn Python? Are there any advantages to learning Python over other languages?
Answer
Python is a great language to learn, whether it’s your first time programming or not, for several reasons:
- It reads like plain English! This is something you can especially appreciate with a side-by-side. Take a look and, without worrying about what these code bits actually do, think about which is harder to read:
// Java public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World!"); } }
# Python print “Hello, World!”
It’s one of the most popular programming languages with a massive community. This means help is readily available when you’re stuck!
It’s used in many fields and industries, like Web development, machine learning and AI, and data science, just to name a few. So once you learn the fundamentals, you can grow into the role of your choice!
Python is one of our most popular courses and is only getting more so. We covered the rise of Python in our Codecademy blog, check it out to learn even more about why Python is so popular.
Do you have any insights into why people should learn Python? Share them below! | https://discuss.codecademy.com/t/why-learn-python/296497 | CC-MAIN-2018-34 | refinedweb | 204 | 74.29 |
Hi..
Have you looked at the dump() method on ParseResults? dump() lists out a
nested list of the ParseResults (the same as you get from the asList()
method), then lists a hierarchical tree of the named results.
Here is how your program's parsed tokens look using dump():
import pyparsing as pp
Name = pp.Word(pp.alphas).setResultsName('Name')
S = pp.delimitedList(Name, delim=pp.White())
results = S.parseString("a b c")
print results.dump()
Prints:
['a', 'b', 'c']
- Name: c
You're probably wondering why 'Name' only gives 'c' as the value. In the
default use of setResultsName, results names work like a Python dict. While
parsing progresses, 'Name' gets successively set to 'a', then 'b', and then
finally 'c'.
If you want to see all the 'Name' results, then use a different form of
setResultsName, passing listAllResults as True:
Name = pp.Word(pp.alphas).setResultsName('Name',listAllMatches=True)
S = pp.delimitedList(Name, delim=pp.White())
results = S.parseString("a b c")
print results.dump()
Prints:
['a', 'b', 'c']
- Name: ['a', 'b', 'c']
A couple of other comments:
- Pyparsing skips over whitespace by default, so there is no need to define
a sequence of Name words as a delimitedList(Name,delim=White()). You can
just OneOrMore(Name) to get this same result.
- setResultsName can be used in the manner you have done, and you should use
listAllMatches=True so that you don't just get the latest Name. When I
first added setResultsName, my intention was that a single expression might
be used several times within a larger grammar, with different results names
depending on where in the grammar the expression occurred. For instance,
here is an expression for a two-digit number:
num2d = pp.Word(pp.nums,exact=2)
I could label these with:
num2d.setResultsName("twoDigitNumber")
But if I were parsing a timestamp, this would be much more meaningful:
timestamp = num2d.setResultsName("year") + "/" + \
num2d.setResultsName("month") + "/" + \
num2d.setResultsName("day") + \
num2d.setResultsName("hours") + ":" + \
num2d.setResultsName("minutes") + ":" + \
num2d.setResultsName("seconds")
Parsing the string "07/06/23 12:34:56" and calling dump() gives this output:
['07', '/', '06', '/', '23', '12', ':', '34', ':', '56']
- day: 23
- hours: 12
- minutes: 34
- month: 06
- seconds: 56
- year: 07
I am glad you're using results names, I think they are an important feature
for all but the most trivial parsers. In the next release of pyparsing,
this syntax will be simplfied to:
timestamp = num2d("year") + "/" + \
num2d("month") + "/" + \
num2d("day") + \
num2d("hours") + ":" + \
num2d("minutes") + ":" + \
num2d("seconds")
Probably more than you were bargaining for, but I hope this helps.
-- Paul
> -----Original Message-----
> From: pyparsing-users-bounces@...
> [mailto:pyparsing-users-bounces@...] On
> Behalf Of sam lee
> Sent: Saturday, June 23, 2007 11:24 PM
> To: pyparsing-users@...
> Subject: [Pyparsing] How can I print syntax tree for debug?
>
> Hi.
>
>.
>
> --------------------------------------------------------------
> -----------
> This SF.net email is sponsored by DB2 Express
> Download DB2 Express C - the FREE version of DB2 express and take
> control of your XML. No limits. Just data. Click to get it now.
>
> _______________________________________________
> Pyparsing-users mailing list
> Pyparsing-users@...
>
>
> | http://sourceforge.net/p/pyparsing/mailman/pyparsing-users/thread/001601c7b61e$e01a1bb0$6500a8c0@AWA2/ | CC-MAIN-2014-23 | refinedweb | 502 | 55.95 |
Using a Sound Sensor With a Raspberry Pi to Control Your Philips Hue Lights
Introduction: approach.
Step 1: Hardware Requirements
- a Raspberry Pi 2 to run the software / scripts (every Raspberry generation should be feasible to implement this)
- a cheap sound sensor for a couple of bucks from ebay/amazon/etc - e.g. here or here
- some female-to-femalejumper wires to connect the sound sensor with the Pi
- Philipps Hue lights, you can go for every set up you want - I went for
- Philips Friends of hue - LivingColors Bloom
- Philips Hue Go
- Philips hue - LED
- one of the above needs to be the starter kit / you will need a bridge to control the lights after all
Step 2: Software Requirements
In my setup I used a Raspberry Pi 2 with Raspbian Wheezy with a few python libraries:
- Raspbian
- Python package python-dev
- Python library requests
- Python library qhue from Quentin Stafford-Fraser
- Python library RPI.GPIO
Step 3: Initial Setup
This will outline the main steps of the set up, as you can see it's pretty simple and should be straight-forward to understand.
- Connect the sound sensor to the Raspberry Pi via female-to-female jumpers, using 3 jumpers for:
- VCC (white cable, Physical Pin 2)
- GND (black cable, Physical Pin 6)
- D0 (grey cable, I used the Physical pin 7 to connect the sensor)
Check out this tutorial for better pictures (note they use pin #12 instead of #7 in their tutorial)
- power on your Raspberry Pi
- If your sound sensor has indicator LEDs, make sure it triggers when creating noise to test the basic sound sensor capabilities
- install Raspbian image (if not done already)
- setup up and connect via ssh
- install necessary software (python-dev) via
apt-get install python-dev
- and install the python libraries requests, qhue and RPI.GPIO via
pip install requests
git clone
cd qhue
pip install RPI.GPIO
Step 4: Set Up the Script
Let's open a new script named sensor.py and put in the below code.
nano sensor.py
Content:
import time
import RPi.GPIO as GPIO from qhue import Bridge
GPIO.setmode(GPIO.BOARD) # use board pin numbers # define pin #7 as input pin pin = 7 GPIO.setup(pin, GPIO.IN)
b = Bridge("192.168.1.30", 'e254339152304b714add57d14a8fdbb') groups = b.groups # as groups are handy, I will contorll all
while 1: if GPIO.input(pin) == GPIO.LOW: i = 3 # number of iterations for l in range(1,i+1): # this is one of the temporary effects, see official docs # at b.groups[0].action(alert="select") #group 0 = all lights time.sleep(1) time.sleep(10)
Step 5: Fire It Up!
Just run the script via
sudo python sensor.py
And trigger the sound input - if all went well, your lights should blink 3 times..
Congratulations, you just created a listener script to listen to your doorbell / any sound you wish!
Step 6: Set Up Autostart for Your Listener Script
We will be utilizing the Linux rc.local functionality and create a new shell script that will run the python part we just created in the previous step:
nano /home/pi/qhue/sensor.sh
Content:
#!/bin/sh<br># sensor.sh sudo python /home/pi/qhue/sensor.py
Now make this script executable by performing:
chmod +x sensor.sh
Open up the /etc/rc.local file
nano /etc/rc.local
and enter the following line before exit 0 to run the script at startup
sudo /home/pi/qhue/sensor.sh
Save the file and reboot your Raspberry Pi via
sudo shutdown -r now
Step 7: Summary / To-Do
This instructable outlines the very basics, I will add a few pictures and more details once I got the time to fine-tune this.
It is worth noticing that the distance to your sound source shouldn't be too far away, neither should you place the Pi in a spot where you would expect regular noise.
Hope you enjoyed the guide, any suggestions? Feel free to comment ;)
Cheers
I tried to do this but i'm having problems with the script.
the if condition if GPIO.input(pin) == GPIO.LOW: is always true! so it's detecting sound all the time i already adjusted the sensitivity but nothing happens it keeps throwing alerts
can you pls help me with this?
Hey there,
you could try to test if the sound sensor works first.
In the middle of the code try something like:
while 1:
Print GPIO.input(pin)
To print the status of the sensor. Next trigger the sensor witrh a clap or something and see if the status changes. Also make sure you have connected the pins correctly.
If found this tutortial, they have better pictures that show you how to connect the sensor to the GPIO from the Pi. Note that they are using pin #12 instead of #7 as I do. You could use their setup and change the pin number in my code to see if it works? :)
Here's the instructable:
Cheers
David
Really nice!
But I have a question, has the sensor an digital output? Or is just analog?
In the case that is just analog, is it safe to connect it directly to the raspberry?
Hey there! The sound sensor I've used is analog.
I'm no expert here but I guess you should NOT hook it up directly. My setup is up and running since 14 months now though :)
Cheers!
Can i use it with my Milight bridge ? Or only coded for Hue?
This should work with any hardware that has an API you can access.
I could think of a setup with FHEM ( ) that most definitely will be able to control your lights.
nice
This is so cool! I love finding ways to control internet connected things! | http://www.instructables.com/id/Using-a-sound-sensor-with-a-Raspberry-Pi-to-contro/ | CC-MAIN-2017-51 | refinedweb | 968 | 72.05 |
Proposal for adding very usefull macro (not function) to generate delay with specified number of cpu machine cycles.
It is present in some other compilers and named __delay_cycles().
Compiler should translate it to assembler code as follows:
__delay_cycles(1) -> NOP
__delay_cycles(2) -> 2xNOP
__delay_cycles(..) -> calculated assembler loop
__delay_cycles(...) -> calculated assembler loop in loop
I know that it isn't accurate while interrupts are active, but delay is not shorter than expected.
Whiteboard
Let me propose my decision:
#include <stdint.h>
static __inline__ __attribute_
{
#if ARCH_PIPELINE_
# define EXTRA_NOP_CYCLES "nop"
#else
# define EXTRA_NOP_CYCLES ""
#endif
__asm__ __volatile__
(
".syntax unified" "\n\t" // is to prevent CM0,CM1 non-unified sintax
"loop%=:" "\n\t"
" subs %[cnt],#1" "\n\t"
" bne loop%=" "\n\t"
: [cnt]"+r"(cy) // output: +r means input+output
: // input:
: "cc" // clobbers:
);
}
static __inline__ __attribute_
{
#define MAXNOPS 4
if (x<=MAXNOPS)
{
if (x==1) {nop();}
else if (x==2) {nop(); nop();}
else if (x==3) {nop(); nop(); nop();}
else if (x==4) {nop(); nop(); nop(); nop();}
}
else // because of +1 cycle inside delay_4cycles
{
uint32_t rem = (x-1)%MAXNOPS;
if (rem==1) {nop();}
else if (rem==2) {nop(); nop();}
else if (rem==3) {nop(); nop(); nop();}
if ((x=(x-1)/MAXNOPS)) delay_4cycles(x); // if need more then 4 nop loop is more optimal
}
}
By @Traumflug
For a calibrated delay loop with microseconds as parameter, see https:/
Next to interrupts, the prefetch engine is another source of unexpected additional delays. To deal with this, one can add a __ASM (".balign 16"). Then the compiler adds NOPs to make sure code always starts at a 16-byte boundary, giving consistent behavior in the loop. Moving such a loop to a place where it crosses a 16-byte boundary makes it slower by a few clocks without additionally executed instructions, the CPU just sleeps for a clock tick or two.
Also something to consider is that more feature rich Cortex' may simply ignore NOPs. They enter the CPU pipeline then, but get discarded before they consume time. This info is picked up from one of the Cortex-M user manuals.
By David Brown
The delay_cycles function should check that the parameter x is constant:
static __inline__ __attribute_
{
if (__builtin_
... // same as above
} else {
delay_4cycles(x / 4);
}
}
It would be a bit fiddly to try and get the dynamic version cycle-perfect, rather than rounded to a multiple of 4 cycles - and I doubt if it is worth the effort. But this version would still be more accurate than the first version.
And could another instruction be used instead of NOP? Like:
asm volatile(" add %[x], #0 " : [x] "+r" (x) : ) | https://blueprints.launchpad.net/gcc-arm-embedded/+spec/delay-cycles | CC-MAIN-2017-51 | refinedweb | 432 | 58.62 |
Right, i'm doing a piece of work for uni and basically i need it to read in a set of results that are contained in a .txt file and split them at the delimiter (which i have managed to do). The problem is i need to take these results and organise them into a table which looks something like this :
<home_team_name> [<home_team_score>] | <away_team_name> [<away_team_score>]
Then i need to make sure that all of the scores read in were valid i.e. both names and both scores were there. If a team name or score was missing then it should not be output to the screen.
Finally at the end i need to output something that will say... "The valid match count was # . Total goals scored were #. Invalid match count was # .
I'm very new to java this is one of my first pieces of work. Would i need to split each of the names and scores into a seperate class? I'll post what code i have so far. Thanks!!
import java.io.FileReader; import java.io.BufferedReader; import java.io.IOException; public class linebased { public static void main(String[] args) throws IOException { BufferedReader inputStream = null; try { inputStream = new BufferedReader(new FileReader("results2.txt")); String text; while ((text = inputStream.readLine()) != null) { String [] splitupText = text.split(":"); // split the text into multiple elements for ( int i = 0; i < splitupText.length; i++ ) { // loop over each element String nextBit = splitupText[i]; // get the next element (indexed by i) nextBit = nextBit.trim(); // use the trim method to remove leading and trailing spaces System.out.println(nextBit); // display the next element } } } finally { if (inputStream != null) { inputStream.close(); } } } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/1298-football-results.html | CC-MAIN-2015-11 | refinedweb | 272 | 76.32 |
Hi Dag,
Thanks for your effort. The problem is as far I understand a special behavoiour on AIX Java
Mashine. Does anybody of the developing or testing people got access on something like that?
-----Ursprüngliche Nachricht-----
Von: Dag.Wanvik@Sun.COM [mailto:Dag.Wanvik@Sun.COM]
Gesendet: Freitag, 27. November 2009 16:15
An: Derby Discussion
Betreff: Re: AW: AW: AW: OOM with millions of weakly-referenced Derby objects
Hi Malte,
Malte.Kempff@de.equens.com writes:
> Hi Dag, The problem in my case is, that I don't have access to any
> AIX-Computer, and that is really a pity, because I cannot
> reconstruct the scenario on my own and watch it with my own eyes.
> That all happened on a production system and the provider denies
> trying it out again. But attached you can find the SQL-script
> without being UTF8. So that Problem should occur in the same way
> used on AIX (5.3) BTW: is there a way to be kind of independent of
> the input format?
I converted the script you enclosed to UTF-8 and ran it through ij
without any problems.
I also tried to run it with the enclosed program, and it gave me no
errors when the inout file was encoded with UTF-8 (Comments had
non-ASCII German character), and it printed no errors on my
OpenSolaris system (I don't have access to AIX).
In any case, should you see this problem again, please feel free to
file a bug report(*), preferably with a repro and/or logs.
As for independence of the input format, since runScript requires that
you specify the encoding, you could do some encoding auto-detect logic
on the script, I guess..
(*)
Hope this helps,
Dag
import org.apache.derby.tools.ij;
import java.sql.*;
import java.io.*;
public class Foo {
public Foo() {};
static public void main(String[] args)
throws SQLException,
UnsupportedEncodingException,
FileNotFoundException,
ClassNotFoundException {
Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
Connection c = DriverManager.getConnection("jdbc:derby:wombat;create=true");
FileInputStream fs = new FileInputStream(args[0]);
int errs = ij.runScript(c,
fs,
"UTF-8",
System.out,
"UTF-8");
System.out.println("\n\nerrs=" + errs);
c.close();
}
}
>
> Malte
>
> -----Ursprüngliche Nachricht-----
> Von: Dag.Wanvik@Sun.COM [mailto:Dag.Wanvik@Sun.COM]
> Gesendet: Freitag, 27. November 2009 01:19
> An: Derby Discussion
> Betreff: Re: AW: AW: OOM with millions of weakly-referenced Derby objects
>
> Mal:h
>> Number of SQLExceptions thrown during the execution, -1 if not
>> known.
>>
>> If so, are you seeing 0 or -1 returned here?
>>
>> Dag | http://mail-archives.apache.org/mod_mbox/db-derby-user/200911.mbox/%3C0773A3CE9DAC7B42B8F564C52805678D02647E87@EVS01.INTERN.INTERPAY.NL%3E | CC-MAIN-2014-49 | refinedweb | 420 | 58.08 |
fopen - open a stream
#include <stdio.h> FILE *fopen(const char *filename, const char *mode);
The fopen() function opens the file whose pathname is the string pointed to by filename, and associates a stream with it.
The argument mode points conformance., output must not be directly followed by input without an intervening call to fflush() or to a file positioning function (.Fn fseek , fsetpos() or rewind()), and input must not be directly followed by output without an intervening call to a file positioning function, unless the input operation encounters end-of-file.
When opened, a stream is fully buffered if and only if it can be determined not to refer to an interactive device. The error and end-of-file indicators for the stream are cleared.
If mode is w, a, w+ or a+ and the file did not previously exist, upon successful completion, fopen() function will mark for update the st_atime, st_ctime and st_mtime fields of the file and the st_ctime and st_mtime fields of the parent directory.
If mode is w or w+ and the file did previously exist, upon successful completion, fopen() will mark for update the st_ctime and st_mtime fields of the file. The fopen() function will allocate a file descriptor as open() f fopen() function may fail if:
- [EINVAL]
- The value of the mode argument is not valid.
- .
None.
None.
None.
fclose(), fdopen(), freopen(), <stdio.h>.
Derived from Issue 1 of the SVID. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/fopen.html | CC-MAIN-2015-27 | refinedweb | 238 | 60.35 |
Full Stack Web Development Internship Program
- 3k Enrolled Learners
- Weekend/Weekday
- Live Class
The very first program that any Java programmer learns to code is Hello World Program in Java. But many a time we miss out on the nitty-gritty of the basic syntax. Through the medium of this article, I will get into the details of the Hello World Program in Java.
Below are the topics covered in this article:
Let’s get started.
Before we get into the details, lets first start with the coding and see how a basic Hello World program in Java is coded.
public class HelloWorldDemo { public static void main(String[] args) { System.out.println( "Hello World!" ); System.exit( 0 ); //success } }
Now that you are done with the coding, lets now analyze the program’s syntax in depth.
Line 1: public class HelloWorldDemo {
This line makes use of the keyword class for declaring a new class called HelloWorldDemo. Since Java is an Object-Oriented Programming (OOP) Language, the entire class definition, including all of its members must be contained in between the opening curly brace { and the closing curly brace}. Also, it is using the public keyword to specify the accessibility of the class from outside the package.
Line 2: public static void main( String[] args ) {
This line declares a method called main(String[]). It is called the main method and acts as the entry point for the Java compiler to begin the execution of the program. In other words, whenever any program is executed in Java, the main method is the first function to be invoked. Other functions in the application are then invoked from the main method. In a standard Java application, one main method is mandatory to trigger the execution.
Lets now break down this entire line and analyze each word:
public: it is an access modifier specifies the visibility. It allows JVM to execute the method from anywhere.
static: It is a keyword which helps in making any class member static. The main method is made static as there is no need for creating an object to invoke the static methods in Java. Thus, JVM can invoke it without having to create an object which helps in saving the memory.
void: It represents the return type of the method. Since the Java main method doesn’t return any value its return type is declared as void.
main(): It is the name of the method that has been configured in the JVM.
String[]: It represents that the Java main method can accept a single line argument of the type String array. This is also known as java command line arguments. Below I have listed down a number of valid java main method signatures:
Line 3: System.out.println( “Hello World!” );
System: It is a pre-defined class in java.lang package which holds various useful methods and variables.
out: It is a static member field of type PrintStream.
println: It is a method of PrintStream class and is used for printing the argument that has been passed to the standard console and a newline. You can also use print() method instead of println().
Line 4: System.exit( 0 );
The java.lang.System.exit() method is used to exit the current program by terminating the currently executing Java Virtual Machine. This method takes a status code as input which is generally a non-zero value. It indicates in case any abnormal termination occurs.
So that was all about the program syntax. Let’s now see how to compile Hello World in Java program.
Now what you need to is type in this program in your text editor save it with the class name that you have used in your program. In my case, I will be saving it as HelloWorldDemo.java.
Next step is to, go to your console window and navigate to the directory where you have saved your program.
Now in order to compile the program type in the below command:
javac HelloWorldDemo.java
Note: Java is case-sensitive, thus make sure that you type in the file name in the correct format.
If successfully executed, this command will generate a HelloWorldDemo.class file which will be machine independent and portable in nature.
Now that you have successfully compiled the program, let us try to execute our Hello World Program in Java and get the output.
In order to execute your HelloWorld in Java program on the command line, all you need to do is type in the below code:
java HelloWorldDemo
Voila! You have successfully executed your first program in Java.
In case you are using an IDE, you can skip all this hassle and just press the execute button in your IDE to compile and execute your Hello World in Program Java.
This brings us to the end of this article on Hello World Program in Java. If you want to know more about Java you can refer to our other Java Blogs.
Now that you have understood what is a Hello World Program in Java, check out the Java Certification Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. Edureka’s Java J2EE and SOA training and certification course “Hello World Program in Java” article and we will get back to you as soon as possible.
edureka.co | https://www.edureka.co/blog/hello-world-program-in-java/ | CC-MAIN-2022-21 | refinedweb | 895 | 63.9 |
Hi All,
I want to update the workflow priority such that all the steps should be in the updated priority. By default, it is getting medium priority.
I tried to create a new custom process step to update the priority, however, it is not working properly.
PFB the code to I currently have in the custom process step
import com.adobe.granite.workflow.exec.InboxItem.Priority;
@component(service=WorkflowProcess.class, property = {"process.label=Assign Priority"})
public class AssignPriority implements WorkflowProcess {
public void execute(WorkItem item, WorkflowSession wfSession, MetaDataMap args) throws WorkflowException {
item.setPriority(Priority.HIGH);
}
}
Any help would be appreciated.
Topics help categorize Community content and increase your ability to discover relevant content.
Views
Replies
Total Likes
Hi,
You can set the priority for the workflowItem. I am not sure if you need to call wfsession save at the end. If workflow API does not work then you can use JCR/Node API.
Priority: The available options are High, Medium, and Low. The default value is Medium.
You can change it though in process step using WorkItem API....
OR
While creating new task, you can set the priority.
The logic to fetch or create data for TouchUI inbox is at the Java side, below is the basic flow:
~ A request to '/aem/inbox' resolves to path '/libs/cq/inbox/content/inbox'.
~ This renders data per [1] which creates the html page per [2].
~ At the same time [1] triggers a get request to fetch the data for the inbox page [3] which is handled by a servlet [4] and used by [2] to create the complete inbox page.
~ Further on logic goes to [5] and [6] to fetch the data per the logic.
[0]: /libs/cq/inbox/content/inbox
[1]: /libs/cq/inbox/content/inbox/jcr:content/views/list/datasource
[2]: /libs/cq/inbox/gui/components/inbox/inboxitem/list/list.html
[3]: cq/inbox/gui/components/inbox/datasource/itemsdatasource
[4]: com.adobe.cq.inbox.impl.servlet.ItemsDataSourceServlet.java
[5]: com.adobe.granite.workflow.core.WorkflowSessionImpl.java
[6]: com.adobe.granite.workflow.core.jcr.WorkItemManager.java
Views
Replies
Total Likes
-
Views
Replies
Total Likes
This answer is not correct at all.
WorkItem API is not working neither for priority or due date.
I'm searching for a while now, how can I. update some details for the workItem and I am not able to find anything valide.Is really frustrating.
Please let us know how should we use this WorkItem API.
Thanks you! | https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/how-to-update-workflow-priority/m-p/443966/highlight/true | CC-MAIN-2022-40 | refinedweb | 412 | 51.04 |
Documenting Python¶
The Python language has a substantial body of documentation, much of it contributed by various authors. The markup used for the Python documentation is reStructuredText, developed by the docutils project, amended by custom directives and using a toolset named Sphinx to post-process the HTML output.
This document describes the style guide for our documentation as well as the custom reStructuredText markup introduced by Sphinx to support Python documentation and how it should be used.
The documentation in HTML, PDF or EPUB format is generated from text files written using the reStructuredText format and contained in the CPython Git repository..
Introduction¶!
Style guide¶.
A sentence-ending period may be followed by one or two spaces; while reST ignores the second space, it is customarily put in by some users, for example to aid Emacs’ auto-fill mode.
Footnotes¶.
Capitalization¶
In the Python documentation, the use of sentence case in section titles is preferable, but consistency within a unit is more important than following this rule. If you add a section to a chapter where most sections are in title case, you can either convert all titles to sentence case or use the dominant style in the new section title.
Sentences that start with a word for which specific rules require starting it with a lower case letter should be avoided.
Note
Sections that describe a library module often have titles in the form of “modulename — Short description of the module.” In this case, the description should be capitalized as a stand-alone sentence.:
-.
- reST
For “reStructuredText,” an easy to read, plaintext markup syntax used to produce Python documentation. When spelled out, it is always one word and both forms start with a lower case ‘r’.
- Unicode
The name of a character coding system. This is always written capitalized.
- Unix
The name of the operating system developed at AT&T Bell Labs in the early 1970s.
Affirmative Tone¶
The documentation focuses on affirmatively stating what the language does and how to use it effectively.
Except for certain security or segfault risks, the docs should avoid wording along the lines of “feature x is dangerous” or “experts only”. These kinds of value judgments belong in external blogs and wikis, not in the core documentation.
Bad example (creating worry in the mind of a reader):
Warning: failing to explicitly close a file could result in lost data or excessive resource consumption. Never rely on reference counting to automatically close a file.
Good example (establishing confident knowledge in the effective use of the language):
A best practice for using files is use a try/finally pair to explicitly close a file after it is used. Alternatively, using a with-statement can achieve the same effect. This assures that files are flushed and file descriptor resources are released in a timely manner.
Economy of Expression¶
More documentation is not necessarily better documentation. Err on the side of being succinct.
It is an unfortunate fact that making documentation longer can be an impediment to understanding and can result in even more ways to misread or misinterpret the text. Long descriptions full of corner cases and caveats can create the impression that a function is more complex or harder to use than it actually is.
Security Considerations (and Other Concerns)¶
Some modules provided with Python are inherently exposed to security issues
(e.g. shell injection vulnerabilities) due to the purpose of the module
(e.g.
ssl). a note.
Code Examples¶
Short code examples can be a useful adjunct to understanding. Readers can often grasp a simple example more quickly than they can digest a formal description in prose.
People learn faster with concrete, motivating examples that match the context of
a typical use case. For instance, the
str.rpartition() method is better
demonstrated with an example splitting the domain from a URL than it would be
with an example of removing the last word from a line of Monty Python dialog.
The ellipsis for the
sys.ps2 secondary interpreter prompt should only
be used sparingly, where it is necessary to clearly differentiate between input
lines and output lines. Besides contributing visual clutter, it makes it
difficult for readers to cut-and-paste examples so they can experiment with
variations.
Code Equivalents¶
Giving pure Python code equivalents (or approximate equivalents) can be a useful adjunct to a prose description. A documenter should carefully weigh whether the code equivalent adds value.
A good example is the code equivalent for
all(). The short 4-line code
equivalent is easily digested; it re-emphasizes the early-out behavior; and it
clarifies the handling of the corner-case where the iterable is empty. In
addition, it serves as a model for people wanting to implement a commonly
requested alternative where
all() would return the specific object
evaluating to False whenever the function terminates early.
A more questionable example is the code for
itertools.groupby(). Its code
equivalent borders on being too complex to be a quick aid to understanding.
Despite its complexity, the code equivalent was kept because it serves as a
model to alternative implementations and because the operation of the “grouper”
is more easily shown in code than in English prose.
An example of when not to use a code equivalent is for the
oct() function.
The exact steps in converting a number to octal doesn’t add value for a user
trying to learn what the function does.
Audience¶
The tone of the tutorial (and all the docs) needs to be respectful of the reader’s intelligence. Don’t presume that the readers are stupid. Lay out the relevant information, show motivating use cases, provide glossary links, and do your best to connect-the-dots, but don’t talk down to them or waste their time.
The tutorial is meant for newcomers, many of whom will be using the tutorial to evaluate the language as a whole. The experience needs to be positive and not leave the reader with worries that something bad will happen if they make a misstep. The tutorial serves as guide for intelligent and curious readers, saving details for the how-to guides and other sources.
Be careful accepting requests for documentation changes from the rare but vocal category of reader who is looking for vindication for one of their programming errors (“I made a mistake, therefore the docs must be wrong …”). Typically, the documentation wasn’t consulted until after the error was made. It is unfortunate, but typically no documentation edit would have saved the user from making false assumptions about the language (“I was surprised by …”).
reStruct.
Paragraphs¶`.
Lists and Quotes¶
List markup is natural: just place an asterisk at the start of a paragraph and
indent properly. The same goes for numbered lists; they can also be
automatically numbered.
Source Code¶:”.-linking markup.
Sections¶, here is a suggested convention:
#with overline, for parts
*with overline, for chapters
=, for sections
-, for subsections
^, for subsubsections
", for paragraphs
Explicit Markup¶
.
Footnotes¶.
Source encoding¶.
Gotchas¶
There are some problems one commonly runs into while authoring reST documents:
Separation of inline markup: As said above, inline markup spans must be separated from the surrounding text by non-word characters, you have to use an escaped space to get around that.
Additional Markup Constructs¶
Sphinx.
Meta-information markup¶
- sectionauthor.
Module-specific markup¶.
- module
This directive marks the beginning of the description of a module, package, or submodule. The name should be fully qualified (i.e. including the package name for submodules)..
- not include the class name, but be nested in a class directive. The generated files will reflect this nesting, and the target identifiers (for HTML output) will use both the class and method name, to enable consistent cross-references. If you describe methods belonging to an abstract protocol such as context managers, use a class directive with a (pseudo-)type name too to make the index entries more informative.
The directives are:
- c:function:type
Describes a C type. The signature should just be the type name.
- c:var
Describes a global C variable. The signature should include the type, such as:
.. c:var:: PyObject* PyClass_Type
- data
Describes global data in a module, including both variables and values used as “defined constants.” Class and object attributes are not documented using this directive.
- exception
Describes an exception class. The signature can, but need not include parentheses with constructor arguments.
- function
Describes a module-level function. The signature should include the parameters, enclosing optional parameters in brackets. Default values can be given if it enhances clarity. For example:
.. function::.
- coroutinefunction
Describes a module-level coroutine. The description should include similar information to that described for
function.
- decorator
Describes a decorator function. The signature should not represent the signature of the actual function, but the usage as a decorator. For example, given the functions
def removename(func): func.__name__ = '' return func def setnewname(name): def decorator(func): func.__name__ = name return func return decorator
the descriptions should look like this:
.. decorator:: removename Remove name of the decorated function. .. decorator:: setnewname(name) Set name of the decorated function to *name*.
There is no
decorole to link to a decorator that is marked up with this directive; rather, use the
:func:role.
- class
Describes a class. The signature can include parentheses with parameters which will be shown as the constructor arguments.
- attribute
Describes an object data attribute. The description should include information about the type of the data to be expected and whether it may be changed directly. This directive should be nested in a class directive, like in this example:
.. class:: Spam Description of the class. .. attribute:: ham Description of the attribute.
If is also possible to document an attribute outside of a class directive, for example if the documentation for different attributes and methods is split in multiple sections. The class name should then be included explicitly:
.. attribute:: Spam.eggs
- method
Describes an object method. The parameters should not include the
selfparameter. The description should include similar information to that described for
function. This directive should be nested in a class directive, like in the example above.
- coroutinemethod
Describes an object coroutine method. The parameters should not include the
selfparameter. The description should include similar information to that described for
function. This directive should be nested in a
classdirective.
- decoratormethod
Same as
decorator, but for decorators that are methods.
Refer to a decorator method using the
:meth:role.
- staticmethod
Describes an object static method. The description should include similar information to that described for
function. This directive should be nested in a
classdirective.
- classmethod
Describes an object class method. The parameters should not include the
clsparameter. The description should include similar information to that described for
function. This directive should be nested in a
classdirective.
- abstractmethod
Describes an object abstract method. The description should include similar information to that described for
function. This directive should be nested in a
classdirective.
- cmdoption
Describes a Python. By
code-blockdirective can be used to specify the highlight language of a single code block, e.g.:
.. code-block:: c #include <stdio.h> void main() { printf("Hello world!\n"); }
The values normally used for the highlighting language are:
python(the default)
c
rest
none(no highlighting).
Inline markup¶.:
- c:data
The name of a C-language variable.
- c:func
The name of a C-language function. Should include trailing parentheses.
- c:macro
The name of a “simple” C macro, as defined above.
- c:type
The name of a C-language type.
- c:member
The name of a C type member, as defined above.
The following roles do not refer to objects, but can create cross-references or internal links:
- envvar
An environment variable. Index entries are generated.
- keyword
The name of a Python keyword. Using this role will generate a link to the documentation of the keyword.
True,
Falseand
Nonedo not use this role, but simple code markup (
``True``), given that they’re fundamental to the language and should be known to any programmer.
- option
A command-line option of Python. The leading hyphen(s) must be included. If a matching
cmdoptiondirective exists, it is linked to. For options of other programs or scripts, use simple
``code``markup.
-:
-:
``spam`` is installed in :file:`/usr/lib/python2.{x}/site-packages` ...
In the built documentation, the
xwill be displayed differently to indicate that it is to be replaced by the Python minor version.
- guilabel)`.
- menuselection:.
If you don’t need the “variable part” indication, use the standard
``code``instead.
The following roles generate external links:
- pep
A reference to a Python Enhancement Proposal. This generates appropriate index entries. The text “PEP number” is generated; in the HTML output, this text is a hyperlink to an online copy of the specified PEP. Such hyperlinks should not be a substitute for properly documenting the language in the manuals.
-.
Alternatively, you can reference any label (not just section titles)
if you provide the link text
:ref:`link text <reference-label>`.
Paragraph aware of when using whatever bit of API the warning pertains to. The content of the directive should be written in complete sentences and include all appropriate punctuation. In the interest of not scaring users away from pages filled with warnings, this directive should only be chosen over
notefor information regarding the possibility of crashes, data loss, or security implications.
- versionadded
This directive documents the version of Python which added the described feature, or a part of it, to the library or C API. When this applies to an entire module, it should be placed at the top of the module section before any prose.
The first argument must be given and is the version in question. The second argument is optional and can be used to describe the details of the feature.
Example:
.. versionadded:: 3.5
- versionchanged
Similar to
versionadded, but describes when and what changed in the named feature in some way (new parameters, changed side effects, platform support, etc.). This one must have the second argument (explanation of the change).
Example:
.. versionchanged:: 3.1 The *spam* parameter was added.
Note that there should be no blank line between the directive head and the explanation; this is to make these blocks visually continuous in the markup.
- deprecated
Indicates the version from which the described feature is deprecated.
There is one required argument: the version from which the feature is deprecated.
Example:
.. deprecated:: 3.8
- deprecated-removed
Like
deprecated, but it also indicates in which version the feature is removed.
There are two required arguments: the version from which the feature is deprecated, and the version in which the feature is removed.
Example:
.. deprecated-removed:: 3.8 4.0
- impl-detail.
- strings datatypes numeric ,
stringsand. The builtin entry type is slightly different in that “built-in function” is used in place of “builtin” when creating the two entries.
For index directives containing only “single” entries, there is a shorthand notation:
.. index:: BNF, grammar, syntax, notation
This creates four index entries.directive`
Building the documentation¶
The toolset used to build the docs is written in Python and is called Sphinx.
Sphinx is maintained separately and is not included in this tree. Also needed
are blurb, a tool to create
Misc/NEWS on demand; and
python-docs-theme, the Sphinx theme for the Python documentation.
To build the documentation, follow the instructions from one of the sections
below. You can view the documentation after building the HTML by pointing
a browser at the file
Doc/build/html/index.html.
You are expected to have installed the latest stable version of
Sphinx and blurb on your system or in a virtualenv (which can be
created using
make venv), so that the Makefile can find the
sphinx-build command. You can also specify the location of
sphinx-build with the
SPHINXBUILD make variable.
Using make / make.bat¶
On Unix, run the following from the root of your repository clone to build the output as HTML:
cd Doc make venv make html
or alternatively
make -C Doc/ venv html.
You can also use
make help to see a list of targets supported by
make. Note that
make check is automatically run when
you submit a pull request, so you should make
sure that it runs without errors.
On Windows, a
make.bat batchfile tries to emulate make
as closely as possible, but the venv target is not implemented, so you will
probably want to make sure you are working in a virtual environment before
proceeding, otherwise all dependencies will be automatically installed on your
system.
When ready, run the following from the root of your repository clone to build the output as HTML:
cd Doc make html
You can also use
make help to see a list of targets supported by
make.bat.
See also
Doc/README.rst for more information.
Using sphinx-build¶
Sometimes we directly want to execute the sphinx-build tool instead of through make (although the latter is still the preferred way). In this case, you can use the following command line from the Doc directory (make sure to install Sphinx, blurb and python-docs-theme packages from PyPI):
sphinx-build -b<builder> . build/<builder>
where
<builder> is one of html, text, latex, or htmlhelp (for explanations
see the make targets above).
Translating¶
Python documentation translations are governed by the PEP 545, they are built by docsbuild-scripts and hosted on docs.python.org. There are several documentation translations already in production, other are work in progress:
Starting a new translation¶
First subscribe to the doc-sig mailing list, and introduce yourself and the translation you’re starting.
Then you can bootstrap your new translation by using our cookiecutter.
The important steps looks like this:
Create the github repo (anywhere), with the right hierarchy (using the cookiecutter).
Gather people to help you translating, you can’t do it alone.
You can use any tool to translate, as long as you can synchronize with git. Some are using Transifex, some are using only github, or you can choose another way, it’s up to you.
Ensure we updated this page to reflect your work and progress, either via a PR, or by asking on the doc-sig mailing list.
When
tutorial/,
bugs.pyand
library/functionsare complete, ask on doc-sig for your language to be added in the language picker on docs.python.org.
PEP 545 summary:¶
Here are the essential points of PEP 545:
Each translation is assigned an appropriate lowercased language tag, with an optional region subtag, joined with a dash, like
pt-bror
Each translation is under CC0 and marked as so in the README (as in the cookiecutter).
Translations files are hosted on-{LANGUAGE_TAG}(not mandatory to start a translation, but mandatory to land on
docs.python.org).
Translations having completed
tutorial/,
library/stdtypesand
library/functionsare hosted on{LANGUAGE_TAG}/{VERSION_TAG}/.
How to get help¶
Discussions about translations occur on the doc-sig mailing list,
and there’s a Libera.Chat IRC channel,
#python-doc.
Translation FAQ¶
Which version of Python documentation should be translated?¶
Consensus is to work on current stable, you can then propagate your translation from a branch to another using pomerge.
Are there some tools to help in managing the repo?¶
Here’s what’s we’re using:
How a coordinator is elected?¶
There is no election, each country have to sort this out. Here are some suggestions.
Coordinator requests are to be public on doc-sig mailing list.
If the given language have a native core dev, the core dev have its word on the choice.
Anyone who wants to become coordinator for its native language, and shows motivation by translating and building a community, will be named coordinator.
In case of concurrency between two persons, no one will sort this out for you. It is up to you two to organize a local election or whatever needed to sort this out.
In case a coordinator become inactive or unreachable for a long period of time, someone else can ask for a takeover on doc-sig.
The entry for my translation is missing/not up to date on this page¶
Ask on doc-sig, or better, make a PR on the devguide.
I have a translation, but not on git, what should I do?¶
You can ask for help on the doc-sig mailing list, and the team will help you create an appropriate repository. You can still use tools like transifex, if you like.
My git hierarchy does not match yours, can I keep it?¶
No, inside the
github.com/python organization we’ll all have the
exact same hierarchy so bots will be able to build all of our
translations. So you may have to convert from one hierarchy to another.
Ask for help on the doc-sig mailing list if you’re not sure on how to do
it.
What hierarchy should I use in my github repository?¶
As for every project, we have a branch per version. We store
po
files in the root of the repository using the
gettext_compact=0
style.
Every explicit markup block which isn’t a valid markup construct (like the footnotes above) is regarded as a comment. | https://devguide.python.org/documenting/ | CC-MAIN-2021-39 | refinedweb | 3,516 | 56.25 |
AWS and Computing Clusters and MPI
Just been curious about parallel computation. Clusters. Gives me a little nerd hard-on.
Working my way up to running some stuff on AWS (Amazon Web Services).
So I’ve been goofing around with mpi. Mpi (message passing interface) is sort of an instant messager for programs to pass around data . It’s got some convenient functions but its mostly pretty low level.
I’ll jot some fast and incomplete notes and examples
Tried to install mpi4py.
sudo pip install mpi4py
but it failed, first had to install openmpi
To install on Mac I had to follow these instructions here. Took about 10 minutes to compile
so mpi4py
give this code a run
#mpirun -np 3 python helloworld.py from mpi4py import MPI comm = MPI.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() name = MPI.Get_processor_name() print "Hello. This is rank " + str(rank) + " of " + str(size) + " on processor " + name`
the command mpirun runs a couple instances. You know which instance you are by checking the rank number which in this case is 0 through 2.
Typically rank 0 is some kind of master
lower case methods in mpi4py work kind of like how you’d expect. You can communicate between with comm.send and comm.recv
#mpirun -np 2 python helloworld.py from mpi4py import MPI comm = MPI.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() name = MPI.Get_processor_name() if rank == 0: comm.send("fred",dest=1) else: counter = comm.recv(source=0) print counter
However I think the these are toy methods. Apparently they use pickle (python’s fast and dirty file storage library) in the background. On the other hand, maybe since you’re writing in python anyhow, you don’t need the ultimate in performance and just want things to be easy. On the third hand, why are you doing parallel programming if you want things to be easy? On the fourth hand, maybe you
The capital letter mpi functions are the ones that are better, but they are not pythony. They are direct translations of the C api which uses no returns values. Instead you pass along pointers to the variables you want to be filled.
from mpi4py import MPI import numpy as np comm = MPI.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() name = MPI.Get_processor_name() nprank = np.array(float(rank)) result = np.zeros(1) comm.Reduce(nprank, result, op=MPI.SUM, root=0) if rank == 0: print result | https://www.philipzucker.com/aws-and-computing-clusters-and-mpi/ | CC-MAIN-2021-39 | refinedweb | 407 | 79.36 |
SETNS(2) Linux Programmer's Manual SETNS(2)
setns - reassociate thread with a namespace
program below takes two or more arguments. The first argument specifies the pathname of a namespace file in an existing /proc/[pid]/ns/ directory. The remaining arguments specify a command and its arguments. The program opens the namespace file, joins that namespace using setns(), and executes the specified command inside that namespace. The following shell session demonstrates the use of this program (compiled as a binary named ns_exec) in conjunction with the CLONE_NEWUTS example program in the clone(2) man page (complied as a binary named newuts). We begin by executing the example program in clone(2) in the background. That program creates a child in a separate UTS namespace. The child changes the hostname in its namespace, and then both processes display the hostnames in their UTS namespaces, so that we can see that they are different. $ su # Need privilege for namespace operations Password: # ./newuts bizarro & [1] 3549 clone() returned 3550 uts.nodename in child: bizarro uts.nodename in parent: antero # uname -n # Verify hostname in the shell antero We then run the program shown below, using it to execute a shell. Inside that shell, we verify that the hostname is the one set by the child created by the first program: # ./ns_exec /proc/3550/ns/uts /bin/bash # uname -n # Executed in shell started by ns_exec bizarro
#define _GNU_SOURCE #include <fcntl.h> #include <sched.h> #include <unistd.h> #include <stdlib.h> #include <stdio.h> #define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \ } while (0) int main(int argc, char *argv[]) { int fd; if (argc < 3) { fprintf(stderr, "%s /proc/PID/ns/FILE cmd args...\n", argv[0]); exit(EXIT_FAILURE); } fd = open(argv[1], O_RDONLY); /* Get descriptor for namespace */ if (fd == -1) errExit("open"); if (setns(fd, 0) == -1) /* Join that namespace */ errExit("setns"); execvp(argv[2], &argv[2]); /* Execute a command in namespace */ errExit("execvp"); }
clone(2), fork(2), vfork(2), proc(5), unix(7)
This page is part of release 3.51 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. Linux 2013-01-01 SETNS(2)
HTML rendering created 2013-05-17 by Michael Kerrisk, author of The Linux Programming Interface, maintainer of the Linux man-pages project
Hosting by jambit GmbH | http://man7.org/linux/man-pages/man2/setns.2.html | CC-MAIN-2013-20 | refinedweb | 391 | 56.45 |
Hello!
Not that I really have the hardware, but it breaks my "allyesconfig" build.
So here is this compile fix for ipmi driver in current 2.4 bk tree.
(I see that Alan have some similarly named fix in his tree and
actually there is whole new version of the driver on the net somewhere,
but it is unclear when it is planned to be pushed to 2.4 tree,
so I'd better post this now ;) ).
Bye,
Oleg
===== drivers/char/ipmi/ipmi_kcs_intf.c 1.3 vs edited =====
--- 1.3/drivers/char/ipmi/ipmi_kcs_intf.c Sat May 24 01:12:48 2003
#include <linux/acpi.h>
/* A real hack, but everything's not there yet in 2.4. */
-#define COMPILER_DEPENDENT_UINT64 unsigned long
-#include <../drivers/acpi/include/acpi.h>
-#include <../drivers/acpi/include/actypes.h>
+#include <acpi/acpi.h>
+#include <acpi/actypes.h>
+#include <acpi/actbl.h>
struct SPMITable {
static unsigned long acpi_find_bmc(void)
{
acpi_status status;
- acpi_table_header *spmi;
+ struct acpi_table_header *spmi;
static unsigned long io_base = 0;
if (io_base != 0)
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
More majordomo info at
Please read the FAQ at | http://www.verycomputer.com/180_58fe7e7bc1cd6348_1.htm | CC-MAIN-2022-21 | refinedweb | 189 | 61.12 |
#include "async_fetch.h"
Creates an AsyncFetch object using an existing AsyncFetcher*, sharing the response & request headers, and by default delegating all 4 Handle methods to the base fetcher. Any one of them can be overridden by inheritors of this class, but to propagate the callbacks to the base-fetch, overrides should upcall this class, e.g. SharedAsyncFetch::HandleWrite(...).
Indicates whether the request is a background fetch. These can be scheduled differently by the fetcher.
Reimplemented from net_instaweb::AsyncFetch..
Reimplemented in net_instaweb::ProxyFetch.
Returns the request context associated with this fetch, if any, or NULL if no request context exists.
Reimplemented from net_instaweb::AsyncFetch. | http://modpagespeed.com/psol/classnet__instaweb_1_1SharedAsyncFetch.html | CC-MAIN-2017-09 | refinedweb | 103 | 50.84 |
AWS recently announced the public preview of Serverless Application Model (SAM) support for CDK. SAM is an open-source framework that can be used to build, test and deploy serverless applications on AWS. It provides a Lambda-like execution environment that lets you locally build, test, and debug applications. Previously this could only be defined by SAM templates but now it is also possible through the AWS Cloud Development Kit (CDK)!
I will guide you through a small demo project to demonstrate how to build a serverless application with AWS CDK and test it locally with AWS SAM.
We will build a simple REST API which shows the current bid or ask price of a certain cryptocurrency on Binance (exchange), expressed in the value of Bitcoin.
The API expects two query parameters:
- coin: (ETH, DOG, LINK, DOT, ...)
- type: (bid or ask price)
Example of the API call:
$ curl "" {"coin": "ETH", "price": 0.066225}
The setup in AWS will also be pretty straight forward.
We will set up a Lambda proxy integration in API Gateway
To get started, we need to install the AWS CDK CLI and create a new CDK project. I use Python as client language.
$ npm install -g aws-cdk $ cdk init app --language python
The project structure looks like this:
. ├── README.md ├── app.py ├── cdk.json ├── requirements.txt ├── sam_cdk_demo │ ├── __init__.py │ └── sam_cdk_demo_stack.py └── setup.py
The file
sam_cdk_demo/sam_cdk_demo_stack.py should contain our code to define the AWS cloud resources we need but first let's start with writing our Lambda.
Create a folder inside the root of the project called "lambda" and add a
handler.py. The ccxt library is used by our Lambda to interact with the Binance API. The Lambda itself is very basic on purpose.
import ccxt import json # use CCXT library to connect with Binance API exchange = getattr(ccxt, 'binance')({ 'timeout': 3000, 'enableRateLimit': True }) def get_current_price(coin_name, price_type): # fetch latest ticker data for coin pair xxx/BTC ticker = exchange.fetch_ticker('{}/BTC'.format(coin_name)) # get ask/bid price from ticket data current_price = ticker[price_type] return current_price def lambda_handler(event, context): # get values from query string parameters coin = event['queryStringParameters']['coin'] price = event['queryStringParameters']['type'] # CCXT exchange expects coin in uppercase valid_coin = coin.upper() # get current price based on coin name and price type (ask/bid) current_price = get_current_price(valid_coin, price) return { 'statusCode': 200, 'headers': { 'Content-Type': 'application/json' }, 'body': json.dumps({ 'coin': valid_coin, 'price': current_price, }) }
Don't forget to add a
requirements.txt inside the folder to make the ccxt library available to the Lambda.
ccxt==1.50.13
The Lambda is ready! Now we will use AWS CDK to define our AWS infrastructure. We need to deploy the Lambda and create an API Gateway in front of it. Update the file
demo/demo_stack.py. We keep the code pretty basic again.
from aws_cdk import ( aws_lambda as _lambda, aws_apigateway as apigw, core, ) class CdkLambdaSamStack(core.Stack): def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) # creating Lambda function that will be triggered by the API Gateway get_price_handler = _lambda.Function(self,'CryptoFunction', handler='handler.lambda_handler', runtime=_lambda.Runtime.PYTHON_3_8, code=_lambda.Code.asset('lambda'), timeout=core.Duration.seconds(30), ) # create REST API api = apigw.RestApi(self, 'crypto-api') # add resource /crypto resource = api.root.add_resource('crypto') # create Lambda integration get_crypto_integration = apigw.LambdaIntegration(get_price_handler) # add method which requires two query string parameteres (coin and type) resource.add_method( http_method='GET', integration=get_crypto_integration, request_parameters={ 'method.request.querystring.coin': True, 'method.request.querystring.type': True }, request_validator_options=apigw.RequestValidatorOptions(validate_request_parameters=True) )
Update the
requirements.txt in the project root with the necessary modules.
aws-cdk.core aws-cdk.aws_lambda aws-cdk.aws_apigateway
Start the Python virtual environment which is created by CDK and install the modules.
$ source .venv/bin/activate (.venv)$ pip3 install -r requirements.txt
We will use AWS SAM to test our setup locally. It's important to mention that you need to have Docker installed. We will use Docker to build our code. The Lambda will also run inside as a Lambda-like Docker container.
Prepare the deployment artifact.
(.venv)$ sam-beta-cdk build --use-container
Start the local API Gateway.
$ sam-beta-cdk local start-api ... * Running on
We can use a tool like Postman (or
curl or just your browser) to perform calls against our API.
It takes a few seconds to execute the function because AWS SAM is spinning up a Docker container to execute our code. After the execution the container is destroyed.
When everything looks fine we can deploy it to AWS.
(.venv)$ cdk bootstrap (.venv)$ cdk deploy -a .aws-sam/build
Now test against the deployed API.
We were able to test our API and Lambda using the new Serverless Application Model integration with CDK! You can find all code on my GitHub. Be aware that this feature is in preview. Feel free to do more extensive testing. You can report bugs and submit feature requests to the SAM opensource repository.
Top comments (1)
This is awesome, I'll have to give it a try in TypeScript/JavaScript! | https://dev.to/aws-builders/build-serverless-applications-using-cdk-and-sam-4oig | CC-MAIN-2022-40 | refinedweb | 843 | 51.14 |
This is the fourth part in a series of blog posts (read part 1, part 2 and part 3)
giving some practical examples of lambdas, how functional programming
in Java could look like and how lambdas could affect some of the well
known libraries in Java land. This part describes shortly the changes
of a new proposal, that has been published while writing this series,
and how it changes some of the examples in this series.
The in lambdas has different semantics than before, break and continue aren't allowed in lambdas, and instead of return, yield will automatically in this case. But despite the fileFilter method function from part 3 again:
public <S> List<S> map(#S(T) f) { List<S> result = new ArrayList<S>(); for (T e : this) { result.add(f.(e)); } return result; }We need to replace its function type argument f by a SAM type. We could start by writing an interface Mapper with a single method map. This would be an interface especially for usage with the map function. But having in mind, that there are probably many more functions similar to our map function, we could create a more generic Function interface, or more specifically an interface Function1<X1,Y> which represents a function that takes exactly one argument of type X1 and returns a value of type Y.
public interface Function1<X1,Y> { Y apply(X1 x1); }With this we could change our map function taking a Function1 insteadLine method, for a function that takes no arguments, Function1 for a function that takes one argument and so on (up to Function23, then Java would have more than Scala, yay!). Further complexity is removed by the stronger type inferencing capabilities.
From
J Szy replied on Sat, 2010/07/24 - 3:38pm
Gregory Smith replied on Sat, 2010/07/24 - 11:05pm
Thank you, Nick, for the fantastic series of articles on Lambdas in Java. I am very excited and optimistic about all of the changes coming in jdk 1.7. I really like your idea of having generic function SAM types in the standard libraries.
I am hopeful that Lambdas will placate most (ok, a few) of Java's critics while not over-complicating the language or making it too cryptic. But I confess to having a few misgivings.
Most of the Java programmers I know don't even have a firm grip on Generics yet. Are we sure that it's a good idea to introduce another strange new syntax into the language? I think one of Java's greatest strength's is its clarity when compared to most other programming languages. I hope that whatever form Lambda syntax finally takes will preserve that clarity.
I know that reducing boilerplate code is one of the driving motivations for these changes. If you are using vi as your primary editor, I can see how the verbosity of many Java constructs would be a problem. But for most of us, code completion and other features in modern IDEs have eliminated this as a burden.
I'm also not so sure if type inferencing is such a good idea. It's great that the compiler can infer the type, but can a human programmer who is attempting to decipher the contractor-written gobbledygook that has just been dumped in his lap infer them so easily? (This may be an opportunity for another IDE feature to help programmers – automatic type inferencing!)
I think most of what you have done here is already doable with the current syntax, albeit somewhat more verbosely (see example below). Have I missed something?
/****************************************/
public interface Function1<X,Y> {
public Y apply(X x);
}
/****************************************/
public class MapToString implements Function1<Integer,String> {
@Override
public String apply(Integer x) {
return "Hello World".substring(x);
}
}
/****************************************/
public class MapToDate implements Function1<Integer,Date> {
@Override
public Date apply(Integer x) {
GregorianCalendar gc = new GregorianCalendar();
gc.add(Calendar.DAY_OF_WEEK, x);
return gc.getTime();
}
}
/****************************************/
public interface MappableList<T> extends List<T> {
public <S> List<S> map(Function1<T,S> f1);
}
/****************************************/
public class SimpleMappableList<T> extends ArrayList<T> implements MappableList<T> {
...
public <S> List<S> map(Function1<T,S> f1) {
List<S> result = new ArrayList<S>();
for (T e : this) {
result.add(f1.apply(e));
}
return result;
}
}
/****************************************/
Integer[] intarray = {2,4,2,5};
MappableList<Integer> list = new SimpleMappableList<Integer>(intarray);
List<String> resultS = list.map( new MapToString() );
System.out.println(resultS.toString());
List<Date> resultD = list.map( new MapToDate() );
System.out.println(resultD.toString());
Bob Smith replied on Sun, 2010/07/25 - 7:39pm
I can't understand why there's any doubt that *some form* of closures should be added to Java. All modern programming languages have closures. Even C (blocks) and C++ (C++0x) have them now for God's sake!
Annonymous inner classes just don't cut it anymore.
If you're not convinced, look at the examples of the Fork-Join framework with and without closures. Look at the collections examples with and without closures. Look at what Apple's been able to accomplish with C blocks, namely, Grand Central Dispatch. The Fork-Join framework can be even better than Grand Central Dispatch, but it's simply very onerous to use without closures.
Even the amended closures proposal seems to be a little too close to Josh Block's CICE for my liking. The whole point of closures (as opposed to annonymous inner classes) is to be able to pass functions to methods and execute them within methods. This latest proposal still requires some boilerplate, namely, overriding Function (or whatever the main closure interface turns out to be).
Yeah, Generics was probably implemented with too much complexity, but anybody who argues that generics have made Java too complex simply doesn't understand their power, and the design patterns that can be implemented with them.
Neil Gafter can explain the arguments in favour of closures better than I can:
Oh, and the argument to use a newer JVM language like Scala really isn't going to hold water in the enterprise, especially not since tooling for Scala just isn't there yet.
J Szy replied on Tue, 2010/07/27 - 4:34am
in response to:
Bob Smith
I have no doubt, I'm strongly convinced that any form of closures should be left for more "dynamic" languages (read SLDJ). At least any that's to leave now, since Java already has too much of it.
Yes, I know that most less successful languages have closures and I really like it this way.
They never did. Unfortunately it's too late to drop them.
This argument falls on two counts:
1. The examples are designed with specific purpose of demonstrating advantages of the technique shown;
2. The examples are too short to provide any insight of possible negative impact the proposed change could have on larger, long maintained code bases.
Maybe, but the code using it will be very onerous to maintain with closures.
Ease of code maintenance is far more important than of writing it. Closures would encourage writing more "write once, never dare touch again" code which is exactly what Java should avoid.
It is indeed, but that doesn't necessarily mean that it is good to have that possibility.
This, or they have spent too much time looking into Generics FAQ. Oh, and I wouldn't overestimate their power either. More than once have I concluded (with help from the FAQ) that they aren't powerful enough to achieve what I wanted to. Yes, I think that Generics are too complex for what they provide.
Alex(JAlexoid) ... replied on Wed, 2010/07/28 - 11:23am
in response to:
J Szy
Since you can always compile you code using older Java language specification, this point is moot. And your whole position on Java smells of conservatism, with no constructive insight. Please bring something to the table that resembles progress so that the side that is more active in this can actually see what you are proposing.
Java language has not been essentially changed in over 6 years now. API's have changed, but not language. That starts to smell like COBOL.
J Szy replied on Thu, 2010/07/29 - 3:56am
J Szy replied on Thu, 2010/07/29 - 3:56am
in response to:
Alex(JAlexoid) Panzin
RLY? Can maintenance guys maintain your code using older Java specs as well?
Not really. If you read my older comments you'd see that there are a few things I'd like to see in Java that aren't there. But since the post was about closures I write about closures.
About closures? I propose no change.
That's not necessarily bad. While there are a few things I think Java could have, I believe that changing just for the sake of change is nonsense.
And I don't see many businesses switching to Ruby or any other SLDJ just to have frequent changes in language.
Sometimes even APIs have changed too much (like JDBC in J6). This probably means that any change should be really thought of with emphasis on possible negative consequences.
Could you be less emotional please? Calling me a conservatist, or writing about COBOL doesn't really bring anything positive into the discussion and I wouldn't like it degraded to a flame war.
Rehman Khan replied on Sat, 2012/02/25 - 4:01am | http://java.dzone.com/articles/lambdas-java-preview-part-4 | CC-MAIN-2014-42 | refinedweb | 1,559 | 62.68 |
A of life without if-statement or loop. In one of the sessions we tried to implement the the solution using a rules based approach. The sessions were not long enough to totally finish the code. It was more to see how a particular approach work. By the way it was amazing how many ways there are to code this simple program. Anyway I was fascinated by this rules based approach so later I tried this approach on the kata Fizzbuzz.
Fizzbuzz
Fizzbuzz is simple: Write a method that prints the number you put in except print “fizz” if the number is dividable by 3, and “buzz” if it is dividable by 5, and “fizzbuzz” if it is dividable by 3 and 5.
Simple Solution
The first solution that comes to mind would probably be something like this:
public class FizzBuzzer { public static string Evaluate(int number) { if (number % 15 == 0) { return "fizzbuzz"; } else if (number % 3 == 0) { return "fizz"; } else if (number % 5 == 0) { return "buzz"; } else { return number.ToString(); } } }
For this simple requirement this solution is completely fine. However what would you do if you wanted to be able to add new rules to the program without having to modify places where the exiting rules are coded?
Rule Interface: IRule
One could hide the behavior of a rule behind an interface. DoesApply() tells if the rule applies. Apply() applies the rule and executes it.
public interface IRule { string Apply(int number); bool DoesApply(int number); }
The implementation of this interface for the Fizz-rule looks like this:
public class FizzRule : IRule { public bool DoesApply(int number) { return number % 3 == 0; } public string Apply(int number) { return "fizz"; } }
Basically the if part of the first solution has moved to DoesApply() and the “calculation” has moved to Apply().
How to Use the Rules
Now we could create a list with all the rules and apply the rules in a foreach loop.
public class FizzBuzzer { private List<IRule> _rules; public FizzBuzzer() { _rules = new List<IRule>(); _rules.Add(new FizzBuzzRule()); _rules.Add(new FizzRule()); _rules.Add(new BuzzRule()); _rules.Add(new CatchAllRule()); } public string Evaluate(int number) { foreach (var rule in _rules) { if (rule.DoesApply(number)) { return rule.Apply(number); } } return null; }
Chain of Responsibility Pattern
However the constraint says no if and no loop. Lets use the Chain of Responsibility pattern to get rid of the loop.
SetNextHandler() is used to create the chain of concrete handlers and HandleRequest() uses the methods in IRules interface to either handle the request directly or pass it to the next handler.
public abstract class Handler : IRule { Handler _nextHandler; public void SetNextHandler(Handler nextHandler) { _nextHandler = nextHandler; } public string HandleRequest(int number) { return DoesApply(number) ? Apply(number) : _nextHandler.HandleRequest(number); } // IRules interface public abstract string Apply(int number); public abstract bool DoesApply(int number); }
Because the class hierarchy changed a bit the rules are renamed to handlers and derive from Handler. Here’s the FizzHandler as example.
public class FizzHandler : Handler { public override bool DoesApply(int number) { return number % 3 == 0; } public override string Apply(int number) { return "fizz"; } }
Now we need to construct the objects which is done in the class FizzBuzzer. Here it is:
public class FizzBuzzer { private Handler _firstHandler = null; public FizzBuzzer() { AddHandler(new CatchAllHandler()); AddHandler(new BuzzHandler()); AddHandler(new FizzHandler()); AddHandler(new FizzBuzzHandler()); } private void AddHandler(Handler previousHandler) { previousHandler.SetNextHandler(_firstHandler); _firstHandler = previousHandler; } public string Evaluate(int number) { return _firstHandler.HandleRequest(number); } }
You can find the complete FizzBuzz solution on Github. | http://www.huwyss.com/fundamentals/fizzbuzz-without-if-or-loop | CC-MAIN-2019-47 | refinedweb | 579 | 53.92 |
Java is a very powerful language. Python is a very easy and very efficient language. Isn’t it a pity that both of these languages remain separate? Or do they? Let me introduce Jython, a technology capable of merging Python and Java together to create efficient and powerful applications.
Jython is an implementation of Python in Java. With it, you can embed Python directly into your Java applets and applications, or you can use Java classes in Python, along with the modules of the standard library that Jython supports (at the time of writing this, the number of supported modules is a bit limited).
The source of Jython is also freely availible, just like Python’s source.
{mospagebreak title=Installing Jython}
Jython may be obtained from the Jython website:
Currectly, the latest version is Jython 2.1. Download it and then move to the directory where it is installed. Next, open the file with your Java interpreter. For example:
java jython-21
The installation program will be presented to you, and installing it from there should be pretty straightforward. If you’re using Jython 2.1 in conjunction with Java 1.4 or later, be sure to include the source in your installation. Move to the directory where you installed Jython and open up “org/python/core/Py.java” in your favorite text editor. Replace every appearance of “assert” with “a_ssert” and ever appearance of “enum” with “e_num”. Do the same with “org/python/compiler/CodeCompiler.java” and “org/python/parser/PythonGrammar.java”. Note that the latter will require you to replace “e_numeration” to “Enumeration” when you are done. Jython 2.1 uses the two terms, which were set as keywords in Java 1.4.
Once you’re done, move to the directory where you installed Jython and start up the Jython interactive interpreter:
jython
If everything has gone correctly, you should see something that looks a lot like the Python interactive interpreter. Let’s test it out by executing some Python code:
>>> print ‘Hello World!’
Hello World!
>>> import math
>>> print math.sqrt ( 25 )
5.0
>>> someList = [ 5, 3, 4, 2 ]
>>> print someList [ 0 ]
5
>>> print math
(module ‘math’ (built-in))
As you can see, things work exactly the same as they do in Python.
{mospagebreak title=Calling Java Classes}
As I mentioned earlier, it is possible to call Java classes in Jython, which makes Jython extremely powerful. Anything that can be done in Java can be done in Jython using Python’s easy syntax. Development time can also be significantly cut by accessing Java classes through Jython.
Let’s play around with Java a bit. Using Swing in Java, it is fairly easy to create a simple dialog bearing a short message:
import javax.swing.JOptionPane;
class testDialog {
public static void main ( String[] args ) {
javax.swing.JOptionPane.showMessageDialog ( null, “This is a test.” );
}
}
Using Jython, however, we can cut the amount of characters involved quite a bit. Save the following script, and pass its filename to the Jython interpreter:
import javax.swing.JOptionPane
javax.swing.JOptionPane.showMessageDialog ( None, “This is a test.” )
As you can see, Python’s syntax is used, but we’re using a Java package. We don’t have to compile anything, either. This speeds up the development process quite a bit, since we don’t have to wait for the Java compiler each time we fix a small bug in our applications.
If you do not see the advantage to using Jython yet, let’s convert a larger application from Java to Jython. We’ll create a die roller in Java. The user will be able to specify how many sides the die has and select how many times he or she wishes to roll the die. The application will then generate random values within the appropriate ranges and present the results to the user.
import java.util.Random;
import javax.swing.JOptionPane;
class DieRoller {
public static void main ( String[] args ) {
System.out.println ( “Die Roller” );
System.out.println ( “- – – – -” );
query();
}
public static void query () {
String sidesS = JOptionPane.showInputDialog ( null, “How many sides should the die have?” );
String rollsS = JOptionPane.showInputDialog ( null, “How many times should we roll the die?” );
try {
int sides = Integer.parseInt ( sidesS );
int rolls = Integer.parseInt ( rollsS );
roll( sides, rolls );
} catch ( Exception e ) {
JOptionPane.showMessageDialog ( null, “Error!” );
}
}
public static void roll( int sides, int rolls ) {
int current = 1;
Random rand = new java.util.Random();
while ( current <= rolls ) {
System.out.println ( “Roll ” + current + “: ” + rand.nextInt ( sides ) );
current++;
}
}
}
Using Jython, we can reduce the number of lines in the application while accomplishing the same exact product:
import java.util.Random;
import javax.swing.JOptionPane;
print ‘Die Roller’
print ‘- – – – -‘
try:
sides = int ( javax.swing.JOptionPane.showInputDialog ( None, ‘How many sides should the die have?’ ) )
rolls = int ( javax.swing.JOptionPane.showInputDialog ( None, ‘How many times should we roll the die?’ ) )
rand = java.util.Random()
for current in range ( rolls ):
print ‘Roll ‘ + str ( current + 1 ) + ‘: ‘ + str ( rand.nextInt ( sides ) )
except:
javax.swing.JOptionPane.showMessageDialog ( None, ‘Error!’ )
Notice how we replace the while loop with Python’s for loop, which is more appropriate in this situation. In my opinion, the ability to use Python’s for loop is a major advantage with Jython.
Of course, you are not restricted to building applications with Java. You can build servlets, beans and applets, too. Let’s build a simple applet in Java to work with:
import java.applet.Applet;
class TestApplet {
public void paint ( Graphics g ) {
g.drawSring ( “A script a day keeps the doctor away.”, 5, 5 );
}
}
Converting the application to Jython is very easy, and, again, we are left with the same product:
import java.applet.Applet;
class TestApplet ( java.applet.Applet ):
def paint ( self, g ):
g.drawString ( “A script a day keeps the doctor away.”, 5, 5 )
Of course, we’ll need to compile the applet before we can use it. This can be easily done:
jythonc –-deep –-core –-compiler <compiler path> TestApplet.py
Compiling an application is just as easy.
It’s also very possible and very simple to subclass an existing Java class. Subclassing a Java class is done in exactly the same way as subclassing a Python class. Let’s modify the nextInt method of java.util.Random to return a string rather than an integer. To do the random number generation, we’ll call the superclass’s method:
>>> import java.util.Random
>>> class StringRandom ( java.util.Random ):
… def nextInt ( self ):
… return str ( java.util.Random.nextInt ( self ) )
…
>>> x = StringRandom()
>>> x.nextInt()
‘834361961’
>>> x.nextInt()
‘159629831’
>>> x.nextInt()
‘-1197591800′
Since adding an attribute to an existing Java object is not possible in Jython, you are forced to create a subclass in Jython if you wish to add additional attributes:
>>> import java.util.Random
>>> rand = java.util.Random()
>>> rand.doesNotExist = 5
Traceback (innermost last):
File “<console>”, line 1, in ?
TypeError: can’t set arbitrary attribute in java instance: doesNotExist
>>> class DumbyRandom ( java.utilRandom ):
… pass
…
>>> rand = DumbyRandom()
>>> rand.doesNotExist = 5
>>> print rand.doesNotExist
5
>>> print rand.nextInt()
-1884599813
As you can see, the DumbyRandom class functions exactly the same as the Random class, except for the fact that we can add attributes to objects created from it.
{mospagebreak title=Embedding}
Jython allows us to embed Python code in Java code easily, and there are several approaches to it. The first is to use the PythonInterpreter object to execute Python code contained in a separate file. Let’s create a Python file called “testCode.py” with the following code inside of it:
import math
print ‘Hello Jython World!’
print math.pi
print math.e
print math.sqrt ( 25 )
Execuing it in Java is simple using the execfile method:
import org.python.util.PythonInterpreter;
import org.python.core.*;
class TestPython {
public static void main ( String[] args ) {
try {
org.python.util.PythonInterpreter python = new org.python.util.PythonInterpreter();
python.execfile ( “testCode.py” );
} catch ( Exception e ) {
System.out.println ( “An error was encountered.” );
}
}
}
We can also place code within the Java application itself using the exec method. Let’s recreate our first example with the Python code included in the Java application rather than an external file:.exec ( “print math.sqrt ( 25 )” );
} catch ( Exception e ) {
System.out.println ( “An error was encountered.” );
}
}
}
We can also get and set variables in the local namespace by using the get and set methods. This allows us to interact with Python a bit more, which is our goal when embedding Python code inside Java applications:.set ( “ourSetVariable”, new org.python.core.PyInteger ( 25 ) );
python.exec ( “ourSetVariable = math.sqrt ( ourSetVariable )” );
org.python.core.PyObject ourSetVariable = python.get ( “ourSetVariable” );
python.exec ( “print ” + ourSetVariable );
} catch ( Exception e ) {
System.out.println ( “An error was encountered.” );
}
}
}
Conclusion
Jython is an extremely powerful tool for development in both Python and Java. It allows you to use Python to call Java classes, increasing the power of Python, and it allows you to execute Python code inside Java applications by using the exec and execfile methods. Jython also allows you to subclass Java classes using Python. Jython can significantly decrease the effort needed to produce a powerful application, or even an applet, servlet or bean.
Of course, this article only covers the tip of the iceburg (forgive the cliché!). There is a lot more to Jython, but it would take a very lengthy article to cover all of it. It’s up to you to explore more of Jython. Good luck! | http://www.devshed.com/c/a/python/introduction-to-jython/1/ | CC-MAIN-2015-22 | refinedweb | 1,559 | 60.82 |
jGuru Forums
Posted By:
George_Simple
Posted On:
Wednesday, March 19, 2003 03:53 AM
Hi, everyone!
I am a C++ programmer before and after reading the
internal implemention method of Java Thread. I think
Java has no function level lock and only object level
lock. I mean to acquire a function lock is the same as
acquire an object's lock.
For example, when objectA.synchronizedFunctionA is called,
suppose synchronizedFunctionA is a synchronized function of
class A. Java only check whether objectA is locked. The synchronized function synchronizedFunctionA has no individual function level lock.
Am I correct?
If I am correct, I think in some certain application, function level lock will be more efficient than object level lock. Even though it should be used more carefully.
Thanks in advance,
George
Re: Java has no function level lock?
Posted By:
Marian_Olteanu
Posted On:
Saturday, March 29, 2003 01:53 AM
public class A{ private static final Object syncObjectA = new Object(); public void synchronizedFunctionA( .... ) { synchronized( syncObjectA ) { // Your sync code } }// Other stuff.......}
Posted By:
Anonymous
Posted On:
Wednesday, March 19, 2003 10:44 PM
Posted By:
Anonymous
Posted On:
Wednesday, March 19, 2003 10:39 PM | http://www.jguru.com/forums/view.jsp?EID=1067801 | CC-MAIN-2014-23 | refinedweb | 193 | 58.79 |
About |
Projects |
Docs |
Forums |
Lists |
Bugs |
Get Gentoo! |
Support |
Planet |
Wiki
-------- Original Message --------
Subject: java-config broken
Date: Mon, 19 Sep 2005 17:54:30 +0100
From: Peter B. West <pbw@...>
To: gentoo-java@g.o
Having just updated my jdk, I am now receiving the following error
message when I try to run java-config:
Traceback (most recent call last):
File "/usr/bin/java-config", line 14, in ?
from java_config import jc_options
ImportError: No module named java_config
Anyone know what's happening here?
Peter
I sent the above message under the wrong email address, and in the
meantime, I have found the problem. I had enabled ~x86 on python to get
some other packages installed, and I finally updated python itself to
2.4.1-r1. I had to emerge java-config again to get it working.
Peter
--
Peter B. West <>
Folio <>
Updated Jun 17, 2009
Summary:
Archive of the gentoo-java mailing list.
Donate to support our development efforts.
Your browser does not support iframes. | http://archives.gentoo.org/gentoo-java/msg_3d053b8a3106195e00ada29cc3b4888e.xml | CC-MAIN-2013-20 | refinedweb | 167 | 57.57 |
00001 // -*- C++ -*- forwarding header. 00002 00003 // Copyright (C) 2001, 2002, 2003, 2004, 2005, 2009, 2010 00004 // Free Software Foundation, Inc. 00005 // 00006 // This file is part of the GNU ISO C++ Library. This library is free 00007 // software; you can redistribute it and/or modify it under the 00008 // terms of the GNU General Public License as published by the 00009 // Free Software Foundation; either version 3, or (at your option) 00010 // any later version. 00011 00012 // This library is distributed in the hope that it will be useful, 00013 // but WITHOUT ANY WARRANTY; without even the implied warranty of 00014 // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 00015 // GNU General Public License for more details. 00016 00017 // Under Section 7 of GPL version 3, you are granted additional 00018 // permissions described in the GCC Runtime Library Exception, version 00019 // 3.1, as published by the Free Software Foundation. 00020 00021 // You should have received a copy of the GNU General Public License and 00022 // a copy of the GCC Runtime Library Exception along with this program; 00023 // see the files COPYING3 and COPYING.RUNTIME respectively. If not, see 00024 // <>. 00025 00026 /** @file ciso646 00027 * This is a Standard C++ Library file. You should @c \#include this file 00028 * in your programs, rather than any of the @a *.h implementation files. 00029 * 00030 * This is the C++ version of the Standard C Library header @c iso646.h, 00031 * and its contents are (mostly) the same as that header, but are all 00032 * contained in the namespace @c std (except for names which are defined 00033 * as macros in C). 00034 */ | http://gcc.gnu.org/onlinedocs/gcc-4.6.3/libstdc++/api/a00801_source.html | CC-MAIN-2015-06 | refinedweb | 272 | 70.23 |
If you copy ContactBlock.cs, make sure to create a unique GUID - to not use the same.
You do not need to create a separate model and a controller, if you do not want to. If you are only displaying data as the editor inputs it, you will probably not need it.
Simple example.
Block type definition (also used as the model):
using System.ComponentModel.DataAnnotations; using EPiServer.DataAbstraction; using EPiServer.DataAnnotations; using EPiServer.Core; namespace Alloy.Models.Blocks { [SiteContentType(GUID = "176CE440-C869-448B-B2E2-8626D479B368")] public class MyBlock : BlockData { [Display(GroupName = SystemTabNames.Content, Order = 1)] [CultureSpecific] public virtual string Message { get; set; } } }
Name the view MyBlock.cshtml and place it in /Views/Shared/Blocks
@model Alloy.Models.Blocks.MyBlock @Html.PropertyFor(x => x.Message)
Check the file /Alloy.Business.Rendering.SiteViewEngine for the definition on where block views are placed.
If you were to create a controller (MyBlockController), and a separate model (MyBlockModel), you could place the view were you initially placed it.
The documentation on the episerver website tells you how to create a page, but doesnt tell you how to create a block with its corresponding model and view.
With Alloy, I have tried copying some of the code to create a new block, but have not had much luck yet.
in Models/Blocks I copied "ContactBlock.cs" to MyBlock.cs and removed its fields and added just one, "name". I am not sure what this is - its kind of like a model. Its not a veiw, and not a controller.
in Models/ViewModels I copied "ContactBlockModel.cs" to MyBlockModel.cs with just one property: name. Presumably this is actually the model. As an aside, I tried creating a property of type long, but this gives errors, it only works for sting.
I copied Views/ContactBlock/index.cshtml to Views/MyBlock/index.thml thus:
I can now Create "My" blocks in the CMS editor, and I can drag them to content areas, but they show as "The "MyBlock" can not be displayed.
How do I get the block to display in the CMS Page Editor?
Once I have figured this out, ill try to get it to display "name" on the actual website.
Basially, I am trying to do a "hello world" block. I cant find any documenation/samples of this other than alloy. Alloy is great, but not easy to reverse engineer without documentation. | https://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2020/6/how-to-create-a-preview-for-a-block/ | CC-MAIN-2021-17 | refinedweb | 398 | 61.12 |
Managing Ensemble
Purging Production Data
[Back]
Ensemble Management Guides
>
Managing Ensemble
>
Purging Production Data
Class Reference
Search
:
The
[Ensemble] > [Purge Management Data]
page allows you to delete outdated entries from the Event Log, message warehouse, business process log, business rule log, and I/O archive log. You can only purge entries associated with the active namespace. To navigate to this page, click
Ensemble
, click
Manage
, click
Purge Management Data
, and then click
Go
.
This chapter contains the following topics:
First-time Purges
Purging Data Manually
Purging Data Periodically
First-time Purges
The activity of purging generates extra journaling, especially if you purge a large volume of data, and this journaling can consume a large amount of disk space. For performance reasons, you might adopt the following approach when you first purge management data:
Start by setting the purge parameters so that a relatively small amount of data is purged.
As discussed in the next section you use the option
Do not purge most recent
to specify how many days’ worth of records to keep. Set this to a relatively large number.
Perform the purge.
Gradually decrease the number of days of data to keep until
Do not purge most recent
is set as desired.
Purging Data Manually
Typically you purge data manually during development and testing. (In contrast, for a live system, you typically purge data on a periodic basis, as described in the
next section
.)
To purge data manually, use the
[Ensemble] > [Purge Management Data]
page. This page displays the following information:
Type of Record
Identifies the purpose of each row in the table. Each row contains one type of artifact that running productions produce on an ongoing basis:
Event Log
,
Messages
,
Business Processes
,
Business Rule Log
,
I/O Log
, or
Managed Alerts
.
Count
Total number of entries of this type stored for this production. Use the
Count
to decide whether or not it is worthwhile to purge the records and if so, how many days’ worth of records you want to keep.
Total number of entries of this type that the purge process deleted after you click
Start Purge
and the purge completes.
Purge Criteria
Beneath the table is a box to enter your purge criteria and the command to perform the purge.
Include message bodies
If selected, this check box indicates that when Ensemble purges message headers, it should also purge the associated message bodies. Ensemble verifies that body classes exist and are persistent, before purging them.
If this check box is clear (the default), message header data is purged, but message body data is retained. If you purge message headers but retain the message bodies, then the Management Portal provides no way to purge the orphaned message bodies. In this case, you must purge the message bodies programmatically. In the
ENSDEMO
database, the class
Demo.Util.CleanupSet
provides an example of how you might do this.
Important:.
Keep data integrity
If selected (the default) this check box indicates that when Ensemble purges message headers, even if a message meets the age criterion for purging, it is not deleted unless its status is
complete
. Ensemble considers messages to be
complete
if they are marked Complete, Error, Aborted, or Discarded. This is to keep session-level integrity.
The query that identifies the messages to delete checks all the messages (including business process instances) in a session to see whether any of them are not complete. The purge only performs the delete if all the messages are complete. The scope of this query, therefore, has an impact on the time taken to do the purge.
This option is important to support long-running business processes. Usually this is the desired behavior. However, if you know there are old messages in the system whose incomplete status is not significant, you can purge them by clearing the
Keep data integrity
check box.
Important:
The purge criteria for using
Keep data integrity
also includes Ensemble system processes such as the Scheduler. Before clearing this check box, carefully consider the value for the
Do not purge most recent
setting.
Do not purge most recent
This tells Ensemble how many days’ worth of records to keep. The number can be 0 (zero), which keeps nothing and deletes all entries that exist at the time of the purge operation. The default value for
Do not purge most recent
is 7, which keeps entries for the last seven days.
The count of days includes
today
, so keeping messages for 1 day keeps the messages generated on the current day, according to local server time.
At the bottom of the
Purge Criteria
box is the
Start Purge
command. If you click it, Ensemble immediately purges the persistent store according to the parameters you have entered. The page uses a background job to do purges, and reports the results of the last-run purge, including a status code, or a notice if the background job is running or has failed to run. When the purge completes, the
Deleted
column contains the number of records purged for each type. If a purge is currently executing in the namespace, the
Start Purge
command is inactive.
Caution:
You cannot undo the
Start Purge
operation.
Purging Data Periodically
For a live system, you typically purge data on a periodic basis. To do so
Click
System Operation
, click
Task Manager
, and then click
New Task
.
For
Task name
, enter a task name.
For
Namespace to run task in
, select the applicable namespace.
For
Task type
Ens.Util.Tasks.Purge
.
When you select this class, the page updates to display configurable information for this task.
Specify the following options:
BodiesToo
Corresponds to the
Include message bodies
option. It is very important to purge message bodies; consequently, this option should be set to true in most cases. If you are not regularly purging message bodies, you should provide another mechanism to ensure that this storage is freed.
KeepIntegrity
Corresponds to the
Keep data integrity
option.
NumberOfDaysToKeep
Corresponds to the
Do not purge most recent
option.
TypesToPurge
Select the type of data to purge.
Specify other options as needed. For details, see
Using the Task Manager
in the chapter
Managing Caché
in the
Caché System Administration Guide
.
Complete the wizard as described in the previously referenced section.
[Back]
[Top of Page]
© 1997-2018, InterSystems Corporation
Content for this page loaded from EGMG.xml on 2018-10-15 05:49:28 | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EGMG_purge | CC-MAIN-2018-43 | refinedweb | 1,064 | 54.63 |
Splash screen in not showing in exec file
Hello there,
I have created a splash screen which is shown on the launch of my project when I run my application in qt creator. However, when I try to click on the app.exe file in the release folder, my application launches without showing the splash screen. Can you please tell me why is this happening?
I tried building my application several times and the behavior stays the same.
Thanks,
Vidushi
- Wieland Moderators
Hi! Please provide a minimal working example.
Also, I have added all the images in the resources folder. So that is also checked. Following is my code
#include "home.h"
#include "splashscreen.h"
#include <QApplication>
#include <QStyleFactory>
#include <QDebug>
#include "QsLog.h"
#include "QsLogDest.h"
#include "QsLogWindow.h"
#include <QDir>
#include <QSplashScreen>
#include <QTimer>
#include <QtNetwork>
#include <QUrl>
#include <QUrlQuery>
#include <QJsonDocument>
#include <QJsonParseError>
#include <QJsonArray>
#include <string>
#include <QRect>
void sendRequest();
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
//set the application icon and splash screen a.setWindowIcon(QIcon("./img/icon_2.png")); QPixmap splashImage("./img/icon_3.png"); splashscreen* splash = new splashscreen(splashImage); QFont splashFont; splashFont.setFamily("Arial"); splashFont.setBold(true); splashFont.setPixelSize(11); splashFont.setStretch(125); splash->setFont(splashFont); // splash->setMask(splashMask); splash->setWindowFlags(Qt::WindowStaysOnTopHint | Qt::SplashScreen); splash->show(); a.processEvents(); splash->showStatusMessage(QObject::tr("Initializing…")); sendRequest(); a.processEvents(); splash->showStatusMessage(QObject::tr("Getting user permissions and roles…")); Home w; w.ui->tabWidget->setStyle(QStyleFactory::create("Fusion")); QTimer::singleShot(2500,splash,SLOT(close())); QTimer::singleShot(2500,&w,SLOT(show())); // w.show(); return a.exec();
}
void sendRequest()
{
// http call
}
what is
splash? it's undeclared identifier in
main
If it's a QSplashScreen and you are setting a pixmap on it, make sure you deploy the imageformats plugin to handle the image format. for example, if you set a jpg as splash you have to deploy the
qjpeg.dll
Hi,
To add to @VRonin your path is relative so you should also have the files in the proper place when you deploy your application. Or since it's a splash screen, you might want to embed it using Qt's resource system.
@SGaist : I have embedded it in the resources file as well explicitly and added a relative path. I can see my splash screen image in the img.qrc. Is there any other folder i need to put my file?
@VRonin: Will i also have to include a dll associated to png image? splash is declared as an instance of splashcreen class. code of that class is as follows:
#include "splashscreen.h" #include <QRect> #define COPYRIGHT_PREFIX 0xA9 splashscreen::splashscreen(const QPixmap& pixmap) { QSplashScreen::setPixmap(pixmap); } splashscreen::~splashscreen() { } void splashscreen::drawContents(QPainter *painter) { QPixmap textPix = QSplashScreen::pixmap(); painter->setPen(this->color); QString copyrightString; copyrightString.sprintf( "%c", COPYRIGHT_PREFIX); painter->drawText(QRect(200,253,415, 200), Qt::AlignLeft, "Version: 0.1\n\nCopyright"+copyrightString+" 2017-2018 Intel Corporation\n\nApplication suite: METIS Client\n\n"+this->message); // painter->drawText(QRect(200,453,415, 200), Qt::AlignLeft, this->message); // painter->drawText(this->rect, this->alignement, this->message); } void splashscreen::showStatusMessage(const QString &message, const QColor &color) { this->message = message; this->color = color; this->showMessage(this->message, this->alignement, this->color); } void splashscreen::setMessageRect(QRect rect, int alignement) { this->rect = rect; this->alignement = alignement; }
IIRC png is a builtin format.
You still need to pass the correct path to your embedded file. Did you check that ?
And it works! Path issue. thanks guys!
You're welcome !
Since you have it working now, please mark the thread as solved using the *Topic Tools" button so that other forum users may know a solution has been found :)
Hey guys,
I was able to see the icons and splash screen when i launched my application using the .exe file in the release folder finally. However, I faced another issue when I packed my file for distribution (using innosetup). When I installed the packaged file, I was not able to see the icon I created on the desktop icon and the task bar(once open).
This problem was not there before the packaging.
Did you follow that guide ? | https://forum.qt.io/topic/76221/splash-screen-in-not-showing-in-exec-file | CC-MAIN-2017-51 | refinedweb | 685 | 50.43 |
Once.
I’m a fan of the Cartoon Network show Adventure Time! The sample site we’ll use is essentially a very simple Adventure Time! fan site. You can see a glimpse of what the sample looks like below:
While the site is intentionally simple for example purposes, it offers the opportunity to explore a number of important features beyond the basics of templating and content creation:
You can pull the full source of the example (plus alternatives built using other static site engines) on GitHub.
All the character data, content and images used in the sample are from the Adventure Time! wiki. The design was based upon a free template from HTML5UP (because you don’t want to see what this would look like if I designed it myself).
Jekyll is built on Ruby. Installing it on OSX is easy.
sudo gem install jekyll
Unfortunately, Jekyll is not officially supported on Windows. However, the official documentation does link to a walkthrough on how to get Jekyll running on a Windows machine. While not as simple as the one command install on OSX, I can confirm that it worked for me on a Surface Pro running Windows 8.1.
To start a new site using Jekyll, simply enter the following command:
jekyll new [project name]
For the example site, we just gave the project a name of “jekyllsite”, so the command is
jekyll new jekyllsite. This will generate a folder with the given project name that includes bunch of files that we can modify to start building our Jekyll site.
One important aspect to understand within these generated files is that any file or folder that is preceded by an underscore (ex. _layouts) does not create a corresponding file or folder when the code is generated. This means, for example, that when you build and deploy the static files there will be no _layouts folder.
To test these the starter site, simply change directory into the project folder and fire up the Jekyll local server.
cd jekyllsite jekyll serve
The local server runs on port 4000 by default (though this is configurable with the
-P option), so you can access your page by opening in your browser. The Jekyll server has a number of potentially useful options such as
-w to watch for changes and automatically rebuild,
-t to show a full backtrace when an error occurs or
-V to enable verbose output. For a full list of server options use
jekyll serve -h.
By default, Jekyll uses a Ruby templating library created by Shopify called Liquid. I say “by default” because Jekyll supports extensions and there are a number of extensions for supporting additional tempalating languages. However, Liquid itself is fairly easy to learn and quite powerful.
Your templates should be placed in the _templates folder. Let’s look at some of the basics. Again, we’ll be using the code from the sample available on GitHub.
To output a variable using Liquid in your Jekyll templates, you simply wrap the variable name using curly braces. For example, the below HTML (from _layouts/page.html) is outputting the title of a page.
<h2>{{ page.title }}</h2>
If you’re wondering how you know what variables Jekyll makes available within your templates, here’s the list.
By default Liquid also offers a bunch of standard filters that can be extremely helpful for formatting data. For example, in the following example from index.html in the sample formats the post’s date using the date filter:
<p>Posted {{ post.date | date: "%b %-d, %Y" }}</p>
Jekyll actually expands on Liquid filters, adding some of its own.
Of course, when you are writing templates, you’re going to want to split up your files both to make your code more readable and easier to maintain, but also to allow you to reuse portions. This is easy to do using the
include tag. Includes should be placed under the
_includes folder.
The following code from _layouts/default.html includes the file _includes/header.html:
{% include header.html %}
You can wrap display elements in
if/
else (or
elseif) to display it based upon certain conditions. For instance, in _includes/header.html, we only want to display the banner if we are on the home page:
{% if page.url == "/index.html" %} <!-- Banner --> <section id="banner"> <header> <h2>Explore the Land of Ooo...</h2> <p>...and its many kingdoms!</p> </header> </section> {% endif %}
Looping is something you will be doing frequently throughout your templates. Whether you are looping through your posts or looping through data from a YAML/JSON/CSV file (I’ll discuss this in detail later in this article). This can be done with a for loop.
This example from index.html, we loop over an array of data to populate a part of the page that displays Adventure Time! characters. It passes each item in the array as an object in the variable
character. Within the loop, we are outputting all of the properties of the character object.
{% for character in site.data.characters %} <div class="4u"> <section class="box"> <span class="image featured"><img src="{{character.image}}" alt="" /></span> <header> <h3>{{character.name}}</h3> </header> <p>{{character.description}}</p> </section> </div> {% endfor %}
We can also limit how many iterations in a loop are allowed. The following example, also from index.html, shows only the first two posts using
limit.
{% for post in site.posts limit:2 %} <div class="6u"> <section class="box"> <a href="{{ post.url | prepend: site.baseurl }}" class="image featured"><img src="{{ post.banner | prepend: site.baseurl }}" alt="" /></a> <header> <h3>{{ post.title }}</h3> <p>Posted {{ post.date | date: "%b %-d, %Y" }}</p> </header> <p>{{ post.excerpt }}</p> <footer> <ul class="actions"> <li><a href="{{ post.url | prepend: site.baseurl }}" class="button icon fa-file-text">Continue Reading</a></li> </ul> </footer> </section> </div> {% endfor %}
We can even start a loop with an offset. For example, the following code from _includes/footer.html starts at the third post entry and stops after five iterations.
{% for post in site.posts limit:5 offset:2 %} <li> <span class="date">{{ post.date | date: "%b" }} <strong>{{ post.date | date: "%-d" }}</strong></span> <h3><a href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a></h3> <p>{{ post.shortdesc }}</p> </li> {% endfor %}
Within loops you also have access to a number of variables to get the current iteration, whether this is the first or last iteration and so on. You also have ability to reverse a loop. Check the documentation for more details.
Now that we’re comfortable building our templates with Liquid, it’s time to add some posts. By default, posts are written in Markdown, though other formats are supported by plugins.
If you aren’t familiar with Markdown, it’s very easy to learn and many code editors either support it by default or via free extensions. There are even a ton of standalone Markdown editors, for example I use Mou on OSX and there is MarkdownPad on Windows. We won’t go into detail about the specifics of Markdown here.
There are two important aspects to understand about posts. The first is that posts must be placed in the _posts folder and must be named using the format of
year-month-day-title.markdown (or .md). For instance, a post from our example site posted on April 21, 2014 is
2014-04-21-season-6-escape-the-citadel.markdown. The title portion of the file name is up to you, and doesn’t actually have to match the actual title of the page in the metadata (we’ll look at that in a moment). This will be translated to
/2014/04/21/season-6-escape-the-citadel.html when the final URL is created.
The other important aspect of creating posts is that every post must have “front matter“, which is YAML formatted metadata at the beginning of each post. Front matter can be added to any Jekyll file, but plays an important role in posts.
Front matter is identified by two sets of triple dashes (
---) at the beginning of a post. While, technically speaking, front matter can be left empty, it generally is not for posts. In our examples, we include some of the predefined variables such as:
layout– this specifies which layout file (from _layouts) will be used when generating this post/page.
categories– this is a space separated list that allows you to place posts/pages in subfolders when the site is generated. For example, a post with
categories: season6 episodeswill be generated into
/season6/episodes/. If you only have one category, you can alternately use just a
categoryproperty.
date– this is the only predefined variable specific to posts and overrides the date that is parsed from the post’s file name. This can be useful for ensuring that posts are properly date-sorted by Jekyll.
title– while not technically listed as a predefined variable in the documentation, posts generally include a title property (in fact the default generated posts do) that you can access via
{{ page.title }}when displaying the post.
We can add any arbitrary front matter variable we want as well, which we’ll discuss in more detail later in this article. It’s also worth noting that front matter can be added to anything, including templates or even CSS files.
When displaying posts or articles, you’ll often want to display an initial portion of text as an excerpt. Sometimes you’ll just want to grab the initial paragraph of text, but, most often, you’ll want to designate a break point where the excerpt should end to help ensure that it doesn’t exceed a particular length.
Jekyll gives you the ability to set an exerpt seperator. The _config.yml file is where we put Jekyll configuration. In our sample application, we chose a seperator of
<--more--> by adding the following setting:
excerpt_separator: "<!--more-->"
The excerpt is available as a property on a post/page (from index.html).
<p>{{ post.excerpt }}</p>
If you want to disable the excerpt entirely, you can set it to an empty string. You can also manually set the excerpt by adding a property named
excerpt to the “front matter” metadata on each post.
It is very common that you’ll have your own set of metadata, whether global to the site or local to a post/page, that you’ll want to add. Jekyll makes this very easy to do.
Any variable that you set within the _config.yml in the root of your Jekyll site will be available throughout the site whenever the page is being processed. For example, let’s look at a portion of the configuration file from our sample site:
# Site settings title: Adventure Time! email: brian.rinaldi@gmail.com banner: "/images/about.jpg" description: > # this means to ignore newlines until "baseurl:". baseurl: "" # the subpath of your site, e.g. /blog/
In the settings above, we have added a banner and description. These are accessible within the
site object. For instance, we use these to populate a portion of the footer within _includes/footer.html:
<section> <header> <h2>What's this all about?</h2> </header> <a href="#" class="image featured"><img src="{{site.banner | prepend: site.baseurl}}" alt="" /></a> <p>{{ site.description }}</p> </section>
As was briefly mentioned earlier, we can add any arbitrary item to a post or page within the front matter. Let’s look at the front matter within the _posts/2014-06-12-season-6-food-chain.markdown file:
--- layout: post title: "Food Chain (Season 6)" date: 2014-06-12 10:33:56 categories: season6 episodes shortdesc: Finn and Jake learn about the food chain by becoming the food chain. banner: /images/foodchain.jpg ---
As we discussed earlier, the
layout,
title,
date and
categories variables are standard post variables. However, we’ve added
shortdesc (which is used for a very short description of the post) and
banner (which adds a per post banner image).
These variables are, of course, accessible when outputting the display of a post, as in _layouts/post.html:
<!-- Content --> <article class="box post"> <div class="image featured" style="background-image: url('{{ page.banner | prepend: site.baseurl }}');"></div> <header> <h2>{{ page.title }}</h2> <p>{{ page.shortdesc }}</p> </header> {{ content }} </article>
Just as importantly, we can access these when looping through a list of posts, such as in _includes/footer.html in our sample site where we utilize the short description (
shortdesc) variable that was added:
<ul class="dates"> {% for post in site.posts limit:5 offset:2 %} <li> <span class="date">{{ post.date | date: "%b" }} <strong>{{ post.date | date: "%-d" }}</strong></span> <h3><a href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a></h3> <p>{{ post.shortdesc }}</p> </li> {% endfor %} </ul>
Not all content on a site is a post, of course. Sometimes, you’ll want the ability to have other types of data that can populate sections of the site. For instance, on our sample site we maintain a list of Adventure Time! characters with names, descriptions and photographs. By placing these in a data file within Jekyll, we are free to output (and filter) them wherever and however we want, while maintaining only a single copy of the data. Let’s look at how this works.
Data files are placed in the _data folder in the root of your Jekyll site. Jekyll actually allows us to use YAML, JSON or even a CSV file to maintian the data. In the case of our character data, we’ll use YAML. Here’s a snippet from _data/characters.yaml:
- name: "Finn the Human" image: "/images/finn.jpg" description: "Finn is a 15-year-old human. He is roughly five feet tall and is missing several teeth due to his habit of biting trees and rocks among other things." - name: "Jake the Dog" image: "/images/jake.jpg" description: "Jake can morph into all sorts of fantastic shapes with his powers, but typically takes the form of an average sized yellow-orange bulldog."
Any data files within the _data folder are processed by Jekyll and made available under the
site.data object, using the file name (sans extension) as the object to access the data. For example, our character data is accessible under
site.data.characters since the file was characters.yaml.
We can now use our data to loop through and access the character information and display it, as in index.html from the sample site.
{% for character in site.data.characters %} <div class="4u"> <section class="box"> <span class="image featured"><img src="{{character.image}}" alt="" /></span> <header> <h3>{{character.name}}</h3> </header> <p>{{character.description}}</p> </section> </div> {% endfor %}
Our data structures are simple, but we’re free to add nested data structures if needed.
Jekyll also supports collections, which would have worked for our characters data just as well. You can see documentation for collections here.
Now that’s we’ve built our awesome Adventure Time! fan site, it’s time to generate and deploy it. Actually, if you’ve previewed the site using Jekyll’s local server, Jekyll has already done a build for you. The generated files can be found in the _site folder. To manually build the site, simply enter the command:
jekyll build
This will regenerate all the files and place them in the _site folder. You can override this destination if you choose to using the
-d option and specifying a directory where the generated site should be placed.
We’re ready to launch our site, all you need to do is open your FTP client and upload the generated files to our host!
There are a number of options for deploying other than manually uploading via FTP. For example, I’ve use a Ruby gem called Glynn which is designed for automate connecting to FTP and deploy a Jekyll site. It can easily be configured via the _config.yml.
If you want to host your site on GitHub Pages, there is a gem file available to make the deployment process easy (check Jekyll’s documentation on this topic here).
There are numerous other options, some of which are covered in the Jekyll deployment documentation. Keep in mind, of course, that the generated files are just static HTML, CSS and JavaScript, so you can deploy them just about anywhere without needing an integrated deployment solution.
Whether you are running a blog, an online magazine or a company web site, static sites are a viable option, especially if your site is content-focused. Jekyll makes it relatively easy to build static web sites that are easy to update and deploy. Hopefully this guide has piqued your interest and you’ll give Jekyll a try. If you do, please feel free to share your experiences in the comments below.
Header image courtesy of unicornlover69 | https://developer.telerik.com/featured/getting-started-with-jekyll/ | CC-MAIN-2018-22 | refinedweb | 2,802 | 66.33 |
Grunt.
Latest beta release is
6.0.0-beta.19 which is compatible with TypeScript 2.7, and any future version of TypeScript by using the tsconfig.json passthrough feature, or the additionalFlags option.
Latest stable release is
5.5.1 with built-in support for features added in TypeScript 1.8. Full changelog is here.
Thank you for your interest in contributing! Please see the contributing guide for details.
Do you use grunt-ts? Would you like to help keep it up-to-date for new TypeScript versions? Please let @nycdotnet know.
To install grunt-ts, you must first install TypeScript and GruntJS.
npm install typescript --save-dev.
npm install grunt --save-dev.
npm install grunt-cli -g.
devDependenciesin your
package.json.
<%and
%>as tokens for html replacements with grunt-ts anymore. In grunt-ts 6.0 and higher, you must use
{%and
%}for HTML replacement tokens.
fastmode unless
verbose: trueis specified in the task or target
options(See #389).
If you've never used GruntJS on your computer, you should follow the detailed instructions here to get Node.js and the grunt-cli working. If you're a Grunt expert, follow these steps:
npm install grunt-tsin your project directory; this will install
grunt-ts, TypeScript, and GruntJS.
tstask in your
Gruntfile.js(see below for a minimalist one).
gruntat the command line in your project folder to compile your TypeScript code.
This minimalist
Gruntfile.js will compile your TypeScript project using the specified
tsconfig.json file. Using a
tsconfig.json is the best way to use TypeScript:
//module {grunt;grunt;grunt;};
If you prefer the GruntJS idiom, this minimalist
Gruntfile.js will compile
*.ts files in all subdirectories of the project folder, excluding anything under
node_modules. Please note - it is almost always better to use a
tsconfig.json to compile your TypeScript instead of doing it this way:
module {grunt;grunt;grunt;};
A more extensive sample
Gruntfile.js is available here.
filesobject (for instantiating multiple independent
tscruns in a single target), etc.
tscTypeScript Compiler via options in the gruntfile
tstask, and also supports switch overrides per-target.
--outswitch
Grunt-ts provides explicit support for most
tsc switches. Any arbitrary switches can be passed to
tsc via the additionalFlags feature.
For file ordering, look at JavaScript Generation.
Note: In the above chart, if "where to define" is "target", the property must be defined on a target or on the
ts object directly. If "where to define" is "options", then the property must be defined on an
options object on
ts or on a target under
ts.
Grunt-ts does not support the GruntJS standard
dest target property. Instead, you should use files, out, or outDir.
Grunt-ts supports use of the GruntJS-centric
files property on a target as an alternative to the
tsc-centric use of
src and
out/
outDir.
Notes:
fastgrunt-ts option is not supported in this configuration. You should specify
fast: 'never'to avoid warnings when
filesis used.
destwith grunt-ts. A warning will be issued to the console. If a non-empty array is passed, the first element will be used and the rest will be truncated.
destparameter ends with ".js", the value will be passed to the
--outparameter of the TypeScript compiler. Otherwise, if there is a non-blank value, it will be passed to the
--outDirparameter.
--outDirparameter, specify it as "src/" in the dest parameter to avoid grunt-ts warnings.
Here are some examples of using the target
files property with grunt-ts:
grunt;
Grunt-ts supports compilation of
.html file content to TypeScript variables which is explained in detail here. The
html target property acts similarly to
src, except that it searches for html files to convert to TypeScript variables. See also htmlModuleTemplate and htmlVarTemplate.
// How to use the html target property (incomplete example)grunt;
Note: the
html compilation functionality will not fire if the
src property is not specified. If you wish to only have the HTML compile to TypeScript without compiling the resulting
.ts files to JavaScript, make sure they're excluded from the
src globs, or else specify an empty
src array alongside the
html task property, and set the target
compile option to
false:
// Example of how to compile html files to TypeScript without compiling the resulting// .ts files to JavaScript.grunt;
This section allows global configuration for the grunt-ts task. All target-specific options are supported. If a target also has options set, the target's options override the global task options.
Passes the --out switch to
tsc. This will cause the emitted JavaScript to be concatenated to a single file if your code allows for that.
Note - the sequence of concatenation when using namespaces (formerly called internal modules) is usually significant. You can assist TypeScript to order the emitted JavaScript correctly by changing the sequence in which files appear in your glob. For example, if you have
a.ts,
b.ts, and
c.ts and use the glob
'*.ts, the default would be for TypeScript to concatenate the files in alphabetical order. If you needed the content from
b.ts to appear first, and then the rest in alphabetical order, you could specify the glob like this:
['b.ts','*.ts'].
Note - the
out feature should not be used in combination with
module because the TypeScript compiler does not support concatenation of external modules; consider using a module bundler like WebPack, Browserify, or Require's r.js to concatenate external modules.
grunt;
Warning: Using the compiler with
out and
reference will prevent grunt-ts from using its fast compile feature. Consider using external modules with transforms instead.
Passes the --outDir switch to
tsc. This will redirect the emitted JavaScript to the specified directory and subdirectories.
grunt;
Grunt-ts can automatically generate a TypeScript file containing a reference to all other found
.ts files. This means that the developer will not need to cross-reference each of their TypeScript files manually; instead, they can just reference the single
reference file in each of their code files.
grunt;
Note: the TypeScript file identified in the
reference property must be included in the
src or
files property in the Grunt target, or
reference won't work (either directly or via wildcard/glob).
Note: It is not supported to use
reference with
files.
Warning: Using the compiler with
out and
reference will prevent grunt-ts from using its fast compile feature. Consider using external modules with transforms instead.
Allows you to specify the TypeScript files that will be passed to the compiler. Supports standard GruntJS functionality such as globbing. More info at Configuring GruntJS Tasks]().
grunt;
Grunt-ts can use the TypeScript compilation settings from a Visual Studio project file (.csproj or .vbproj).
In the simplest use case, specify a string identifying the Visual Studio project file name in the
vs target property. Grunt-ts will extract the TypeScript settings last saved into the project file and compile the TypeScript files identified in the project in the manner specified by the Visual Studio project's configuration.
grunt;
If more control is desired, you may pass the
vs target property as an object literal with the following properties:
project: (
string, mandatory) the relative path (from the
gruntfile.js) to the Visual Studio project file.
config: (
string, optional, default = '') the Visual Studio project configuration to use (allows choosing a different project configuration than the one currently in-use/saved in Visual Studio).
ignoreFiles: (
boolean, optional, default =
false) Will ignore the files identified in the Visual Studio project. This is useful if you want to keep your command-line build settings synchronized with the project's TypeScript Build settings, but want to specify a custom set of files to compile in your own
srcglob. If not specified or set to false, the TypeScript files referenced in the Visual Studio project will be compiled in addition to any files identified in the
srctarget property.
ignoreSettings: (
boolean, optional, default =
false) Will ignore the compile settings identified in the Visual Studio project. If specified, grunt-ts will follow its normal behavior and use any TypeScript build settings specified on the target or its defaults.
All features of grunt-ts other than
files, are compatible with the
vs target property. If you wish to add more files to the compilation than are referenced in the Visual Studio project, the
src grunt-ts property can be used; any files found in the glob are added to the compilation list (grunt-ts will resolve duplicates). All other target properties and target options specified in the gruntfile.js will override the settings in the Visual Studio project file. For example, if you were referencing a Visual Studio project configuration that had source maps enabled, specifying
sourcemap: false in the gruntfile.js would keep all other Visual Studio build settings, but disable generation of source maps.
Note: Using the
vs target property with
files is not supported.
Example: Use all compilation settings specified in the "Release" TypeScript configuration from the project, but compile only the TypeScript files in the
lib subfolder to a single file in the
built folder.
grunt;
If you wish to disable the Visual Studio built-in TypeScript build, but keep the Visual Studio project properties TypeScript Build pane working, follow these instructions.
Grunt-ts can watch a directory and recompile TypeScript files when any TypeScript or HTML file is changed, added, or removed. Use the
watch target option specifying a target directory that will be watched. All subdirectories are automatically included.
Note: this feature does not allow for additional tasks to run after the compilation step is done - for that you should use
grunt-contrib-watch.
grunt;
Allows passing arbitrary strings to the compiler. This is intended to enable compatibility with features not supported directly by grunt-ts. The parameters will be passed exactly as-is with a space separating them from the previous switches. It is possible to pass more than one switch with
additionalFlags by separating them with spaces.
grunt;
Allows JavaScript files to be compiled. This setting works well with
outDir. This feature requires grunt-ts 5.5 or higher and TypeScript 1.8 or higher.
grunt;
Allows use of ES6 "default" import syntax with pre-ES6 modules when not using SystemJS. If using module format "amd", "commonjs" or "umd", the following import syntax for jQuery will give the error "Module 'jquery' has no default export" when exporting to "amd", "commonjs", or "umd" format:
import * as $ from 'jquery';. In that case, passing allowSyntheticDefaultImports will eliminate this error. NOTE: This is the default behavior when SystemJS module format is used (
module: "system"). This switch (and behavior) requires TypeScript 1.8 or higher. See this issue for more details.
grunt;
When set to true, TypeScript will not report errors on unreachable code. Requires TypeScript 1.8 or higher.
grunt;
When set to true, TypeScript will not report errors when there are unused labels in your code. Requires TypeScript 1.8 or higher.
grunt;
Deprecated - when using TypeScript >= 1.5 (most common), use rootDir instead.
When using fast compile with outDir, tsc won't guarantee the output directory structure will match the source structure. Setting baseDir helps to ensure the original source structure is mapped to the output directory. This will create a .baseDir.ts file in the baseDir location. A .baseDir.js and .baseDir.js.map will be created in the outDir.
grunt;
true default| false
Indicates if the TypeScript compilation should be attempted. Turn this off if you wish to just run transforms.
grunt;
This target option allows the developer to select an alternate TypeScript compiler.
By default,
grunt-ts will use the TypeScript compiler that came bundled with it. Alternate compilers can be used by this target option (for custom compiler builds) or using
package.json (for npm released version of
typescript).
To use a custom compiler, update your gruntfile.js file with this code:
grunt;
Download custom compilers from the current TypeScript repository on GitHub or the old TypeScript repository on CodePlex and extract it to a folder in your project. The compiler will be in the
bin folder. Copy all of the files to your project folder and then reference
tsc using the
compiler task option. For example, if you extracted everything to a
mycompiler folder in your project, you'd set the grunt-ts
compiler property to
'./mycompiler/tsc'.
In the absence of a compiler argument,
grunt-ts will look for an alternate compiler in its peer
node_modules folder (where
grunt-ts and
typescript are peers).
The
package.json would look something like this for a legacy project:
"devDependencies":"grunt" : "~0.4.1""grunt-ts" : "~1.9.2""typescript" : "0.9.7"
Note: It is safest to pin the exact TypeScript version (do not use
~ or
>).
true | false default
Retains comments in the emitted JavaScript if set to
true. Removes comments if set to
false. Note that if
removeComments are both used, the value of
removeComments will win; regardless, please don't do this as it is just confusing to everyone.
grunt;
true | false default
Generates corresponding .d.ts file(s) for compiled TypeScript files.
grunt;.
This is only available in TypeScript 1.5 and higher. If enabled, will automatically enable
experimentalDecorators
grunt;
true | false default
A new compatability mode to enable consistent runtime behavior with Babel and Webpack with regards to callable default ES module imports. See the TypeScript 2.7 Anouncement blog post for more details.
grunt;
true | false default
Set to true to emit events in Grunt upon significant events in grunt-ts. This is used by the task
validate_failure_count in the Gruntfile.js of grunt-ts itself. Currently, the only supported event is
grunt-ts.failure which will be raised upon a failed build if
emitGruntEvents is true. This is only available in grunt-ts 5.2.0 or higher.
grunt;
Example usage:
gruntevent;
true | false default
Enable support for experimental proposed ECMAScript async functionality. This is only available in TypeScript 1.6 and higher in 'es6' mode.
grunt;
true | false default
Enable support for experimental proposed ECMAScript decorators. This is only available in TypeScript 1.5 and higher.
grunt;
true default | false
TypeScript has two types of errors: emit preventing and non-emit preventing. Generally, type errors do not prevent the JavaScript emit. Therefore, it can be useful to allow the Grunt pipeline to continue even if there are type errors because
tsc will still generate JavaScript.
If
failOnTypeErrors is set to
false, grunt-ts will not halt the Grunt pipeline if a TypeScript type error is encountered. Note that syntax errors or other general
tsc errors will always halt the pipeline.
grunt;
"watch" default | "always" | "never"
If you are using external modules, grunt-ts will try to do a
fast compile by default, basically only compiling what's changed. It should "just work" with the built-in file watching as well as with external tools like
grunt-contrib-watch.
To do a fast compile, grunt-ts maintains a cache of hashes for TypeScript files in the
.tscache folder to detect changes (needed for external watch tool support). It also creates a
.baseDir.ts file at the root, passing it to the compiler to make sure that
--outDir is always respected in the generated JavaScript.
You can customize the behaviour of grunt-ts
fast.
If you are using
files, grunt-ts can't do a fast compile. You should set
fast to 'never'.
grunt;
When set to true, disallows inconsistently-cased references to the same file. For example, when using ES6-style imports, importing a file as "./MyLibrary" in one file and "./mylibrary" in another.
grunt;
Grunt-ts supports compilation of
.html file content to TypeScript variables which is explained in detail here. The
htmlModuleTemplate target property allows the developer to define a namespace for the templates. See also html and htmlVarTemplate.
//Note: incomplete - combine with html and htmlVarTemplategrunt;
Grunt-ts supports compilation of
.html file content to TypeScript variables which is explained in detail here. The
htmlVarTemplate target property allows the developer to define a property name for the template contents. See also html and htmlModuleTemplate.
//Note: incomplete - combine with html and htmlModuleTemplategrunt;
Sets a root for output of transformed-to-TypeScript HTML files. See detailed explanation of grunt-ts HTML template support.
//Note: incomplete - combine with html and src/files/etc.grunt;
Will flatten the transformed HTML files to a single folder. See detailed explanation of grunt-ts HTML template support.
//Note: incomplete - combine with html and src/files/etc.grunt;
Grunt-ts supports compilation of
.html file content to TypeScript variables which is explained in detail here. The
htmlOutputTemplate target property allows the developer to override the internally defined output template to a custom one, useful if one would like to define the HTML output as an external modules, for example.
Three variables can be used in the template, namely:
htmlModuleTemplateoption.
htmlVarTemplateoption.
//Note: Outputs an external modulegrunt;
true | false default
When true, TypeScript will emit source maps inline at the bottom of each JS file, instead of emitting a separate
.js.map file. If this option is used with
sourceMap,
inlineSourceMap will win.
grunt;
true | false default
When true, TypeScript will emit TypeScript sources "inline". This must be used with either
inlineSourceMap or
sourceMap. When used with
inlineSourceMap, the TypeScript sources and the source map itself are included in a Base64-encoded string in a comment at the end of the emitted JavaScript file. When used with
sourceMap, the escaped TypeScript sources are included in the .js.map file itself under a
sourcesContent property.
grunt;
true | false default
When true, makes scenarios that break single-file transpilation into an error. See for more details. If you are using TypeScript 1.5, and fast compilation, it is ideal to use this to take advantage of future compilation optimizations.
grunt;
`'react'` default | `'preserve'`
Specify the JSX code generation style. Documentation is here: TypeScript Wiki - JSX.
grunt;
List of library files to be included in the compilation. If
--lib is not specified a default library is injected.
grunt;
Specify culture string for error messages - will pass the
--locale switch. Requires appropriate TypeScript error messages file to be present (see TypeScript documentation for more details).
grunt;
Specifies the root for where
.js.map sourcemap files should be referenced. This is useful if you intend to move your
.js.map files to a different location. Leave this blank or omit entirely if the
.js.map files will be deployed to the same folder as the corresponding
.js files. See also sourceRoot.
grunt;
"amd" | "commonjs" | "system" | "umd" | "es6" | "es2015" | "" default | "none" same behavior as ""
Specifies if TypeScript should emit AMD, CommonJS, SystemJS, "ES6", or UMD-style external modules. Has no effect if internal modules are used. Note - this should not be used in combination with
out prior to TypeScript 1.8 because the TypeScript compiler does not support concatenation of external modules; consider using a module bundler like WebPack, Browserify, or Require's r.js to concatenate external modules.
grunt;
"node" | "classic" default
New in TypeScript 1.6. TypeScript is gaining support for resolving definition files using rules similar to common JavaScript module loaders. The first new one is support for CommonJS used by NodeJS, which is why this parameter is called
"node" The
"node" setting performs an extra check to see if a definition file exists in the
node_modules/modulename folder if a TypeScript definition can't be found for an imported module. if this is not desired, set this setting to "classic".
On Defaults. When using
--module commonjs the default
--moduleResolution will be
node. For all other
--module options the default is
--moduleResolution classic. If specified, the specified value will always be used.
grunt;
"CRLF" | "LF" | "" default
Will force TypeScript to use the specified newline sequence. Grunt-ts will also use this newline sequence for transforms. If not specified, TypeScript and grunt-ts use the OS default.
grunt;
true | false default
Set to true to pass
--noEmit to the compiler. If set to true, TypeScript will not emit JavaScript regardless of if the compile succeeds or fails.
grunt;
true | false default
Set to true to pass
--noEmitHelpers to the compiler. If set to true, TypeScript will not emit JavaScript helper functions such as
__extends. This is for very advanced users who wish to provide their own implementation of the TypeScript runtime helper functions.
grunt;
true | false default
Set to true to pass
--noEmitOnError to the compiler. If set to true, TypeScript will not emit JavaScript if there is a type error. This flag does not affect the Grunt pipeline; to force the Grunt pipeline to continue (or halt) in the presence of TypeScript type errors, see failOnTypeErrors.
grunt;
true | false default
Report errors for fallthrough cases in switch statement.
grunt;
true | false default
Set to true to pass
--noImplicitAny to the compiler. Requires more strict type checking. If
noImplicitAny is enabled, TypeScript will raise a type error whenever it is unable to infer the type of a variable. By default, grunt-ts will halt the Grunt pipeline on type errors. See failOnTypeErrors for more info.
grunt;
true | false default
Report error when not all code paths in function return a value.
grunt;
true | false default
Set to true to pass
--noImplicitThis to the compiler. Requires more strict type checking. Raise error on
this expressions with an implied
any type.
grunt;
true | false default
Set to true to pass
--noImplicitGenericChecks to the compiler. Disables strict checking of generic signatures in function types.
grunt;
true | false default
Specify this option if you do not want the lib.d.ts to be loaded by the TypeScript compiler. Generally this is used to allow you to manually specify your own lib.d.ts.
grunt;
true | false default
Do not add triple-slash references or module import targets to the list of compiled files.
grunt;
true | false default
Set to true to pass
--preserveConstEnums to the compiler. If set to true, TypeScript will emit code that allows other JavaScript code to use the enum. If false (the default), TypeScript will inline the enum values as magic numbers with a comment in the emitted JS.
grunt;
true | false default
Set to true to pass
--preserveSymlinks to the compiler. If set, TypeScript will not resolve symlinks to their real path; instead it will treat a symlinked file like a real one.
grunt;
true | false default
Stylize errors and messages using color and context.
grunt;
string
Specifies the object invoked for
createElement and
__spread when targeting 'react' JSX emit. Requires TypeScript 1.8 or higher and grunt-ts 5.5 or higher.
grunt;
true default| false
Removes comments in the emitted JavaScript if set to
true. Preserves comments if set to
false. Note that if
removeComments are both used, the value of
removeComments will win; regardless, please don't do this as it is just confusing to everyone.
grunt;
string
Affects the creation of folders inside the
outDir location.
rootDir allows manually specifying the desired common root folder when used in combination with
outDir. Otherwise, TypeScript attempts to calculate this automatically. Not specifying
rootDir can result in
outDir not matching structure of src folder when using
fast compilation. baseDir provides a poor man's version of
rootDir for those using TypeScript < 1.5.
grunt;
true | false default
Don't check a user-defined default lib file's validity. This switch is deprecated in TypeScript 2.5+ (use skipLibCheck instead).
grunt;
true | false default
Skip type checking of all declaration files (*.d.ts).
grunt;
true | false default
The strict property is a macro to enable all of the strict checks in TypeScript
grunt;
true | false default
Enforce contravariant function parameter comparison. Under
--strictFunctionTypes, any function type that doesn't originate from a method has its parameters compared contravariantly.
grunt;
true | false default
In strict null checking mode, the
null and
undefined values are not in the domain of every type and are only assignable to themselves and
any (the one exception being that
undefined is also assignable to
void).
grunt;
true | false default
The strictPropertyInitialization property ensures that properties are initialized before use
grunt;
true default | false
If true, grunt-ts will instruct
tsc to emit source maps (
.js.map files). If this option is used with
inlineSourceMap,
inlineSourceMap will win.
grunt;
The sourceRoot to use in the emitted source map files. Allows mapping moved
.js.map files back to the original TypeScript files. See also mapRoot.
grunt;
Use stripInternal to prevent the emit of members marked as @internal via a comment. For example:
/* @internal */
grunt;
true | false default
Set to true to disable strict object literal assignment checking (experimental). See for more details.
grunt;
true | false default
Set to true to pass
--suppressImplicitAnyIndexErrors to the compiler. If set to true, TypeScript will allow access to properties of an object by string indexer when
--noImplicitAny is active, even if TypeScript doesn't know about them. This setting has no effect unless
--noImplicitAny is active.
grunt;
For example, the following code would not compile with
--noImplicitAny alone, but it would be legal with
--noImplicitAny and
--suppressImplicitAnyIndexErrors both enabled:
;p = 101; //property age does not exist on interface person.console.logp;.
"es5" default | "es3" | "es6"
Allows the developer to specify if they are targeting ECMAScript version 3, 5, or 6. Support for
es6 emit was added in TypeScript 1.4 and is listed as experimental. Only select ES3 if you are targeting old browsers (IE8 or below). The default for grunt-ts (es5) is different than the default for
tsc (es3).
grunt;
Grunt-ts can integrate with a
tsconfig.json file in three ways which offer different behavior:
boolean: simplest way for default behavior.
string: still uses defaults, but allows specifying a specific path to the
tsconfig.jsonfile or the containing folder.
object: allows detailed control over how grunt-ts works with
tsconfig.json
When specifying tsconfig as a boolean
In this scenario, grunt-ts will use all settings from the
tsconfig.json file in the same folder as
Gruntfile.js.
includeproperty is present in the
tsconfig.jsonfile:
includearray and
excludearray (if present).
includeproperty is present in the
tsconfig.jsonfile and grunt-ts has
overwriteFilesGlobor
updateFilesset to true. These settings were developed for a time before
includewas available, and they don't make sense to use with it.
filesGlobproperty is present in the
tsconfig.jsonfile:
filesproperty is present, it will be modified with the result from evaluating the
filesGlobthat is present inside
tsconfig.json(the
fileselement will not be updated with the results from any glob inside
Gruntfile.js).
excludeis present, it will be ignored.
filesGlobproperty is NOT present, but
filesis present:
fileswill be added to the compilation context.
excludeis present, it will be ignored.
filesGlobnor
filesis present:
excludeproperty.
Gruntfile.js, grunt-ts will NOT update the
filesGlobin the
tsconfig.jsonfile with it nor will those files be added to the
tsconfig.json
fileselement.
tsconfigproperty should function correctly as either a task option or a target property.
tsconfig.jsonfile does not exist or there is a parse error, compilation will be aborted with an error.
grunt;
When specifying tsconfig as a string
This scenario follows the same behavior as specifying
tsconfig.json as a boolean, except that it is possible to use an explicit file name. If a directory name is provided instead, grunt-ts will use
tsconfig.json in that directory. The path to
tsconfig.json (or the directory that contains it) is relative to
Gruntfile.js.
grunt;
When specifying tsconfig as an object
This provides the most control over how grunt-ts integrates with
tsconfig.json. Supported properties are:
tsconfig:
string(optional) - if absent, will default to
tsconfig.jsonin same folder as
Gruntfile.js. If a folder is passed, will use
tsconfig.jsonin that folder.
ignoreFiles:
boolean(optional) - default is
false. If true, will not inlcude files in
filesarray from
tsconfig.jsonin the compilation context.
ignoreSettings:
boolean(optional) - default is
false. If true, will ignore
compilerOptionssection in
tsconfig.json(will only use settings from
Gruntfile.jsor grunt-ts defaults)
overwriteFilesGlob:
boolean(optional) - default is
false. If true, will overwrite the contents of the
filesGlobarray with the contents of the
srcglob from grunt-ts. This option is not supported if
includeis specified in the
tsconfig.jsonfile.
updateFiles:
boolean(optional) - If
includein the tsconfig.json file is not specified and there is a
filesGlobpresent, default is
true, otherwise false. Will modify the
filesarray in
tsconfig.jsonto match the result of evaluating a
filesGlobthat is present inside
tsconfig.json(the
fileselement will not be updated with the results from any glob inside
Gruntfile.jsunless
overwriteFilesGlobis also
true).
passThrough:
boolean(optional) - default is
false. See passThrough, below.
grunt;
If
passThrough is set to
true, grunt-ts will run TypeScript (
tsc) with the specified tsconfig, passing the
--project option only (plus anything in
additionalFlags). This provides support for custom compilers with custom implementations of
tsconfig.json support. Note: Since this entirely depends on support from
tsc, the
tsconfig option must be a directory (not a file) as of TypeScript 1.6. If you are entirely happy with your
tsconfig.json, this is the way you should run grunt-ts.
filesGlobin
tsconfig.jsonare relative to the
tsconfig.json, not the
Gruntfile.js.
tsconfighas a restriction when used with
filesin the Grunt task configuration:
overwriteFilesGlobis NOT supported if
fileshas more than one element. This will abort compilation.
filesis absent in
tsconfig.json, but
filesGlobis present, grunt-ts will create and update the
filesarray in
tsconfig.jsonas long as
updateFilesis
true(the default). Since
fileswill be created in this scenario, any values in the
excludearray will be ignored.
vskeyword. Any settings found in
tsconfig.jsonwill override any settings found in the Visual Studio project file. Any files referenced in the Visual Studio file that are not also referenced in tsconfig.json will be included in the compilation context after any files from
tsconfig.json(any files from
srcbut not in
vsor
tsconfigwill be included after that). The order of the files in
tsconfig.jsonwill override the order of the files in the VS project file.
false default | true
Will print the switches passed to
tsc on the console. Helpful for debugging.
grunt;
Objective: To allow for easier code refactoring by taking relative path maintenance burden off the developer. If the path to a referenced file changes,
grunt-ts will regenerate the relevant lines.
Transforms begin with a three-slash comment
/// and are prefixed with
ts:. When grunt-ts is run against your TypeScript file, it will add a new line with the appropriate TypeScript code to reference the file, or it will generate a comment indicating that the file you referenced could not be found.
For example, if you put this in your code:
///ts:ref=mylibrary
The next time grunt-ts runs, it might change that line to this:
///ts:ref=mylibrary/// <reference path='../path/to/mylibrary.d.ts'/> ///ts:ref:generated
Important Note: All transforms require the searched-for file to be included in the result of the
files,
src, or
vs Grunt globs. Grunt-ts will only search within the results that Grunt has identified; it does not go searching through your disk for files!
You can also run transforms without compiling your code by setting
compile: false in your config. For example:
grunt;
///ts:import=<fileOrDirectoryName>[,<variableName>]
This will generate the relevant
import foo = require('./path/to/foo'); code without you having to figure out the relative path.
If a directory is provided, the entire contents of the directory will be imported. However if a directory has a file
index.ts inside of it, then instead of importing the entire folder only
index.ts is imported.
Import file:
///ts:import=filename; ///ts:import:generated
Import file with an alternate name:
///ts:import=BigLongClassName,foo; ///ts:import:generated
Import directory:
///ts:import=directoryName; ///ts:import:generated; ///ts:import:generated...
Import directory that has an
index.ts file in it:
///ts:import=directoryName; ///ts:import:generated
See Exports for examples of how grunt-ts can generate an
index.tsfile for you
///ts:export=<fileOrDirectoryName>[,<variableName>]
This is similar to
///ts:import but will generate
export import foo = require('./path/to/foo'); and is very useful for generating indexes of entire module directories when using external modules (which you should always be using).
Export file:
///ts:export=filename; ///ts:export:generated
Export file with an alternate name:
///ts:export=filename,foo; ///ts:export:generated
Export directory:
///ts:export=dirName; ///ts:export:generated; ///ts:export:generated...
///ts:ref=<fileName>
This will generate the relevant
/// <references path="./path/to/foo" /> code without you having to figure out the relative path.
Note: grunt-ts only searches through the enumerated results of the
src or
files property in the Grunt target. The referenced TypeScript file must be included for compilation (either directly or via wildcard/glob) or the transform won't work. This is so that grunt-ts doesn't go searching through your whole drive for files.
Reference file:
///ts:ref=filename/// <reference path='../path/to/filename'/> ///ts:ref:generated.
///// Put comments here and they are preserved//grunt-start////////grunt-end///
As of grunt-ts v2.0.2, If you wish to standardize the line endings used by grunt-ts transforms, you can set the
grunt.util.linefeed property in your gruntfile.js to the desired standard line ending for the grunt-ts managed TypeScript files.
module {gruntutillinefeed = '\r\n'; // this would standardize on CRLF/* rest of config */};
Note that it is not currently possible to force TypeScript to emit all JavaScript with a particular line ending, but a switch to allow that is under discussion here:
TypeScript programming using grunt-ts (YouTube):
AngularJS + TypeScript : Workflow with grunt-ts (YouTube)
Licensed under the MIT License. | https://www.npmjs.com/package/grunt-ts | CC-MAIN-2018-13 | refinedweb | 5,591 | 59.09 |
P/Invoke in .NET Core on Red Hat Enterprise Linux
P/Invoke(Platform Invocation Service) is one of the features of CLI (Common Language Interface) on .NET Framework. P/Invoke enables managed code to call a native function in DLL (Dynamic Link Library). It’s a powerful tool for .NET Framework to execute existing C-style functions easily. .NET Core also has a P/Invoke feature and it means we can call a native function in .so file (Linux) and . file (Max OSX). I will show you the short example P/Invoke in .NET Core on Red Hat Enterprise Linux (RHEL).
Here is the simple P/Invoke sample using read function in libc. It is the same way as .NET Framework on Windows to import native function.
using System; using System.Runtime.InteropServices; using System.Text; namespace ConsoleApplication { public class Program { [DllImport("libc")] static extern int read(int handle, byte[] buf, int n); public static void Main(string[] args) { Console.Write("Input value:"); var buffer = new byte[100]; read(0, buffer, 100); Console.WriteLine("Input value is:" + Encoding.UTF8.GetString(buffer)); } } }
[dllimport] is the attribute to import a native function. We can declare method name as a native function name, and declare method name as you like with specifying native function name in EntryPoint attribute value as below.
[DllImport("libc", EntryPoint="read")] static extern int Read(int handle, byte[] buf, int n);
We can read text from console input with native libc method.
Next, I’d like to execute GUI sample written in .NET Core on RHEL. .NET Core doesn’t have GUI Framework at this point. However, we can call GUI library such as gtk+ from managed code in .NET Core. At first, install the package.
$ sudo yum install gtk3-devel
Now we can call functions in gtk+ from C# code. Here is the whole code to open dialog from C#.
using System; using System.Runtime.InteropServices; namespace ConsoleApplication { public class Program { [DllImport("libgtk-x11-2.0.so.0")] private static extern void gtk_init (ref int argc, ref IntPtr argv); [DllImport("libgtk-x11-2.0.so.0")] static extern IntPtr gtk_message_dialog_new(IntPtr parent_window, DialogFlags flags, MessageType type, ButtonsType bt, string msg, IntPtr args); [DllImport("libgtk-x11-2.0.so.0")] static extern int gtk_dialog_run(IntPtr raw); [DllImport("libgtk-x11-2.0.so.0")] static extern void gtk_widget_destroy(IntPtr widget); [Flags] public enum DialogFlags { Modal = 1 << 0, DestroyWithParent = 1 << 1, } public enum MessageType { Info, Warning, Question, Error, Other, } public enum ButtonsType { None, Ok, Close, Cancel, YesNo, OkCancel, } public static void Main(string[] args) { var argc = 0; var argv = IntPtr.Zero; gtk_init(ref argc, ref argv); var diag = gtk_message_dialog_new(IntPtr.Zero, DialogFlags.Modal, MessageType.Error, ButtonsType.Ok, "Hello from .NET Core on Red Hat!", IntPtr.Zero); var res = gtk_dialog_run(diag); gtk_widget_destroy(diag); Console.WriteLine(res); } } }
Here is a result. The dialog is opened.
P/Invoke was a technology only for Windows platform, but now it enables many platforms to call native function easily from managed code. Of course, we shouldn’t forget Mono, which enabled P/Invoke on Linux.
About the author:
Takayoshi Tanaka is the Software Maintenance Engineer of Red Hat. He is mainly in charge of OpenShift, .NET Core on Red Hat Enterprise Linux and Red Hat solutions on Microsoft Azure. He is a Microsoft MVP for Visual Studio and Development Technologies. He writes many articles in his personal blogs, writes many web articles, and makes many technical sessions in community groups.
Join the Red Hat Developer Program (it’s free) and get access to related cheat sheets, books, and product downloads. | https://developers.redhat.com/blog/2016/09/14/pinvoke-in-net-core-rhel/ | CC-MAIN-2017-22 | refinedweb | 593 | 60.31 |
Python Programming, news on the Voidspace Python Projects and all things techie.
The Python Object Model Revisited (data descriptors)
A few weeks ago I demonstrated the complexity of the Python object model by fetching docstrings from objects. A while after posting it I thought of a bug - or at least a way in which it could return the wrong result when looking up an attribute on an object. It will probably come as no surprise that this is due to the descriptor protocol.
Descriptors are special types of objects that have __get__ and or __set__ and __delete__ methods and have special behaviour when fetched, set or deleted as object attributes. They are how methods, class methods, static methods, properties and __slots__ are implemented in Python.
Descriptors that have both __get__ and __set__ are called data descriptors (properties are the canonical example), descriptors with only __get__ are non-data descriptors (methods being the canonical example). Data descriptors have interesting behaviour when they are on a class which has the same member in the instance dictionary.
Instance members are stored in the __dict__ attribute of the object. Normally if this instance dictionary has a member then fetching that member will pull it out of the dictionary. The exception is that if the class has a data-descriptor with the same name then that will be invoked instead of the object in the instance dictionary. This is easy to demonstrate:
... @property
... def a(self):
... return 'property'
...
>>> a = A()
>>> a.__dict__['a'] = 'attribute'
>>> a.a
'property'
So a data-descriptor on the class will override a member with the same name on the instance - but the 18 lines of code I wrote before for fetching docstrings from attributes will always look on the instance first.
The same is true for inherited data-descriptors:
...
>>> b = B()
>>> b.__dict__['a'] = 'attribute'
>>> b.a
'property'
Non-data descriptors don't override instance attributes and data-descriptors on a base class don't override normal class attributes on a subclass.
To handle this we need to check both the instance and walk the inheritance hierarchy. If we find the member we are looking for in both then we check the member from the class for a __set__ method. If the member from the class (or one of its base classes) has a __set__ member then we return that - otherwise we return the member from the instance.
Our modified full code that takes this into account has grown to 22 lines and now looks like:
import inspect
def get_doc(obj, member):
found = []
if hasattr(obj, '__dict__') and member in obj.__dict__:
found.append(obj.__dict__[member])
if isinstance(obj, (type, types.ClassType)):
search_order = inspect.getmro(obj)
else:
search_order = inspect.getmro(obj.__class__)
for entry in search_order:
if member in entry.__dict__:
if hasattr(entry.__dict__[member], '__set__'):
return entry.__dict__[member].__doc__
found.append(entry.__dict__[member])
return found[0].__doc__
def get_docstrings(obj):
try:
members = dir(obj)
except Exception:
return [(member, get_doc(obj, member)) for member in members]
Note
In practise there is another exception that we haven't handled here. Although you can override methods with instance attributes (very useful for monkey patching methods for test purposes) you can't do this with the Python protocol methods. These are the 'magic methods' whose names begin and end with double underscores. When invoked by the Python interpreter they are looked up directly on the class and not on the instance (however if you look them up directly - e.g. x.__repr__ - normal attribute lookup rules apply).
There is a corner case (that I alluded to in my previous post), classes can define __slots__ and create a dummy __dict__ member. If this member isn't a dictionary then our code will barf horribly - but really this is such an evil corner case that I'm not going to worry about it.
I have seen one use case for __slots__ in combination with a fake __dict__ member: proxying attribute access. This is a part of the werkzeug web framework - the LocalProxy class defines __dict__ as a property which returns the __dict__ member of the object it is proxying...
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2009-06-22 23:08:08 | |
Categories: Python, Hacking Tags: descriptors, object model
discover: Test discovery for unittest backported to Python 2.4+
I kind of promised you no more entries on unittest for a while, but oh well.
I've backported the test discovery in Python-trunk, what will become Python 2.7 & Python 3.2. Test discovery allows you to run all the unittest based tests (or just a subset of them) in your project without you having to write your own test collection or running machinery. Once installed, test discovery can be invoked with python -m discover. I've tested the discover module with Python 2.4 and 3.0.
Most of the work of backporting was providing an implementation of os.path.relpath (added in Python 2.6) and refactoring the command line handling for standalone use.
The discover module also implements the load_tests protocol which allows you to customize test loading from modules and packages. Test discovery and load_tests are implemented in the DiscoveringTestLoader which can be used from your own test framework.
This is the test discovery mechanism and load_tests protocol for unittest backported from Python 2.7 to work with Python 2.4 or more recent (including Python 3).):
Usage: discover.py [options] Options: -v, --verbose Verbose output -s directory Directory to start discovery ('.' default) -p pattern Pattern to match test files ('test*.py' default) -t directory Top level directory of project (default to start directory) For test discovery all test modules must be importable from the top level directory of the project.
For example to use a different pattern for matching test modules run:
python -m discover -p '*test.py'
(Remember to put quotes around the test pattern or shells like bash will do shell expansion rather than passing the pattern through to discover.)
Test discovery is implemented in discover.DiscoveringTestLoader.discover. As well as using discover as a command line script you can import DiscoveringTestLoader, which is a subclass of unittest.TestLoader, and use it in your test framework.
This method finds and returns all test modules from the specified start directory, recursing into subdirectories to find them. Only test files that match pattern will be loaded. (Using shell style pattern matching.)
All test modules must be importable from the top level of the project. If the start directory is not the top level directory then the top level directory must be specified separately.
The load_tests protocol allows test modules and packages to customize how they are loaded. This is implemented in discover.DiscoveringTestLoader.loadTestsFromModule. If a test module defines a load_tests function then tests are loaded from the module by calling load_tests with three arguments: loader, standard_tests, None.
If a test package name (directory with __init__.py) matches the pattern then the package will be checked for a load_tests function. If this exists then it will be called with loader, tests, pattern.
If load_tests exists then discovery does not recurse into the package, load_tests is responsible for loading all tests in the package.
The pattern is deliberately not stored as a loader attribute so that packages can continue discovery themselves. top_level_dir is stored so load_tests does not need to pass this argument in to loader.discover().
discover.py is maintained in a google code project (where bugs and feature requests should be posted):
The latest development version of discover.py can be found at:
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2009-06-20 19:35:56 | |
Categories: Python, Projects Tags: testing, unittest, discovery
Archives
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
Counter... | http://www.voidspace.org.uk/python/weblog/arch_d7_2009_06_20.shtml?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+voidspace+%28The+Voidspace+Techie+Blog%29 | CC-MAIN-2016-07 | refinedweb | 1,309 | 56.55 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.