Find the Largest Element in an array
#include <stdio.h>

int main() {
    int i, n;
    float arr[100];

    printf("Enter the number of elements (1 to 100): ");
    scanf("%d", &n);

    for (i = 0; i < n; ++i) {
        printf("Enter number%d: ", i + 1);
        scanf("%f", &arr[i]);
    }

    // storing the largest number to arr[0]
    for (i = 1; i < n; ++i) {
        if (arr[0] < arr[i])
            arr[0] = arr[i];
    }

    printf("Largest element = %.2f", arr[0]);
    return 0;
}
Output for a sample run with 4 elements (34.5, 2.4, -35.5, 38.7): Largest element = 38.70
To find the largest element:
- the first two elements of the array are checked, and the larger of the two is placed in arr[0]
- the first and third elements are checked, and the larger of the two is placed in arr[0]
- this process continues until the first and last elements are checked
- the largest number ends up stored in the arr[0] position

We have used a for loop to accomplish this task.
for (i = 1; i < n; ++i) {
    if (arr[0] < arr[i])
        arr[0] = arr[i];
}
On 09/08/2010 06:41 PM, Reuben Martin wrote:
> Yo, back on Wednesday, September 08, 2010 Reuben Martin was all like:
>>.
>>
>> 09-gxf__disabled_AVCI.patch
>>
>> --- ffmpeg-old/libavformat/gxfenc.c	2010-09-08 17:27:04.569000110 -0500
>> +++ ffmpeg-new/libavformat/gxfenc.c	2010-09-08 17:28:29.148000128 -0500
>> @@ -889,6 +889,14 @@
>>             media_info = 'D';
>>         }
>>         break;
>> +#if 0
>> +    case CODEC_ID_H264:
>> +        sc->media_type = 26;
>> +        gxf->flags |= 0x02000000;
>> +        media_info = 'I';
>> +        sc->track_type = 11;
>> +        break;
>> +#endif
>>     default:
>>         av_log(s, AV_LOG_ERROR, "video codec not supported\n");
>>         return -1;

I'm not a big fan of disabled code in svn.

-- 
Baptiste COUDURIER
Key fingerprint 8D77134D20CC9220201FC5DB0AC9325C5C1ABAAA
FFmpeg maintainer
Overview
- We look at the latest state-of-the-art NLP library in this article called PyTorch-Transformers
- We will also implement PyTorch-Transformers in Python using popular NLP models like Google’s BERT and OpenAI’s GPT-2!
- This has the potential to revolutionize the landscape of NLP as we know it
Introduction
“NLP’s ImageNet moment has arrived.” – Sebastian Ruder
Imagine having the power to build the Natural Language Processing (NLP) model that powers Google Translate. What if I told you this can be done using just a few lines of code in Python? Sounds like an incredibly exciting opportunity.
Well – we can now do this sitting in front of our own machines! The latest state-of-the-art NLP release is called PyTorch-Transformers by the folks at HuggingFace. This PyTorch-Transformers library was actually released just yesterday and I’m thrilled to present my first impressions along with the Python code.
Harnessing this research on your own would have taken a combination of years, some of the best minds, and extensive resources. And we get to simply import it in Python and experiment with it. What a time to be alive!
Now, I can’t stress enough the impact that PyTorch-Transformers will have on the research community as well as the NLP industry. I believe this has the potential to revolutionize the landscape of NLP as we know it.
Table of Contents
- Demystifying State-of-the-Art in NLP
- What is PyTorch-Transformers?
- Installing PyTorch-Transformers on our Machine
- Predicting the next word using GPT-2
- Natural Language Generation
- GPT-2
- Transformer-XL
- XLNet
- Training a Masked Language Model for BERT
- Analytics Vidhya’s Take on PyTorch-Transformers
Demystifying State-of-the-Art in NLP
Essentially, Natural Language Processing is about teaching computers to understand the intricacies of human language.
Before we get into the technical details of PyTorch-Transformers, let’s quickly revisit the very concept on which the library is built – NLP. We’ll also understand what state-of-the-art means as that will set the context for the article.
Here are a few things that you need to know before we start with PyTorch-Transformers:
- State-of-the-Art means an algorithm or a technique that is currently the “best” for a task. When we say “best”, we mean these are the algorithms pioneered by giants like Google, Facebook, Microsoft, and Amazon
- NLP has many well-defined tasks that researchers are studying to create intelligent techniques to solve them. Some of the most popular tasks are Language Translation, Text Summarization, Question Answering systems, etc.
- Deep Learning techniques like Recurrent Neural Networks (RNNs), Sequence2Sequence, Attention, and Word Embeddings (Glove, Word2Vec) have previously been the State-of-the-Art for NLP tasks
- These techniques were superseded by a framework called Transformers that is behind almost all of the current State-of-the-Art NLP models
Note: This article is going to be full of Transformers so I’d highly recommend that you read the below guide in case you need a quick refresher:
What is PyTorch-Transformers?
PyTorch-Transformers is a library of state-of-the-art pre-trained models for Natural Language Processing (NLP).
I have taken this section from PyTorch-Transformers’ documentation. This library currently contains PyTorch implementations, pre-trained model weights, usage scripts and conversion utilities for the following models:
- BERT (from Google) released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- GPT (from OpenAI) released with the paper Improving Language Understanding by Generative Pre-Training
- GPT-2 (from OpenAI) released with the paper Language Models are Unsupervised Multitask Learners
- Transformer-XL (from Google/CMU) released with the paper Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
- XLNet (from Google/CMU) released with the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding
- XLM (from Facebook) released together with the paper Cross-lingual Language Model Pretraining
All of the above models are the best in class for various NLP tasks. Some of these models are as recent as the previous month!
Most of the State-of-the-Art models require tons of training data and days of training on expensive GPU hardware which is something only the big technology companies and research labs can afford. But with the launch of PyTorch-Transformers, now anyone can utilize the power of State-of-the-Art models!
Installing PyTorch-Transformers on your Machine
Installing PyTorch-Transformers is pretty straightforward in Python. You can just use pip install:
pip install pytorch-transformers
or if you are working on Colab:
!pip install pytorch-transformers
Since most of these models are GPU heavy, I would suggest working with Google Colab for this article.
Note: The code in this article is written using the PyTorch framework.
Predicting the next word using GPT-2
Because PyTorch-Transformers supports many NLP models that are trained for Language Modelling, it easily allows for natural language generation tasks like sentence completion.
In February 2019, OpenAI created quite the storm through their release of a new transformer-based language model called GPT-2. GPT-2 is a transformer-based generative language model that was trained on 40GB of curated text from the internet.
Being trained in an unsupervised manner, it simply learns to predict a sequence of the most likely tokens (i.e. words) that follow a given prompt, based on the patterns it learned to recognize through its training.

The code is straightforward: we tokenize and index the text as a sequence of numbers and pass it to the GPT2LMHeadModel.
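As a rough, library-free sketch of the decoding step described above: greedy decoding simply picks the highest-scoring candidate token. The prompt and scores here are invented for illustration; the real example uses GPT2Tokenizer and GPT2LMHeadModel from the library to produce such scores.

```python
# Rough sketch of greedy next-word prediction. The prompt and the scores are
# invented for illustration; the real example uses GPT2Tokenizer and
# GPT2LMHeadModel from pytorch-transformers to produce such scores.
prompt = "What is the fastest car in the"

# Hypothetical model scores for candidate next tokens
next_token_scores = {"world": 9.1, "race": 7.4, "garage": 3.2, "sky": 0.8}

# Greedy decoding: pick the candidate with the highest score
predicted_token = max(next_token_scores, key=next_token_scores.get)
print(prompt + " " + predicted_token)  # -> What is the fastest car in the world
```

The library does exactly this over a ~50,000-token vocabulary, using the logits produced by the model's language-modelling head.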
Natural Language Generation using GPT-2, Transformer-XL and XLNet
Let’s take Text Generation to the next level now. Instead of predicting only the next word, we will generate a paragraph of text based on the given input. Let’s see what output our models give for the following input text:
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
We will be using the readymade script that PyTorch-Transformers provides for this task. Let’s clone their repository first:
!git clone
GPT-2
Now, you just need a single command to start the model!
Let’s see what output our GPT-2 model gives for the input text:
The unicorns had seemed to know each other almost as well as they did common humans. The study was published in Science Translational Medicine on May 6. What's more, researchers found that five percent of the unicorns recognized each other well. The study team thinks this might translate into a future where humans would be able to communicate more clearly with those known as super Unicorns. And if we're going to move ahead with that future, we've got to do it at least a
Isn’t that crazy? The text that the model generated is very cohesive and could actually be mistaken for a real news article.
XLNet
XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin. XLNet achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
You can use the following code for the same:
This is the output that XLNet gives:
St. Nicholas was located in the valley in Chile. And, they were familiar with the southern part of Spain. Since 1988, people had lived in the valley, for many years. Even without a natural shelter, people were getting a temporary shelter. Some of the unicorns were acquainted with the Spanish language, but the rest were completely unfamiliar with English. But, they were also finding relief in the valley.<eop> Bioinfo < The Bioinfo website has an open, live community about the
Interesting. While the GPT-2 model focussed directly on the scientific angle of the news about unicorns, XLNet actually nicely built up the context and subtly introduced the topic of unicorns. Let’s see how Transformer-XL performs!
Transformer-XL
Transformer networks are limited by a fixed-length context and thus can be improved through learning longer-term dependency. That’s why Google proposed a novel method called Transformer-XL (meaning extra long) for language modeling, which enables a Transformer architecture to learn longer-term dependency.
Transformer-XL is up to 1800 times faster than a typical Transformer.
You can use the below code to run Transformer-XL:
Here’s the text generated:
both never spoke in their native language ( a natural language ). If they are speaking in their native language they will have no communication with the original speakers. The encounter with a dingo brought between two and four unicorns to a head at once, thus crossing the border into Peru to avoid internecine warfare, as they did with the Aztecs. On September 11, 1930, three armed robbers killed a donkey for helping their fellow soldiers fight alongside a group of Argentines. During the same year
Now, this is awesome. It is interesting to see how different models focus on different aspects of the input text to generate further. This variation is due to a lot of factors but mostly can be attributed to different training data and model architectures.
But there’s a caveat. Neural text generation has been facing a bit of backlash in recent times, as people worry it can increase problems related to fake news. But think about the positive side of it! We can use it for many positive applications, like helping writers and creatives with new ideas, and so on.
Training a Masked Language Model for BERT
The BERT framework, a new language representation model from Google AI, uses pre-training and fine-tuning to create state-of-the-art NLP models for a wide range of tasks. These tasks include question answering systems, sentiment analysis, and language inference.
BERT is pre-trained using the following two unsupervised prediction tasks:
- Masked Language Modeling (MLM)
- Next Sentence Prediction
And you can implement both of these using PyTorch-Transformers. In fact, you can build your own BERT model from scratch or fine-tune a pre-trained version. So, let’s see how can we implement the Masked Language Model for BERT.
Problem Definition
Let’s formally define our problem statement:
Given an input sequence, we will randomly mask some words. The model then should predict the original value of the masked words, based on the context provided by the other, non-masked, words in the sequence.
So why are we doing this? The model learns the rules of the language during the training process. And we’ll soon see how effective this process is.
First, let’s prepare a tokenized input from a text string using BertTokenizer:
This is what our text looks like after tokenization:
The next step would be to convert this into a sequence of integers and create PyTorch tensors of them so that we can use them directly for computation:
Notice that we have set [MASK] at the 8th index in the sentence, which is the word ‘Henson’. This is what our model will try to predict.
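Under the hood, the masking step just swaps one entry of the token list for [MASK]. Below is a minimal sketch, assuming the tokenization of the classic “Jim Henson” sentence used in the library’s documentation (the real code uses BertTokenizer to produce this list):

```python
# Sketch of the masking step, assuming the tokenization of the "Jim Henson"
# sentence from the pytorch-transformers documentation (the real code uses
# BertTokenizer to produce this list).
tokenized_text = ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]',
                  'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[SEP]']

masked_index = 8
original_token = tokenized_text[masked_index]  # the token the model must recover
tokenized_text[masked_index] = '[MASK]'
print(original_token, tokenized_text)
```

Index 8 is the second occurrence of ‘henson’, which matches the prediction shown further below.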
Now that our data is properly pre-processed for BERT, we will create a Masked Language Model. Let’s now use BertForMaskedLM to predict a masked token:
Let’s see what the output of our model is:
Predicted token is: henson
That’s quite impressive.
This was a small demo of training a Masked Language Model on a single input sequence. Nevertheless, it is a very important part of the training process for many Transformer-based architectures. This is because it allows bidirectional training in models – which was previously impossible.
Congratulations! You’ve just implemented your first Masked Language Model! If you were trying to train BERT, you just finished half your work. This example will have given you a good idea of how to use PyTorch-Transformers to work with the BERT model.
Analytics Vidhya’s take on PyTorch-Transformers
In this article, we implemented and explored various State-of-the-Art NLP models like BERT, GPT-2, Transformer-XL, and XLNet using PyTorch-Transformers. This was more like a first-impressions experiment that I did to give you a good intuition on how to work with this amazing library.
Here are 6 compelling reasons why I think you would love this library:
- Pre-trained models: It provides pre-trained models for 6 State-of-the-Art NLP architectures and pre-trained weights for 27 variations of these models
- Preprocessing and Finetuning API: PyTorch-Transformers doesn’t stop at pre-trained weights. It also provides a simple API for doing all the preprocessing and finetuning steps required for these models. Now, if you have read recent research papers, you’d know many of the State-of-the-Art models have unique ways of preprocessing the data and a lot of times it becomes a hassle to write code for the entire preprocessing pipeline
- Usage scripts: It also comes with scripts to run these models against benchmark NLP datasets like SQUAD 2.0 (Stanford Question Answering Dataset), and GLUE (General Language Understanding Evaluation). By using PyTorch-Transformers, you can directly run your model against these datasets and evaluate the performance accordingly
- Multilingual: PyTorch-Transformers has multilingual support. This is because some of the models already work well for multiple languages
- TensorFlow Compatibility: You can import TensorFlow checkpoints as models in PyTorch
- BERTology: There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)
Have you ever implemented State-of-the-Art models like BERT and GPT-2? What’s your first take on PyTorch-Transformers? Let’s discuss in the comments section below.
21 Comments
Great article Mohd Sanad Zaki Rizvi. Thanks for sharing this work.
Hey Vaibhav glad you liked it!
Nice article..
Suneel thanks for your feedback! 🙂
Simple and rich article! Nice work 🙂
would the same work for other languages say Hindi or Urdu???
Awesome article, thanks man.
Can you please guide me to implement in same manner for Q&A part.
Hey Mahi,
I haven’t explored the QA part yet but you can look up the documentation here:
Glad to see another post introducing this awesome open source projects!
For those who want to handle Chinese text, there is a Chinese tutorial on how to use BERT to fine-tune multi-label text classification task with the package. Hope we can get more people involved.
Awesome! i feel enlightened..
Could you pl share link to some videos which elaborate the maths behind Transformers.
Hey, Pankaj glad that you liked the article! You can check out this video from Stanford for understanding the underlying principles of Transformers
very informative.
Hey Ashvika,
Glad you liked it!
how to import XLM models?
The approach will be similar to what we have done.. you can read more in the documentation:
Hey Rizvi, Great article.
I had a problem applying GPT-2.
When I try to run it, this error appears:
(base) C:\Users\Marco>python pytorch-transformers/examples/run_generation.py
Traceback (most recent call last):
File “pytorch-transformers/examples/run_generation.py”, line 25, in
import torch
ModuleNotFoundError: No module named ‘torch’
Conda and all of packages are updated.
Do I need a GPU?
Thanks!
Hey Antonio,
You do need a GPU but this is not a GPU error. This is the error because you do not have “torch” installed which is the pre-requisite for Pytorch-Transformers.
That’s an amazing article on latest breakthroughs in Natural Language Processing. Thank you!
I am not able to comprehend the max sequence length of 512 in BERT. Does it mean I will not be able to build a classifier if documents are long (e.g. having more than 1000 words)?
Hey Satish,
Let’s say you have:
the man went to the store and bought a gallon of milk
And had max_seq_length = 6, stride = 3, then you could split it up like this:
the man went to the store
to the store and bought a
and bought a gallon of milk
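The splitting above can be sketched as a simple sliding window over the words:

```python
# Sliding-window split matching the example above: windows of 6 words,
# advancing 3 words at a time.
def split_with_stride(text, max_seq_length=6, stride=3):
    words = text.split()
    return [" ".join(words[i:i + max_seq_length])
            for i in range(0, len(words) - max_seq_length + 1, stride)]

chunks = split_with_stride("the man went to the store and bought a gallon of milk")
for chunk in chunks:
    print(chunk)
```

Real BERT tokenization works on subword tokens rather than whitespace-separated words, so treat this only as the shape of the idea.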
You’ll have to be a little careful though. You can read more at this thread:
The exact implementation is task-specific of course. | https://www.analyticsvidhya.com/blog/2019/07/pytorch-transformers-nlp-python/ | CC-MAIN-2021-31 | refinedweb | 2,762 | 52.7 |
In our series on video APIs, we have so far examined YouTube and Dailymotion. In this post we provide a step-by-step guide to using the Vimeo API.
In some ways, Vimeo can be considered a niche product: It forbids the upload of any content not uniquely created by its users, and thus excludes commercial video, videogame trailers, and the like. However, at the time this article was written, the site registered hundreds of millions of unique visitors every month (approximately 10% of YouTube’s monthly traffic) and had more than 20 million registered users. Indeed, Vimeo was actually launched before YouTube, and it was the first service to support HD playback.
Three APIs at the price of one
We’ll start our journey at the Vimeo developers home page.
As you can see, there are several entries in the menu, and a few different APIs. As with the other video-sharing services we have explored, there is a Data API and a Player API. However, there are two versions of the Vimeo Data API. (Both look the same but are very different.)
- The Vimeo Data API (version 2) uses OAuth (v1) and offers different response formats (XML, JSON, JSONP, PHP) and two subsets:
- A Simple API that can be used, without authentication, to obtain information about public videos, users, groups, channels, albums and activity. You might expect to be able to query the video database using the Simple API, but you can’t. You can only retrieve information about videos created by specific users; if you need to perform searches, you must authenticate. This setup makes it easier to keep traffic and quotas under control: If you make too many requests in a brief amount of time, in fact, your developer account will be suspended. Responses by the Simple API are also limited to a maximum of 20 items. If you need to retrieve more items, you must authenticate.
- The Advanced API allows users to perform every other operation without restrictions, after authenticating.
- The Vimeo New API (version 3) uses OAuth 2 and returns results in JSON format. The main difference between this version of the API and version 2 is that this version is a RESTful API.
To clarify the difference between the two Vimeo APIs consider the search for videos about NASA:
- With API v2, the URL to fetch would be.
- With API v3, users instead would need to fetch
Documentation on the Vimeo developers website includes the complete list of endpoints (i.e., methods) for Data API v2 and API v3; for the old API, a playground is also available to test any method with the parameters you are actually going to use. (See, for example, the videos.search method.)
However, if you are expecting anything like the DailyMotion explorer, you will be greatly disappointed. This tool is not as easy to use, nor as useful, as DailyMotion’s. For example, to use OAuth you’ll need to pass the authentication parameters in the headers, so you can’t just build the URL you need (as you can with DailyMotion) and be ready to go. Maybe this is also why, instead of returning a clear URL that would help you understand how the interface works, Vimeo’s playground tool just prints the (escaped) GET request. No drama--except, maybe, for novices--but it is an unnecessary complication nonetheless.
Authentication
Whether you use Data API v2 or v3, the first thing you will need is to register and then create one or more applications. Once you have registered, browse to the Apps Center-- you can follow the “My Apps” link in the developers home page.
Users can create as many new apps as they need and access any existing apps. Users are discouraged from using the same app profile for different pages/applications. For each profile, users will find a page containing three tabs with the information and the parameters needed to properly perform authentication:
As mentioned, in this article we will focus on OAuth 2 authentication; in particular, we will need the unauthenticated authorization header (see image, above).
The new API allows for two authentication workflows for applications, depending on how people will interact with Vimeo user accounts:
- Single-user applications => Unauthorized authentication
This workflow is best suited for applications that do not need to access individual users’ private data or just need to access a single user’s data (app owners). Authentication can be performed without involving users at all, exclusively on the server side. It is possible to use the authorization header provided for the app (a sort of Developer KEY), but in this case the scope of the application can only be one of "public", "private" and "upload". Otherwise, the unauthenticated access token can be generated dynamically for each request using the client identifier and client secret. (We’ll show you exactly how to do that in the next section.)
- Multiuser applications => Authentication requests
This workflow is the best choice when an application allows registered Vimeo users to interact with their accounts, so that each user will need authentication. This protocol is a bit more complicated because it requires the active involvement of users: Users will have to log into their accounts; client traffic will be first redirected to the Vimeo authentication service, and then back to your application, which will therefore have to manage this extra step. (This kind of request goes beyond our example needs, so we won’t examine it in depth for the moment.)
At this point, it should be easy to set everything up for a request, right? Well, almost.
If you are using PHP, there is an official library for Advanced API (v2); if you are using languages such as Python, Ruby or C++, there are a few unofficial libraries, as well.
For version 3, there are not one but three official libraries: for PHP, Python and Nodejs. Since we developed our example using Python, we should be in good shape, right? Well, again, almost.
The python-vimeo library is well-crafted: All you need to do is create an object (as shown in the self-explaining examples that go with the library), pass to its constructor your client_id and client_secret, and then call the methods you need.
The only problem is that this library relies on Tornado client, and in particular on _ctypes, and, unfortunately, this is one of the C modules that isn’t supported by Google App Engine for Python 2.7. (See the complete list of supported modules here.)
So, if you are using a different Python framework, you are almost good to go. Otherwise, on GAE you’ll have to sort out the details.
Exacerbating the issue is the fact that support on Vimeo is just not that great. I wasn’t able to find a single complete example of the whole process, from authentication to request, and forums don’t help much, either. API support on Vimeo’s forum was discontinued a year ago. There are other channels listed on the help page, but it’s hard to find specific answers.
Workflow to call an API method
We will now walk step by step through the single-user application workflow:
You can either use the authenticated header on your app page (Step 1b - the header can also be built using the access token on the same page) …
… or you can use the unauthenticated header provided to obtain a dynamic access token (step 1b- you can also build the header using client identifier and client secret provided on the same page, after encoding them base64).
Step 1a is preferable when you need a particular scope, like create, since, as mentioned, only public, private and upload scopes are available for the static token. To show every single step, we start with our cid (Client Identifier) and secret, encode them base64, and then combine them to create the same unauthenticated header shown above:
def get_access_token(self, cid, secret, api_url=''):
    encoded = base64.b64encode("%s:%s" % (cid, secret))
    payload = {
        "grant_type": "client_credentials",
        "scope": "public create"
    }
    headers = {
        "Accept": "application/vnd.vimeo.*+json; version=3.0",
        "Authorization": "basic %s" % encoded
    }
    response = urlfetch.fetch(api_url,
                              method="POST",
                              headers=headers,
                              payload=urlencode(payload),
                              )
    if response.status_code != 200:
        raise ValueError(response.status_code)
    else:
        return json_loads(response.content)

As you might have noticed, we need to:
- Use the HTTP POST method;
- Pass two headers:
- “Accept” clarifies that we are seeking authorization for API version 3 and want JSON results.
- “Authorization” contains the access token created from Client Identifier and Client Secret.
- The payload—that is, the body of the request—must contain a grant_type parameter with value “client_credentials” (see here); it can also contain a scope parameter with a space-separated list of valid scopes. And don’t forget to properly encode the payload before passing it to urlfetch.
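For reference, the same header and payload can be assembled with nothing but the standard library. This sketch uses Python 3 syntax (unlike the article’s own Python 2 / App Engine snippets) and placeholder credentials, and omits the request itself:

```python
import base64
from urllib.parse import urlencode

# Placeholder credentials: substitute your app's Client Identifier and Secret
cid, secret = "cid", "secret"
token = base64.b64encode(("%s:%s" % (cid, secret)).encode()).decode()

headers = {
    "Accept": "application/vnd.vimeo.*+json; version=3.0",
    "Authorization": "basic %s" % token,
}
payload = urlencode({"grant_type": "client_credentials", "scope": "public create"})
print(headers["Authorization"])
print(payload)
```

Note that urlencode takes care of escaping the space in the scope list for you.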
The response from Vimeo (held in the content field of the object returned by urlfetch), will contain three fields:
{
"access_token": "ACCESS_TOKEN",
"scope": "public create",
"token_type": "bearer"
}
We are mainly interested in the access_token field, as we will need to pass it in the header of our requests to the Vimeo API’s methods.
VIMEO_OAUTH_HEADERS = {
    'Accept': 'application/vnd.vimeo.*+json;version=3.0',
    'Authorization': ('bearer %s' %
                      self.get_access_token(cid=CLIENT_ID, secret=CLIENT_SECRET)['access_token'])
}
url = '?' + '&'.join(["%s=%s" % (k, str(v)) for k, v in search_params.items()])
search_response = urlfetch.fetch(url, method="GET", headers=VIMEO_OAUTH_HEADERS)
if search_response.status_code == 200:
    search_response = json_loads(search_response.content)
Here, we prepare the header for the second step. It basically differs only for the “Authorization” field, which is going to contain the access token generated during the previous step.
All we need to do at this point is build the URL for the API method we need to call, and fetch it passing that header.
The response returned by Vimeo will have the following structure (some parts are omitted to improve readability):
We are going to use the “data” field to build our response, since it contains the list of items retrieved; at the top level of the JSON response, however, we can also find a few more useful fields:
- total - The total number of results matching our query (in this case, just 4)
- page - The current page number (results are paginated in the same way that DailyMotion does it)
- per_page - The number of items per page
- paging - A collection of useful links to navigate through the results pages. For example, you wouldn’t need to generate the URL for the next page of results; you could just use search_response.paging.next.
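Here is a small sketch of how those paging links can drive iteration over all result pages. The responses are mocked dictionaries standing in for real API calls, and the URLs and video titles are invented for illustration:

```python
# Mocked responses keyed by URL, imitating the "paging" structure described
# above (URLs and video titles are invented for illustration).
pages = {
    "/videos?query=nasa&page=1": {
        "data": ["video A", "video B"],
        "paging": {"next": "/videos?query=nasa&page=2", "previous": None},
    },
    "/videos?query=nasa&page=2": {
        "data": ["video C"],
        "paging": {"next": None, "previous": "/videos?query=nasa&page=1"},
    },
}

def fetch_all(start_url):
    url, items = start_url, []
    while url is not None:
        response = pages[url]  # a real client would GET this URL with the OAuth headers
        items.extend(response["data"])
        url = response["paging"]["next"]
    return items

print(fetch_all("/videos?query=nasa&page=1"))  # -> ['video A', 'video B', 'video C']
```

The loop simply follows paging.next until it is null, so you never have to compute page numbers yourself.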
Putting everything together
The logic of our vimeo.py module will largely follow the structure used for the dailymotion.py module, taking into account the OAuth workflow and a few other differences:
- Videos can’t be filtered by country: therefore, this option could either not be provided for Vimeo or the filter could just be ignored;
- The sorting parameters are pretty different, in comparison to YouTube and DailyMotion (and even to Vimeo API v2). There is no complete list of valid parameters in the documentation, but, basically, results can be sorted according to any of the fields in the response;
- As with DailyMotion, you can’t pass the query parameter when calling the related method to retrieve related videos. You will probably need to take extra care implementing this feature: Always use documentation as a reference, but version 3 of the API has been recently released, so it's still undergoing some improvement, both to the implementation of the library itself and to documentation.
You can find the gist with all the updated modules here, and, as always, the final result is available online. | http://www.programmableweb.com/news/how-to-search-videos-vimeo-hosting-service/how-to/2014/08/12 | CC-MAIN-2014-42 | refinedweb | 1,936 | 50.16 |
I am currently creating a RESTful web service in Python using Flask. On the client side that consumes the web service APIs, I want to get the output in XML (or JSON) format. Do you have any ideas on how to do this? I already tried jsonify but with no success. Also, I would prefer XML output, but again, I don't know how to do it. So I hope someone can give me ideas.
Below are dummy code snippets to hopefully clarify my question:
# webservice
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def hello_world():
    return jsonify(message="hello world!")

if __name__ == "__main__":
    app.run()

# client code
import urllib2

server = ""
req = urllib2.Request(server)
# req has no data at all :(
Hoping to receive feedback. Thanks in advance.
The server code runs fine. You should test it with a normal web browser and you will see the JSON response. Your client code isn't complete. Here is my correction:
import urllib2

server = ""
req = urllib2.Request(server)
response = urllib2.urlopen(req)
print response.read()
A better way to make HTTP requests in Python is to use the requests module, which provides a very simple but very powerful API.
import requests

res = requests.get("")
print res.text
To build an XML response I would recommend lxml with its etree module. There is also an etree module in the standard library, under xml.etree.
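As a quick sketch of the XML route with the standard library (Python 3 syntax; in Flask you would return this string, e.g. with mimetype "application/xml"):

```python
import xml.etree.ElementTree as ET

# Build <response><message>hello world!</message></response>
root = ET.Element("response")
ET.SubElement(root, "message").text = "hello world!"
xml_body = ET.tostring(root, encoding="unicode")
print(xml_body)
```

The element and tag names here are just an example shape for the payload, not anything Flask requires.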
PyX — Example: bargraphs/minimal.py
Minimal bar-graph example
from pyx import *

g = graph.graphxy(width=8, x=graph.axis.bar())
g.plot(graph.data.file("minimal.dat", xname=0, y=2), [graph.style.bar()])
g.writeEPSfile("minimal")
g.writePDFfile("minimal")
g.writeSVGfile("minimal")
Description
For a minimal bar plot you have to set a bar axis in the graph constructor and provide Xname column data (X stands for the bar axis to be used). Here, we just use column 0, which is automatically filled by graph.data.file with the line number of the corresponding entry. Furthermore, you need to specify the graph style, since the default graph styles symbol and function (depending on the data type) are not appropriate for bar graphs.
Note that bar graphs differ from other xy-graphs in that they use discrete axes for one graph dimension. However, the only affected components of this fundamental change are one of the axes, which needs to become a discrete one, i.e. a bar axis, and the usage of appropriate graph styles.
A bar graph is fundamentally different from a graph with a histogram style in its usage of a discrete axis in one graph dimension. A histogram instead is created using continuous axes in all graph dimensions and just drawing the data in a specific bar-graph-like presentation. In particular, the discreteness of the bar axis is reflected in the naming of its column name: instead of the continuous "X" it expects an "Xname" (where X stands for the bar axis used) as mentioned above.
As all axes and graph dimensions in the PyX graph system are treated equally, all you need to modify to get a bar graph with horizontal bars is to assign the bar axis to the y-axis in the graph constructor and change the names of the data columns to yname and x.
By using the bar style you implicitly also select a different positioning style, namely barpos. This positioning style handles a single nested axis with sub-axis values in the range from 0 to 1.
This is a Java Program to Read Two Integers M and N & Swap their Values.
Enter any two integers as input. We then declare a new variable temp and copy the first variable into it. Next we copy the second variable into the first, and temp into the second. Hence we get the swapped values as output.
Here is the source code of the Java Program to Read Two Integers M and N & Swap their Values. The Java program is successfully compiled and run on a Windows system. The program output is also shown below.
import java.util.Scanner;

public class Swap_Integers
{
    public static void main(String args[])
    {
        int m, n, temp;
        Scanner s = new Scanner(System.in);
        System.out.print("Enter the first number:");
        m = s.nextInt();
        System.out.print("Enter the second number:");
        n = s.nextInt();
        temp = m;
        m = n;
        n = temp;
        System.out.println("After Swapping");
        System.out.println("First number:"+m);
        System.out.println("Second number:"+n);
    }
}
Output:
$ javac Swap_Integers.java
$ java Swap_Integers
Enter the first number:5
Enter the second number:7
After Swapping
First number:7
Second number:5
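For comparison, the same three-step swap through a temporary variable can be sketched in Python (illustration only, not part of the Java program above):

```python
def swap(m, n):
    # Classic three-step swap through a temporary variable.
    temp = m
    m = n
    n = temp
    return m, n

print(swap(5, 7))  # (7, 5)
```

Python can also swap without the temporary via tuple unpacking: `m, n = n, m`.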
Sanfoundry Global Education & Learning Series – 1000 Java Programs.
Recently.
The idea was simple enough, though it still feels like magic to me. Out of curiosity, I spent some time partially implementing the algorithm in 2D. There are parts I don't quite understand, for instance how to prioritize which tree to use (this is what separates a rockstar engineer from a n00b like me). I can't read C++, so the code I am showing below is completely my own (which is why it is not optimized, unlike the original library). Also, the code only works for points in 2D, simply because this is just a toy for learning.
The most important part is the tree building, which is shown in the code below.
from random import sample, randint, random
from math import floor, pow, fabs, sqrt
from uuid import uuid4
from numpy import argmin
import matplotlib.pyplot as plt
from itertools import chain
import time

def middle(points):
    return (points[0][0] + points[1][0]) / 2, (points[0][1] + points[1][1]) / 2

def m(points):
    return (points[1][1] - points[0][1]) / (points[1][0] - points[0][0])

def normal(_middle, _m):
    normal_m = -pow(_m, -1)
    def _(point):
        y = normal_m * (point[0] - _middle[0]) + _middle[1]
        return point[1] - y
    return _

def split_points(points):
    result = sample(points, 1)
    while(True):
        point_b = sample(points, 1)[0]
        if point_b[0] - result[0][0] != 0 and point_b[1] - result[0][1] != 0:
            result.append(point_b)
            break
    return result

def tree(points):
    result = {}
    if len(points) <= 5:
        result = {
            'type': 'leaf',
            'count': len(points),
            'uuid': uuid4(),
            'children': points
        }
    else:
        split = split_points(points)
        branching_func = normal(middle(split), m(split))
        positive = []
        negative = []
        for point in points:
            if branching_func(point) > 0:
                positive.append(point)
            else:
                negative.append(point)
        result = {
            'type': 'branch',
            'func': branching_func,
            'count': len(points),
            'uuid': uuid4(),
            'children': [tree(negative), tree(positive)]
        }
    return result
So the implementation follows the slide as much as possible. I first randomly pick two points, then I find a perpendicular line in between them to separate all the points. For obvious reasons I didn't select points that ended up forming a horizontal or vertical line (parallel to the x or y axis). Points that lie on either side of the line are grouped separately. Keep repeating the process until no more than 5 points remain in a group.
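To see the split step in isolation, here is a small self-contained sketch mirroring the middle/m/normal helpers above: pick two points, build the perpendicular line through their midpoint, and confirm that the two picked points land on opposite sides of it.

```python
def side_of_bisector(a, b):
    # Returns a classifier f(p): the sign of f(p) tells which side of
    # the perpendicular bisector of segment a-b the point p falls on.
    mx, my = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0  # midpoint
    slope = (b[1] - a[1]) / float(b[0] - a[0])         # slope of a-b
    normal_m = -1.0 / slope                            # perpendicular slope
    def f(p):
        return p[1] - (normal_m * (p[0] - mx) + my)
    return f

f = side_of_bisector((0, 0), (2, 2))
print(f((0, 0)), f((2, 2)))  # opposite signs: -2.0 2.0
```

Every other point is then assigned to the positive or negative group by the sign of f, exactly as in the tree() function above.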
The generated clusters. Each color represents a cluster.
While writing the code above, I did some quick revision of linear algebra because I wasn't quite sure how to get the slope value (m). I am quite happy with the end product (though it could really use some optimization).
So now that building a tree is possible, next is to attempt searching.
def distance(alpha, beta):
    return sqrt(pow(alpha[0] - beta[0], 2) + pow(alpha[1] - beta[1], 2))

def leaves_nearest(point, tree, threshold):
    result = []
    if tree['type'] == 'leaf':
        result.append(tree)
    else:
        delta = tree['func'](point)
        if delta > 0:
            result = leaves_nearest(point, tree['children'][1], threshold)
        elif fabs(delta) <= threshold:
            result = leaves_nearest(point, tree['children'][0], threshold) + leaves_nearest(point, tree['children'][1], threshold)
        else:
            result = leaves_nearest(point, tree['children'][0], threshold)
    return result

def search_tree(query, nleaves):
    candidates = list(chain.from_iterable([leaf['children'] for leaf in nleaves]))
    distances = [distance(query, point) for point in candidates]
    idx_min = argmin(distances)
    return (distances[idx_min], candidates[idx_min])
The way searching works is to first find the leaf nodes (I am bad at using the right terms to describe things) containing only points that are nearest to the query point. We do this by following the tree hierarchy, feeding the point to the branching function at each branch. However, it is still possible to have the closest point assigned to another leaf node. In order to handle that case, I added a threshold parameter, so that if the query point lies only slightly below the line, then it passes the check too. Therefore, instead of getting just one leaf node (the one where the query point is located), it is possible to get a number of neighbouring nodes too.
By using this method, instead of comparing the query point to every point in the space, I only need to compare probably just tens of them (depending on how generous I am with the threshold). For comparison purposes, I also wrote a brute-force search function.
def search_brute(query, points):
distances = [distance(query, point) for point in points]
idx_min = argmin(distances)
return (distances[idx_min], points[idx_min])
So finally a quick comparison.
points = []
print('Generating Points')
for _ in range(10000):
    points.append(tuple([randint(0, 999) for __ in range(2)]))

print('Building Tree')
_tree = tree(points)

from pprint import pprint

query = tuple([randint(0, 999) for __ in range(2)])
print('Given Query {}'.format(query))

print('Cluster Answer')
t0 = time.clock()
nleaves = leaves_nearest(query, _tree, 250)
canswer = search_tree(query, nleaves)
print('Search took {} seconds'.format(time.clock() - t0))
pprint(canswer)

print('Global Answer')
t0 = time.clock()
ganswer = search_brute(query, points)
print('Search took {} seconds'.format(time.clock() - t0))
pprint(ganswer)
And the output
Though I needed to traverse the tree to find the leaf nodes before doing the actual comparison, the whole search process is still close to 13 times faster. I am very impressed indeed. Even though my re-implementation is not a faithful 100% port, I think I know why Annoy is so fast.
One thing I could do better, besides optimizing the code, is probably the threshold part. I should have measured the closest distance from a point to the line instead of calculating how far the point is below the line. However, I am already quite happy with the result. Just a quick visualization on how cool it is.
The query point is denoted by the filled circle. Then the larger cross (X) is the nearest point to the query point. Points that are considered as neighbours to the query points are colour-coded. Each colour represents a cluster. For clarity purposes, points from other irrelevant clusters are in same colour (sorry for my mixed spelling of color/colour throughout the post).
The idea can possibly apply to problems in larger dimensions beyond 2D, but I probably will just stop here. | https://cslai.coolsilon.com/2016/01/13/re-implementing-approximate-nearest-neighbour-search/ | CC-MAIN-2018-51 | refinedweb | 1,018 | 61.97 |
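Following up on the threshold caveat above: the proper quantity to compare against the threshold is the perpendicular distance from the query point to the splitting line, not the vertical offset that the code measures. A hedged sketch of what that would look like (not how the code above actually works):

```python
from math import sqrt

def point_line_distance(p, m, x0, y0):
    # Perpendicular distance from point p to the line through (x0, y0)
    # with slope m, written in the form a*x + b*y + c = 0.
    a, b, c = m, -1.0, y0 - m * x0
    return abs(a * p[0] + b * p[1] + c) / sqrt(a * a + b * b)

# Line y = x through the origin; point (0, 2) is sqrt(2) away.
print(point_line_distance((0, 2), 1.0, 0.0, 0.0))  # ~1.4142
```

Note the difference: for the point (0, 2) the vertical offset from y = x is 2, while the true perpendicular distance is only sqrt(2), so the vertical-offset check over-counts for steep lines and under-counts for shallow ones.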
Provided by: libncarg-dev_6.3.0-6build1_amd64
NAME
CGM_open, CGM_close, CGM_lseek, CGM_read, CGM_write, CGM_directory, CGM_freeDirectory, CGM_printDirectory, CGM_getInstr, CGM_flushGetInstr, CGM_putInstr, CGM_flushOutputInstr, CGM_initMetaEdit, CGM_termMetaEdit, CGM_copyFrames, CGM_deleteFrames, CGM_mergeFrames CGM_moveFrames, CGM_readFrames, CGM_valid, CGM_writeFile, CGM_writeFrames, CGM_appendFrames - Computer Graphics Metafile operations
SYNTAX
#include <cgm_tools.h>

Cgm_fd CGM_open(metafile, size, flags, mode)
char *metafile;
unsigned size;
int flags;
int mode;

int CGM_close(cgm_fd)
Cgm_fd cgm_fd;

int CGM_lseek(cgm_fd, offset)
Cgm_fd cgm_fd;
int offset;

int CGM_read(cgm_fd, buf)
Cgm_fd cgm_fd;
unsigned char *buf;

int CGM_write(cgm_fd, buf)
Cgm_fd cgm_fd;
unsigned char *buf;

Directory *CGM_directory(cgm_fd)
Cgm_fd cgm_fd;

void CGM_freeDirectory(dir)
Directory *dir;

void CGM_printDirectory(dir)
Directory *dir;

int CGM_getInstr(cgm_fd, instr)
Cgm_fd cgm_fd;
Instr *instr;

void CGM_flushGetInstr(cgm_fd)
Cgm_fd cgm_fd;

int CGM_putInstr(cgm_fd, instr)
Cgm_fd cgm_fd;
Instr *instr;

int CGM_flushOutputInstr(cgm_fd)
Cgm_fd cgm_fd;

Directory *CGM_initMetaEdit(metafile, size)
char *metafile;
unsigned int size;

int CGM_termMetaEdit()

Directory *CGM_copyFrames(start, num, target)
unsigned int start;
int num;
unsigned int target;

Directory *CGM_deleteFrames(start, num)
unsigned int start, num;

Directory *CGM_mergeFrames(bottom, top)
unsigned bottom, top;

Directory *CGM_moveFrames(start, num, target)
unsigned int start, num, target;

Directory *CGM_readFrames(metafile, start, num, target, size)
char *metafile;
unsigned int start;
int num;
unsigned int target, size;

int *CGM_validCGM(metafile)
char *metafile;

int CGM_writeFile(metafile)
char *metafile;

int CGM_writeFrames(metafile, start, num)
char *metafile;
unsigned start, num;

int CGM_appendFrames(metafile, start, num)
char *metafile;
unsigned start, num;
DESCRIPTION
The argument cgm_fd refers to a valid file descriptor created for reading or writing, as appropriate, by CGM_open. CGM_read, CGM_directory, CGM_getInstr and CGM_flushGetInstr require a file descriptor open for reading. CGM_write, CGM_getInstr, CGM_flushGetInstr and CGM_flushOutputInstr require a Cgm_fd open for writing. CGM_close and CGM_lseek will accept any valid Cgm_fd. The size argument refers to the CGM record size in bytes. For an NCAR CGM this value is 1440. buf is a pointer to user allocated memory of size size. This storage will be used for buffering input and output of CGM_read and CGM_write respectively. The dir argument is a pointer to a Directory structure created with CGM_directory or CGM_initMetaEdit. dir is a private resource that should NOT be directly modified by the user. A set of convenience macros is provided for this purpose in cgm_tools.h. The start, num and target arguments are used to address frame numbers in a metafile being edited with one of the commands: CGM_copyFrames, CGM_deleteFrames, CGM_readFrames, CGM_moveFrames, CGM_writeFrames and CGM_mergeFrames. The start argument is the first frame in a sequence of num frame(s) to perform the editing operation on. target is similar to start and is used by commands that require two frame addresses, such as copy. Addressing begins at zero.

CGM_open
This command is modeled after the unix open command. It will open a CGM for reading or writing as specified by the flags argument and return a Cgm_fd file descriptor. The flags and mode parameters are passed directly on to the system open command. For a detailed explanation of these two arguments see open(2).

CGM_close
Delete a file descriptor. The inverse of CGM_open. See close(2).

CGM_read
CGM_read attempts to read size bytes from the object referenced through the descriptor cgm_fd. size is set at the creation of cgm_fd by CGM_open. CGM_read returns the number of bytes successfully read.
A zero is returned on EOF and a negative number implies an error occurred. The unix system call read is called by CGM_read. See read(2).

CGM_write
Attempts to write a single record of size bytes from buf to the object referenced by cgm_fd, where size is the record size parameter provided at the creation of cgm_fd. CGM_write returns the number of bytes successfully written. A negative return number implies an error occurred. The unix system call write is called by CGM_write. See write(2).

CGM_lseek
Advance the file pointer of cgm_fd to offset bytes. Upon successful completion the current file pointer offset is returned. A negative return value is an error. The unix system call lseek is called by CGM_lseek. See lseek(2).

CGM_directory
Create a table of contents for the metafile referenced by cgm_fd. Return a pointer to this table of type Directory. The contents of the directory include number of metafiles, number of frames, record offset for each frame, frame length in records, optional frame description and metafile status. These fields are meant to be read only and should only be referenced by the convenience macros provided in cgm_tools.h. A NULL pointer is returned on failure.

CGM_freeDirectory
Free memory allocated to a directory created by CGM_directory or CGM_initMetaEdit.

CGM_printDirectory
Print the contents of a directory pointed to by dir to the standard output.

CGM_getInstr
Fetch the next instruction in the file referenced by cgm_fd and convert it into a usable format pointed to by instr. CGM_getInstr provides an interface to the metafile for extracting CGM elements. The user need not be concerned with the binary format of the metafile. The fields of the Instr are as described in cgm_tools.h. The user should note that the maximum allowable data length returned in a single invocation is 32760 bytes. The CGM standard allows up to 32767 bytes to be stored in a single instruction. But 32767 is not a nice number to work with.
Should the data length of a CGM instruction exceed 32760 bytes, indicated by the boolean more flag, the next invocation of CGM_getInstr will return the remaining data up to the same limit, etc. CGM_getInstr requires a valid Cgm_fd open for reading. For a description of CGM see the ANSI standard.

CGM_flushGetInstr
Flush the input buffer used by CGM_getInstr. CGM_getInstr buffers the contents of the CGM and only performs actual reads as necessary. If the user desires other than sequential read access to a CGM it becomes necessary to flush the input buffer before reading from a new location.

CGM_putInstr
The analog to CGM_getInstr. This function buffers CGM instructions to be written to a CGM referenced by cgm_fd. Again the user need not be concerned with the binary format of the file. Writes are performed sequentially in records of size size, as specified during the creation of cgm_fd. The same data length constraints that are placed on CGM_getInstr hold for CGM_putInstr. If the user wants to output instructions with a data length greater than 32760 bytes, the data must be broken up into blocks no greater than this size. The user must also set the boolean more flag in the Instr. cgm_fd must be a valid file descriptor open for writing. For a description of the fields of the Instr see the file cgm_tools.h.

CGM_flushOutputInstr
Flush the output buffer used by CGM_putInstr for the file referenced by cgm_fd. It is necessary to explicitly flush the output buffer used by CGM_putInstr before the file is closed or any random access is performed. Otherwise not all CGM elements will actually get written.

CGM_initMetaEdit
Initialize a metafile for editing. This is the initialization routine for the higher level editing routines contained in this package: CGM_copyFrames, CGM_deleteFrames, CGM_readFrames, CGM_moveFrames, CGM_writeFile, CGM_writeFrames, and CGM_mergeFrames. These routines only work on one metafile at a time (the one named in CGM_initMetaEdit).
Invoking this routine a second time without explicitly saving any changes will have the effect of loading a new file and discarding all changes made in the previous file. CGM_initMetaEdit and all proceeding editing functions that make changes to the file return a pointer to a Directory as a convenience that allows the user to examine the state of the file. The contents of the directory are private and should NOT be changed by the user. A set of macros is provided in cgm_tools.h to be used for retrieving the directory's contents. Note: no changes are actually made to the edit file unless it is explicitly overwritten with either CGM_writeFile or CGM_writeFrames.

CGM_termMetaEdit
Terminate the editing session started with CGM_initMetaEdit. This routine should be called after any editing changes have been saved, if it is desired to save them, and before exiting the editing session. CGM_termMetaEdit frees valuable resources.

CGM_copyFrames
Copy num frames beginning with start to the frame addressed by target. If target is already occupied then the source frames are inserted in its place while the target frame, and all proceeding frames, are advanced. CGM_copyFrames operates on the file initialized by CGM_initMetaEdit (the edit file). On successful completion a pointer to the current directory is returned. On error a NULL pointer is returned.

CGM_deleteFrames
Delete num frames from the edit file starting with frame start. On successful completion a pointer to the current directory is returned. On error a NULL pointer is returned.

CGM_mergeFrames
Overwrite the contents of the frame addressed by bottom with the union of the frame at location bottom and the frame at location top. The effect of this command is equivalent to drawing the top frame on top of the bottom frame. It is not a union in the true sense of the word. On successful completion a pointer to the current directory is returned. On error a NULL pointer is returned.
CGM_moveFrames
Move a block of num frames from the edit file starting with frame start to the position occupied by frame target. On successful completion a pointer to the current directory is returned. On error a NULL pointer is returned.

CGM_readFrames
Read num frames from the file metafile, starting with frame start. Insert the frames at address target in the edit file. On successful completion a pointer to the current directory is returned. On error a NULL pointer is returned.

CGM_validCGM
Determine whether a file is a valid NCAR CGM or not. This function performs a few simple diagnostics in an effort to determine whether a given file is in the NCAR CGM format. The tests performed are not rigorous and it is conceivable that the information retrieved is incorrect. A return of 1 indicates a valid NCAR CGM. A return of 0 indicates the file is not a NCAR CGM. A return of -1 indicates an error occurred and the global variable `errno' is set accordingly.

CGM_writeFile
Write the entire contents of the current edit file to file. CGM_writeFile returns the integer one on success and a negative number on failure.

CGM_writeFrames
Write a block of num frames starting with frame start to file. The source frames come from the edit file. Note: CGM frames are contained in a wrapper made up of CGM delimiter elements. The file created by CGM_writeFrames will use the wrapper provided by the current edit file. Thus if a file foo contains n frames that are read into an editing session with a file goo and then these same frames are written out to a file zoid, zoid may or may not be the same as the original foo. CGM_writeFrames returns the integer one on success and a negative number on failure.

CGM_appendFrames
Append a block of num frames starting with frame start to file. file must already exist and be a valid NCAR CGM. CGM_appendFrames returns the integer one on success and a negative number on failure.
SEE ALSO
ANSI X3.122 Computer Graphics Metafile for the Storage and Transfer of Picture Description Information.
BUGS
CGMs with more than one metafile stored in them are not guaranteed to work. The user should not have to explicitly flush the output buffer for CGM_putInstr; this should be handled automatically when the file is closed.
Copyright (C) 1987-2009 University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement. | http://manpages.ubuntu.com/manpages/xenial/man1/cgm.1NCARG.html | CC-MAIN-2020-50 | refinedweb | 1,894 | 56.05 |
Custom cursors are something that you don't need to use very often, but when you do need them, they can make a huge difference in the usability of your program. So today we are going to take a look at how to use your own custom cursors in C#/WinForms applications (don't worry, WPF aficionados, we will take care of you at a later date).
Changing the cursor on a WinForms control is extremely easy, as long as you are only trying to change it to one of the other standard cursors. To do that, all you need to do is set the Cursor property on your control to one of the cursors on the Cursors object. However, using a cursor of your own can be a little more difficult.
There are a couple ways to use your own cursors, and they all eventually create a new Cursor object. The simplest way is to just load a cursor file (you know, the ones with the ".cur" extension) that you created. The constructor for the Cursor can take a file path to do just that:
Cursor myCursor = new Cursor("myCursor.cur");
And you can then assign it as the cursor on any of your controls:
myControl.Cursor = myCursor;
So that is easy enough. But say you don't have a ".cur" file you want to use - maybe you are actually creating the cursor on the fly programmatically! Well, that gets a bit more difficult. This is because not everything we need is built into the wonderful world of .NET - we will need to interop in some other methods. In the end it is not a lot of code, it is just knowing what code to call.
The first thing we need to do is create the C# equivalent of the ICONINFO structure. We will need this to define information about the cursor we will be creating:
public struct IconInfo
{
    public bool fIcon;
    public int xHotspot;
    public int yHotspot;
    public IntPtr hbmMask;
    public IntPtr hbmColor;
}
We care about the first three member variables (you can read about the last two on MSDN if you would like). The first one (fIcon) defines if the icon it talks about is a cursor or just a regular icon. Set to false, it means that the icon is a cursor. The xHotspot and yHotspot define the actual "click point" of the cursor. Cursors are obviously bigger than 1x1 pixel, but there is really only one pixel that matters - the one defined by the hotspot coordinate. For instance, the hotspot of the standard pointer cursor is the tip of the pointer.
There are also two native methods that we will need references to in order to create the cursor. These are GetIconInfo and CreateIconIndirect. We pull them into out C# program using the following code:
[DllImport("user32.dll")]
public static extern bool GetIconInfo(IntPtr hIcon, ref IconInfo pIconInfo);

[DllImport("user32.dll")]
public static extern IntPtr CreateIconIndirect(ref IconInfo icon);
Now to write the cursor creation function:
public static Cursor CreateCursor(Bitmap bmp, int xHotSpot, int yHotSpot)
{
    IntPtr ptr = bmp.GetHicon();
    IconInfo tmp = new IconInfo();
    GetIconInfo(ptr, ref tmp);
    tmp.xHotspot = xHotSpot;
    tmp.yHotspot = yHotSpot;
    tmp.fIcon = false;
    ptr = CreateIconIndirect(ref tmp);
    return new Cursor(ptr);
}
This function takes in a bitmap that will be made into a cursor, and the hotspot for the cursor. We first create a new IconInfo struct, which we are going to populate with the icon info. We do this by calling the native method GetIconInfo. This function takes in a pointer to the icon (which we get by calling GetHicon() on the bitmap), and a reference to the IconInfo struct that we want populated with the information.

We then set the x and y hotspot coordinates to the values passed in, and we set fIcon to false (marking it as a cursor). Finally, we call CreateIconIndirect, which returns a pointer to the new cursor icon, and we use this pointer to create a new Cursor. The function CreateIconIndirect makes a copy of the icon to use as the cursor, so you don't have to worry about the bitmap that was passed in being locked or anything of that nature. So now that we have this function, how do we use it? It is actually really simple:
Bitmap bitmap = new Bitmap(140, 25);
Graphics g = Graphics.FromImage(bitmap);
using (Font f = new Font(FontFamily.GenericSansSerif, 10))
    g.DrawString("{ } Switch On The Code", f, Brushes.Green, 0, 0);
myControl.Cursor = CreateCursor(bitmap, 3, 3);
bitmap.Dispose();
Here, we are creating a bitmap, and drawing the string "{ } Switch On The Code" on that bitmap. We pass that bitmap into the create cursor function with a hotspot of (3,3), and it spits out a new cursor, ready to use (in this case on the control myControl). And, of course, we dispose the original bitmap once the cursor is created. Here you can see a screenshot of that cursor in action:
[Image: Custom Cursor In Action]
And here is all the code put together:
using System;
using System.Drawing;
using System.Windows.Forms;
using System.Runtime.InteropServices;

namespace CursorTest
{
    public struct IconInfo
    {
        public bool fIcon;
        public int xHotspot;
        public int yHotspot;
        public IntPtr hbmMask;
        public IntPtr hbmColor;
    }

    public class CursorTest : Form
    {
        public CursorTest()
        {
            this.Text = "Cursor Test";

            Bitmap bitmap = new Bitmap(140, 25);
            Graphics g = Graphics.FromImage(bitmap);
            using (Font f = new Font(FontFamily.GenericSansSerif, 10))
                g.DrawString("{ } Switch On The Code", f, Brushes.Green, 0, 0);
            this.Cursor = CreateCursor(bitmap, 3, 3);
            bitmap.Dispose();
        }

        [DllImport("user32.dll")]
        public static extern bool GetIconInfo(IntPtr hIcon, ref IconInfo pIconInfo);

        [DllImport("user32.dll")]
        public static extern IntPtr CreateIconIndirect(ref IconInfo icon);

        public static Cursor CreateCursor(Bitmap bmp, int xHotSpot, int yHotSpot)
        {
            IntPtr ptr = bmp.GetHicon();
            IconInfo tmp = new IconInfo();
            GetIconInfo(ptr, ref tmp);
            tmp.xHotspot = xHotSpot;
            tmp.yHotspot = yHotSpot;
            tmp.fIcon = false;
            ptr = CreateIconIndirect(ref tmp);
            return new Cursor(ptr);
        }

        static void Main()
        {
            Application.Run(new CursorTest());
        }
    }
}
Hopefully, this code is a help to anyone out there trying to use custom cursors of their own. The possibilities are endless when you can actually create and modify your cursors on the fly! If you would like the Visual Studio project for the simple form above as a starting point, here it is.
Source Files:
This is an awesome article. Thanks.
Is there a solution to the blurry or bold text? I have created a bitmap with an image and some text, while the image is perfect, the text always appears bold.
This excellent tutorial, Is it possible to save created cursor to disk?
Sorry System reboot was enough!
I'm using the code and after an error which I can't reproduce I only get Cursors with a size of 32x32.
Does any one know a possible solution?
Thank's
how to add this article in my account?
Can you please explain how to apply a bitmap on iconinfo sructure it have a transparent portions of cursor ?
The post helped a lot. God bless you uploader :D
Great article. However, is there a way to still make the default cursor available, with the newly created bitmap appearing underneath the arrow?
Hello guys!
Im dealing with memory leak problems on this code of create custom cursors.
The difference of this code to my code is that my cursor is dynamically generated. Using graphics and others bitmaps.
Should i call dispose on every objects of System.Drawing? because im still getting memory leak problems, even with the Aerdanel solution.
Thanks for your help :)
Thanks tallest.
1) Can you suggest how to implement a MessageBox? I use the Telerik RadMessageBox which is very nice. Check their website, they have great WPF-like controls for Windows Forms with 5 cool themes out of the box, I prefer to use them.
2) The MessageBox class exposes almost no properties or methods - how to change its behavior?
Thanks for your replies.
Hi "The Tallest",
I'm tall myself -:)
Thanks for your reply. I call MessageBox.Show from user controls inside my main form. In each user-control I use MessageBox.Show(this, ...). The user control belongs to the Controls collection of a Panel that belongs to the controls collection of the main Form.
Therefore the "this" in the call is the user control, not the main Form. Is there a difference?
Thanks!
There shouldn't be a difference. If thats not working, then I'm not sure how to get it to work - I'm betting the message box is setting the cursor back to the standard one. Your two choices probably are to either figure out how to set the cursor specifically on the MessageBox (which I'm not sure how to do), or roll your own message box (which should be pretty easy).
Question -
The custom cursor works great with this code.
But - When I use MessageBox.Show the cursor returns to the default WinForms arrow cursor until the user messagebox is closed, and then returns to my custom cursor.
How to solve it?
P.S. the solution works on .Net 3.5.
Did you pass in your current window as the "owner" parameter to
MessageBox.Show? That might get it to work.
Hi,
Million Thanks! Exactly what I needed!
I'm happy to contribute my improvements: 1) I drew my custom bitmap file, and made the White color transparent. 2) I made a private generic method and 2 calling methods, in a separate utility class. I think this is the correct object-oriented approach. 3) I discovered that you can multiply the size of the BMP file I have. 4) I too used the image file that is embedded in the project resource file. This way the BMP files are not used in the installation or production directory. Goto the prohect's resource file, Images->Add->Use existing file, add the file, then save the resx file. Then the file name is available in the Resourcers class, see my code.
so, I found it earlier, but the problem is probably the same ...
"System.ArgumentException: Win32 handle passed to Cursor is not valid or is the wrong type. at System.Windows.Forms.Cursor ..ctor(IntPtr handle)" at CreateCursor(...
Would you check this, please?
My Code.
sorry. I left this* line of code, and this was a problem :)
//* IntPtr ptr = b.GetHicon();
Very nice article, it was precisely what I was looking for !
But I found a little problem... It seems there is some memory leak when calling GetIconInfo, hbmMask and hbmColors aren't destroyed automatically..
I tried to use DeleteObject :
but it didn't change anything... Any idea ?
Here's a function to correct memory leaks :
Aerdanel, thanks a ton!
I'd like to add that not only does your function cure memory leaks, it makes the program rock solid as well.
As a "stress test" of this function, I added a MouseMove() event handler to the form, and then incremented a local integer, and then used that integer in a .ToString() method to make a cursor. 1243 cursors and the program dies every time. With your additions I was well over 50,000 cursors, and it was running just fine.
Thanks again.
-- Pete
thanks Aerdanel, your fix works as a charm, even in 2012 ; i spotted the GDI leak just by observing the number of GDI-objects column in the task manager
Another way is to include the “myCursor.cur” file in the resources.resx. (Right-click “resources.resx” in your project under “Properties”, and select “View Designer”. Select the “Files” view, and drag the “myCursor.cur” file in here).
After that, you can load the cursor through a stream (one line of code):
Thanks a ton for this. You'd think this would be documented somewhere....
please can you explain this much more??
THANKS A LOT for this nice article!!!!!!
[DllImport("user32.dll")] public static extern IntPtr CreateIconIndirect( ref IconInfo icon);
When calling CreateIconIndirect in Vista x64, it does not work but no problem in x86. It the above declaration works in x64?
Any x64 expert please help me. Thanks.
Useful note. A couple of possible resource leaks:
The documentation for the GetIconInfo() says the user must destroy the two bitmaps it creates. Also the Cursor class will not dispose of the handle passed to its constructor when the instance is destroyed.
Now if I could just work out why the text in the cursor is blurry on my machine...
Very nice article.
The text is also blurry on my machine... any ideas?
This example was quite helpful to me, thank you for posting it. I'm wondering if once you have your cursor created, if you can then save that cursor with the ".cur" extension thus giving you the actual cursor to use in say other projects without having to programmatically create it every time? Thanks!!!
It is really interesting code(note). I got many interesting things in it. So carry on providing us with such interesting and usefull notes.
Hi,
I'm making a custom picturebox on which i need a custom cursor. It's to draw a square around an expanded pixel.
in constructor, I init the IconInfo once. On mousemove I only change the x and yhotspot and attach the changed cursor.
This all works as intended and really fast. For a while at least.
after a few hundred relocations of the cursor I get an "numericArgumentException in GDI+" on this line="this.Cursor = new Cursor(CreateIconIndirect(ref IconInfoCursorSquarePixel));"
I have been looken for a solution for a good amount of time but don't find anyting. I tryed reinit of the IconInfo, fixed the ref Iconinfo I cant find anything.
Please some help
Thank u
This was a perfect article :-). But.. Any idea how to replace the system cursors for copy/move? It seems like the system overrides my cursor when drag-operations is being done..
Thats a good question, and the answer is a little more in depth than I can can do in a comment. Look for a tutorial in the near future on how to change the cursor during a Drag&Drop operation.
Very helpfull as created cursors only using GetHicon() always set the hotspot centered. Thanks.
First of all I would like to thank you for the article, it helped me alot, you explain things very easly thus helping us learn easier.
second, from this article I didn't understand how exactly XHotSpot & YHotSpot help us, or better yet in what. second, I'm wondering on if you can help me build this kind of a comment box or even better image box, that the user can chage it size, maybe the class used or something of this kind, thanks in advance gil. | http://tech.pro/tutorial/732/csharp-tutorial-how-to-use-custom-cursors | CC-MAIN-2014-10 | refinedweb | 2,359 | 74.79 |
Drawing a graph shouldn't be difficult and Gnuplot indeed does make it simple to draw excellent graphs of all sorts of data. This little Python snippet shows how simple it is to talk to a Gnuplot subprocess from a Python program to draw a graph.
If you have an elaborate dataset that you want to explore in various ways it is probably easiest to run Gnuplot and refer to this file in the plot command. However in may other situations it is often more convenient to let Python talk to gnuplot via a pipe, writing commands and data directly to a gnuplot subprocess, with the possible added benefit that running Gnuplot as a complete separate process may utilize a multi-core processor better than Python can on its own.
Gnuplot is capable of interacting with another process over a pipe although on Windows you'll need not the main Gnuplot program but the executable
pgnuplot.exe which is part of the distriubution.
Of course the wish to talk to Gnuplot this way is not a unique one. A fairly elaborate module exists but it doesn't look that actively maintained as it does not support Python 3.x. Moreover, it depends on the numpy package which is excellent but a bit heavy handed in many situations.
So what we are looking for is the simplest way to communicate with Gnuplot to draw a graph from an array of x,y values. The snippet below shows how this can be accomplished:
from subprocess import Popen,PIPE gnuplot = r'C:\gp45-winbin\gp45-winbin\gnuplot\binary\pgnuplot' data = [(x,x*x) for x in range(10)] plot=Popen([gnuplot,'-persist'],stdin=PIPE,stdout=PIPE,stderr=PIPE) plot.stdin.write(b"plot '-' with lines\n") plot.stdin.write("\n".join("%f %f"%d for d in data).encode()) plot.stdin.write(b"\ne\n") plot.stdin.flush()
The
gnuplot variable points to the pipe capable gnuplot executable. On windows this is
pgnuplot, on Unix-like operating systems the main executable will work just fine as well. The
data variable hold a list of
(x,y) tuples.
The important bit is in the
Popen() call. The first argument is a list consisting of the program name and any options we wish to pass to this program. In this case we add the
-persist option to prevent closing the window containing the graph. We also indicate that all input and output streams should be pipes.
The final statements show how we pass content to the input stream of the subprocess. Note that the
write() method of such a file like object expects bytes, not a string, so we either provide bytes literals like
b'' or use the
encode() method of a string. The
plot statement we send to Gnuplot is given
'-' as the file name which will cause any subsequent lines sent to it to be interpreted as data, up to a line containing a single
e.
I've been using this, and it's great! One quick question though: I'm now using this code in a loop to make a couple hundred graphs, but it fails a lot. Is there a way to get the pipe to send the stderr somewhere I can read it, like a file?
Thanks!
A useful example! | http://michelanders.blogspot.com/2011/01/talking-to-gnuplot-by-pipes.html | CC-MAIN-2017-22 | refinedweb | 550 | 61.26 |
This is my first post so please bear with me. I am bringing in 2 files into 2 arrays and then trying co compare the arrays to each other. When I compare the arrays, I am trying to find out how many locations have matching letters in both strings. ) For example:
array1[]={AAACCCGTTT} and array2[]={AACCCCGGTT}; locations 0 and 1 match but location 2 is different) My code seems to work on smaller files but when I open the text files that are over 2,200 characters long I get inconsistancies. The output tells me I have more matches that are possible, but when I test with smaller files it seems to work correctly. I am also trying to use this code /*while ((array1[l] != '\0') && (array2[l] != '\0'))*/ to only read the characters and not the empty part of the array. I have tried using this in my for loop, within another for loop to try to get a count of total characters, in a do-while statement, but it just does not seem to work. Any ideas would be appreciated.
#include <iostream> #include <fstream> #include <string> using namespace std; int main(){ char array1[20000]; char array2[20000]; char ch; char compare1; char compare2; int match=0; int nomatch=0; int i=0; int j=0; int counter1=0; ifstream fin; ifstream fin2; fin.open("seq0.txt"); while (fin.get(ch)){ array1[i++] = ch; array1[i]=0;} fin2.open("seq1.txt"); while (fin2.get(ch)){ array2[j++] = ch; array2[j]=0;} for (int k=0; k<20000; k++){ while (array1[k] != '\0'){ counter1++;} } for (int l=0; l<20000; l++){ if (array1[l] = array2[l]){ match++; } else { nomatch++; } } for (int l=0; l<20000; l++) { compare1 = array1[l]; compare2 = array2[l]; if (compare1 = compare2){ match++; } else { nomatch++; } } cout << match << endl; cout << nomatch << endl; fin.close(); return 0; } | https://www.daniweb.com/programming/software-development/threads/244250/comparing-2-arrays-and-counting-differences-and-similarities | CC-MAIN-2017-09 | refinedweb | 308 | 71.44 |
You Platform Console. Note that Google is compensated for customers who sign up for a paid account.
SendGrid libraries
You can send email with SendGrid through an SMTP relay or using a Web API.
To integrate SendGrid with your App Engine project, use the SendGrid client libraries..
For example, for the sample code below add:
env_variables: SENDGRID_API_KEY: your-sendgrid-api-key SENDGRID_SENDER: your-sendgrid-sender
- Add the SendGrid Python client library to your application's
requirements.txt. For example:
Flask==0.12.2 sendgrid==5.3.0 gunicorn==19.7.1
Sending mail
You can create a SendGrid instance and use it to send mail.The following sample code shows how to send an email and specifies some error handling:
@app.route('/send/email', methods=['POST']) def send_email(): to = request.form.get('to') if not to: return ('Please provide an email address in the "to" query string ' 'parameter.'), 400 sg = sendgrid.SendGridAPIClient(apikey=SENDGRID_API_KEY) to_email = mail.Email(to) from_email = mail.Email(SENDGRID_SENDER) subject = 'This is a test email' content = mail.Content('text/plain', 'Example message.') message = mail.Mail(from_email, subject, to_email, content) response = sg.client.mail.send.post(request_body=message.get()) if response.status_code != 202: return 'An error occurred: {}'.format(response.body), 500 return 'Email sent.'
Add your own account details, and then edit the email address and other message content.
For more email settings and examples, see the SendGrid-Python library.
Testing and Deploying
Before running your app locally, you need to:
python main.py
After testingyour application, deploy your project to App Engine:
gcloud app deploy
Getting real-time information
In addition to sending email, SendGrid can receive email or make sense of the email you’ve already sent using webhooks.
Event API
Once you start sending email from your application, you can view statistics collected by SendGrid to assess your email program. You can use the Event API to see this data. For example, whenever a recipient opens or clicks an email, SendGrid can send a descriptive JSON to your Google App Engine app that can react to the event or store the data for future use.
The Event API documentation shows how to set up the webhook, outlines the nine event types and shows the fields included in event callbacks.
Inbound Parse API
SendGrid can receive email. The Inbound Parse API can be used for interactive applications, such as automating support tickets.
The Parse API is a webhook that sends data to your application when something new is available. In this case, the webhook is called whenever a new email arrives at the domain you've associated with incoming email.
Emails are sent to the application structured as JSON, with sender, recipients, subject, and body as different fields. Attachments of up to 20MB are allowed.
The Parse API documentation has more details, including additional fields sent with every email, as well as instructions for DNS setup and usage. | https://cloud.google.com/appengine/docs/flexible/python/sending-emails-with-sendgrid | CC-MAIN-2018-43 | refinedweb | 482 | 58.58 |
Accounts with zero posts and zero activity during the last months will be deleted periodically to fight SPAM!
#include <vector>using namespace std;class AAA{public: int aaa; int bbb;};int myfunction(){ AAA a; //a. std::vector<AAA> b; b[1]. return 0;}
code completion demo project with lib clang support!!! asmwarriorollydbg from codeblocks forumit will bydefault open the test.cpp file!* to show the code completion list, please enter the commandcc line column [ENTER]* to exit, just enterexit [ENTER]cc 20 8ClassDecl:{TypedText AAA}{Text ::} (75)FieldDecl:{ResultType int}{TypedText aaa} (35)FieldDecl:{ResultType int}{TypedText bbb} (35)CXXMethod:{ResultType AAA &}{TypedText operator=}{LeftParen (}{Placeholder const AAA &}{RightParen )} (34)CXXDestructor:{ResultType void}{TypedText ~AAA}{LeftParen (}{RightParen )} (34)
Used to indicate that the translation unit should be built with an implicit precompiled header for the preamble.An implicit precompiled header is used as an optimization when a particular translation unit is likely to be reparsed many times when the sources aren't changing that often. In this case, an implicit precompiled header will be built containing all of the initial includes at the top of the main file (what we refer to as the "preamble" of the file). In subsequent parses, if the preamble or the files in it have not changed, clang_reparseTranslationUnit() will re-use the implicit precompiled header to improve parsing performance.
The SDK is LGPL 3.0. And I think these two licenses are compatible, so you can use BSD libs in GPL/LGPL code. Because BSD is the less restrictive license, the other way round is not possible (all code becomes GPL'ed).But you should ask the FSF or some other competent organization. Also read the GPL FAQ.
The big problem is: currently no one was interested on this plugin.
Ollydbg: looks like you're interested, so go for it
Ask the llvm/clang guys, they know for sure...
Hi, this is pretty interesting.I would like to help but i have no experience in such work But if u can direct me i could be of help.... | https://forums.codeblocks.org/index.php/topic,14046.msg94556.html?PHPSESSID=7ebb94bf0124c550217a27682c6c44d3 | CC-MAIN-2022-05 | refinedweb | 343 | 54.02 |
Beta3 upgrade woes...
Beta3 upgrade woes...
Well,
i also have some problems to follow
If i look to the nested loading examples, there are some magics inside, like:
Code:
me.getBookSideBar()
Code:
me.getBookView().bind(record); me.getReviewList().bind(record, me.getReviewsStore());
Also the new syntax from the guide:
Code:
this.control({ 'viewport > panel': { render: this.onPanelRendered } });
The nested loading use "selectors" in "refs", also not mentioned anywhere.
Hard to get a complete picture without debugging example apps.
westy, i think the control function is the one to use, also to change the views. it would be nice to have at least one example which uses different views in viewport, exchanging them etc.
Thanks for the reply.
Yeah, suspect need to have a look at the control stuff, had enough for today though...
Could those getters be like the automagic ones created by initConfig but for views and bbars perhaps?
A quick look into the sources and your good
In the controller definition you have
PHP Code:
Ext.define('Books.controller.Books', {
extend: 'Ext.app.Controller',
models: ['Book'],
stores: ['Books', 'Reviews'], // <-- Reviews store => getReviewsStore()
refs: [
{ref: 'bookSideBar', selector: 'booksidebar'},
{ref: 'bookView', selector: 'bookview'}, //<--- ref: bookView => getBookView()
{ref: 'reviewList', selector: 'reviewlist'}
],
/...
PHP Code:
constructor: function(config) {
this.mixins.observable.constructor.call(this, config);
Ext.apply(this, config || {});
this.createGetters('model', this.models);
this.createGetters('store', this.stores);
this.createGetters('view', this.views);
if (this.refs) {
this.ref(this.refs); // <-- generate magic component query view ref
}
},
createGetters: function(type, refs) {
type = Ext.String.capitalize(type);
Ext.Array.each(refs, function(ref) {
var fn = 'get',
parts = ref.split('.');
// Handle namespaced class names. E.g. feed.Add becomes getFeedAddView etc.
Ext.Array.each(parts, function(part) {
fn += Ext.String.capitalize(part);
});
fn += type;
if (!this[fn]) {
this[fn] = Ext.Function.pass(this['get' + type], [ref], this);
}
// Execute it right away
this[fn](ref);
},
this);
},
ref: function(refs) {
var me = this;
refs = Ext.Array.from(refs);
Ext.Array.each(refs, function(info) {
var ref = info.ref,
fn = 'get' + Ext.String.capitalize(ref);
if (!me[fn]) {
me[fn] = Ext.Function.pass(me.getRef, [ref, info], me);
}
});
},
For the ComponentQuery: You can still use the xtype (or alias)
PHP Code:
this.control({".xtype_name": render: {}});
this.control('.gridpanel')
Ok, after another hour of head scratching have got something on screen... changing a panel from a JSON object with an xtype to an Ext.create got around the layout string issue, although I have no idea why!
Unfortunately nothing is laid out correctly now; two panels that should be in a border layout, with a split have only the top one showing; a very simple form panel for login now has no height!! Quite a mess!
I really think the changes in beta3 are far too sweeping to warrant the fact we ever hit beta.
I seem to remember the API was meant to be static once we got to beta?
Oh well, guess I have to live with it and try and sort it out. Going to waste the best part of a day though where I should be wrestling with the new tree (or is that a table
)!
Frustrating....
Seems that setting bodyPadding on the form panel gives a height to it content... very odd...
- Join Date
- Nov 2007
- Location
- Pijnacker, the Netherlands
- 75
- Vote Rating
- 8
- 8! | http://www.sencha.com/forum/showthread.php?130543-Beta3-upgrade-woes...&p=592225&viewfull=1 | CC-MAIN-2015-11 | refinedweb | 551 | 60.72 |
With more than 19 years of Mac support under his belt in higher education, government, medical research and advertising environments, Rich Trouton has more than enough experience with the Apple platform to discuss the basics of Apple’s new file system, APFS. In today’s session, he did just that by deciphering how this new file system impacts a Mac deployment, as well as how to use Jamf Pro to mitigate risks in this new world.
Trouton began the presentation with a look at HFS Plus – the file system most Mac administrators used throughout their careers. “The HFS Plus file system was introduced with Mac OS 8.1 in 1998 and was designed to fix a block allocation issue in HFS, Apple’s previous file system,” he explained. He added that over the years, Apple added new features to HFS Plus, which was used on macOS, iOS, tvOS and watchOS. Trouton dug deeper into the capabilities and limitations of HFS Plus before moving to what is the new norm for Apple users – APFS.
Trouton said, “With HFS Plus showing its age and its legacy roots, Apple made the judgement that continuing to maintain and evolve this 19-year-old file system is no longer tenable. Apple needs a new file system, and Apple File System is being born from that need.” He went on to explain a number of new APFS features that aren’t available in HFS Plus. They include:
64-bit block allocation: This is an improvement over the 32-bit support in HFS Plus and allows APFS to support more than nine quintillion files on a single APFS volume. (In place of HFS Plus’s four billion.)
Nanosecond time stamps: APFS supports one nanosecond timestamp granularity, which improves on HFS Plus’s one-second time stamp granularity.
Sparse file support: APFS supports the use of spare files, which allows it to handle empty space in files more efficiently than HFS Plus.
Snapshot support: APFS includes support for capturing file system snapshots. Currently, only Time Machine has entitlements to use snapshots.
Atomic Safe-Save: “This capability performs saves in a single transaction that, from the user’s perspective, either completes successfully or the save doesn’t happen. This addresses the problem of a file write only partially completing, which may cause more issues than the write not happening at all,” Trouton explained.
Extended Attributes support: Both APFS and HFS Plus include Extended Attribute support, but APFS includes native support for Extended Attributes. HFS Plus had to be retrofitted to allow this capacity.
After describing the aforementioned features that come with APFS, Trouton took a look at its structure. “Containers are the base storage unit of APFS. They are pools of storage, which are conceptually similar to CoreStorage’s logical volume groups,” he explained. “Each volume then generally maps to a matching namespace, which Apple is defining as meaning sets of files and directories.”
So what next? Trouton described the not-so-distant Mac admin future. “When Macs are upgraded to High Sierra, their boot drives may or may not be converted automatically to APFS,” he said. “Apple has stated that the conversion criteria is based on the type of drives.” He added that there is no opt-out from the conversion process. But do you need to use APFS on High Sierra?
“No,” Trouton said. “HFS Plus remains fully supported as of macOS High Sierra.” He added that he does believe Apple will phase out HFS Plus support in the future. With that said, is imaging dead?
While Trouton said it lives on on HFS Plus, the future of imaging on APFS is unclear. He explained saying, “AutoDMG supports building APFS images on 10.13. However, there are EFI updates, which will only be available from the High Sierra OS installer. And it must be applied before a Mac will be able to boot from APFS volumes. That means the OS installer will need to run at least once on a particular Mac before you’ll be able to start applying APFS images to it.”
Trouton spent the remainder of the session demoing ways to successfully manage the new file system, including how to work with APFS volumes and containers and navigating cloning. He discussed how to convert saying, “This conversion process is designed to non-destructively convert drives formatted with HFS Plus to now use the APFS file system. For this to work, however, drives will have to be unmounted.”
While AFPS supports multiple encryption models, such as no encryption, per-volume encryption and per-file encryption, it will use AES-XTS or AES-CBC encryption. AES-XTS will be used for macOS devices and iOS devices with A8 processors; iOS devices without A8 processers will use AES-CBC encryption.
Trouton concluded the session with a key piece of information: “Because bad things can happen to good drives, if you want to fix a malfunctioning APFS drive, run the FSCK APFS command with root privileges.” | https://www.jamf.com/blog/apfs-and-the-jamf-admin-what-you-need-to-know/ | CC-MAIN-2019-51 | refinedweb | 828 | 61.87 |
I am running an application with SpringBoot 2.1.1.RELEASE.
I am running an application with SpringBoot 2.1.1.RELEASE.
I have a yml file with list of elements configured in the default profile and also in a "local" profile
listOfSimpleObjects: one: oneOne, oneTwo three: nzerjpeojr
listOfObjects:
spring:
profiles: local
listOfSimpleObjects:
two: twoOne,twoTwo
listOfObjects:
I want to map that configuration into a properties file whose definition is
@ConfigurationProperties
public class MyProperties {
private Map<String, List<String>> listOfSimpleObjects = new HashMap<String, List<String>>();
private List<SubConfig> listOfObjects = new ArrayList<>();
public Map<String, List<String>> getListOfSimpleObjects() {
return listOfSimpleObjects;
}
public void setListOfSimpleObjects(Map<String, List<String>> listOfSimpleObjects) {
this.listOfSimpleObjects = listOfSimpleObjects;
}
public List<SubConfig> getListOfObjects() {
return listOfObjects;
}
public void setListOfObjects(List<SubConfig> listOfObjects) {
this.listOfObjects = listOfObjects;
}
}
public class SubConfig {
private String id;
private String name;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Running with the profile "local" I was expecting to have a MyProperties object with three elements in the listOfSimpleObjects and two in the listOfObjects but it is not the case.
Below a Junit test that tells me that there is only one element in the listOfObjects.
@RunWith(SpringRunner.class)
@ActiveProfiles("local")
@SpringBootTest
public class MyPropertiesTest {
@Autowired
private MyProperties props;
@Test public void testOnListOfStrings() { // this assertion is ok :) assertThat(props.getListOfSimpleObjects()).hasSize(3); } @Test public void testOnListOfObjects() { // this assertion fails :( assertThat(props.getListOfObjects()).hasSize(2); }
}
I asked a colleague of mine who that it was all about the key of the elements as the yml file is at first represented in a big HashMap.
So I guess there is no real answer to the question I could ask, but anyway:
Thanks for any kind of answer :). You will also see how Spring Boot auto-configuration helps to get data source configurations done, hassle free.
In our tutorial Spring Boot Rest Service, we created a DogService, which included a simple CRUD service based on the Mock Data Provider. We will use the same DogService and replace the Mock Data Provider with the actual MySQL database along with Spring Data and JPA.Datasource Configuration
We now have dependencies configured. It is not time to tell which data source to connect to. Here is my application.yml, with Spring Boot data source the specified JDBC URL, username, password, and driver class name (MySQL).
Apart from this, there are JPA specific configurations. First is the database-platform_,_ which is tells us the underlying Hibernate features to consider under the MySQL query dialect. This is so that all the database operations will be handled in MySQL specific syntax. The second JPA configuration is ddl-auto, which tells Hibernate to create the respective database and table structure, if not already present.
When this option is turned on, Hibernate will create the database structure based on the Entity Beans and the data source.Entity Bean
The first code level thing we will do is write an Entity Bean. Here is what the theDog, there are three fields that represent the datable table columns. Field id is our Primary Key and, hence, marked as @Id.
The field id is also marked with @GeneratedValue, which denotes that this is an Auto-Increment column and Hibernate will take care of putting in the next value. Hibernate will first query the underlying table to know the max value of the column and increment it with next insert. This also means that we don’t need to specify any value for the Id column and can leave it blank.Repository Interface
The Repository represents the DAO layer, which typically does all the database operations. Thanks to Spring Data, who provides the implementations for these methods. Let’s have a look at our DogsRepoisitory*,* which extends the.
This class has simple CRUD methods. It also converts the; } }
The Dogs Controller is a standard REST controller with simple CRUD endpoints. The job of the controller is to handle the HTTP requests and invoke the — that’s it.Conclusion
This is the end of our Spring Boot with Spring data and JPA tutorial. We saw how to use Spring Data’s abstraction for the Data Access Layer. We saw how to represent a database table in the form of Entity Bean and how to Use Spring Data’s autogenerated repository the full source code and examples used here, please visit.(); } }
The world looks much simpler with dependency injection. You let the Spring Framework do the hard work. We just use two simple annotations: @Component and @Autowired.
@Component, we tell Spring Framework: Hey there, this is a bean that you need to manage.
!)
What Else Does Spring Framework Solve?What Else Does Spring Framework Solve?
@Component public class WelcomeService { //Bla Bla Bla } @RestController public class WelcomeController { @Autowired private WelcomeService service; @RequestMapping("/welcome") public String welcome() { return service.retrieveWelcomeMessage(); } }
Does Spring Framework stop with Dependency Injection? No. It builds on the core concept of Dependency Injection with a number of Spring Modules
For example, you need much less code to use a JDBCTemplate or a JMSTemplate compared to a traditional JDBC or JMS.
The great thing about Spring Framework is that it does not try to solve problems that are already solved. All that it does is to provide a great integration with frameworks which provide great solutions.> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency>
The following screenshot shows the different dependencies that are added into our application
Dependencies can be classified into:
Any typical web application would use all these dependencies. Spring Boot Starter Web comes pre-packaged with these. As a developer, I would not need to worry about either these dependencies or their compatible versions.
As we see from Spring Boot Starter Web, starter projects help us in quickly getting started with developing specific types of applications.
There are a few starters for technical stuff as well
Spring Boot aims to enable production ready applications in quick time.
Learn Spring Framework and Spring Boot - Build a Spring Web Application: If you're new to the Spring Framework, this is the course you want to start with. This course covers the core of the Spring Framework, the foundation which all of the other Spring Framework projects are built from.
Build a web application using Spring Framework.
By the time we reach the end of this course, you will be able to build a functioning Spring Web Application.
In this course, you will learn about: | https://morioh.com/p/9ab612948627 | CC-MAIN-2020-05 | refinedweb | 1,084 | 54.22 |
web framework supporting customizable file upload processing
NAME
x100http, a web framework supporting customizable file upload processing
SYNOPSIS
from x100http import X100HTTP

app = X100HTTP()

def hello_world(request):
    remote_ip = request.get_remote_ip()
    response = "<html><body>hello, " + remote_ip + "</body></html>"
    return response

app.get("/", hello_world)
app.run("0.0.0.0", 8080)
DESCRIPTION
x100http is a lightweight web framework designed for processing HTTP file uploads.
CLASS X100HTTP
X100HTTP()
Returns an instance of X100HTTP which wraps the functions below.
run(listen_ip, listen_port)
Runs a forking server listening on address listen_ip:listen_port.
get(url, handler_function)
Sets a route rule for the HTTP "GET" method.
handler_function will be called when url is visited.
handler_function must return a string as the HTTP response body to the visitor.
A request object (explained below) will be passed to the handler function when it is called.
post(url, handler_function)
Sets a route rule for the HTTP "POST" method with header "Content-Type: application/x-www-form-urlencoded".
handler_function will be called when an HTTP client submits a form whose action is url.
handler_function must return a string as the HTTP response body to the visitor.
A request object (explained below) will be passed to the handler function when it is called.
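As a short sketch of such a handler (the "/signup" route and the "user" form field are invented for illustration, not part of x100http), form values submitted this way can be read back through request.get_arg():

```python
# Hypothetical handler for a POST form route; the "/signup" URL and
# the "user" field name are invented for this example.
def signup(request):
    user = request.get_arg("user")        # form field from the POST body
    remote_ip = request.get_remote_ip()   # visitor address
    return ("<html><body>welcome, " + user +
            " from " + remote_ip + "</body></html>")

# Registration would look like this (requires x100http):
#   app.post("/signup", signup)
```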
static(url_prefix, file_path, cors=allow_domain)
Sets a route rule for static files.
Static file requests beginning with url_prefix will be routed to the files under file_path.
The default value of cors is "*", which allows all CORS requests matching this route rule.
upload(url, upload_handler_class)
Sets a route rule for the HTTP "POST" method with header "Content-Type: multipart/form-data".
A new instance of class upload_handler_class will be created when a file upload starts.
A request object (explained below) will be passed to upload_handler_class.upload_start().
upload_handler_class.upload_process() will be called every time the buffer fills during the upload.
Two args will be passed to upload_handler_class.upload_process().
The first arg is the name of the input in the form, the second arg is the content of the input in the form.
The binary content of the uploaded file will be passed through the second arg.
A request object (explained below) will NOT be passed to upload_handler_class.upload_process().
upload_handler_class.upload_finish() will be called when the file upload finishes; this function must return a string as the HTTP response body to the visitor.
A request object (explained below) will be passed to upload_handler_class.upload_finish().
set_upload_buf_size(buf_size)
set the buffer size of the stream reader while file uploading.
the unit of buf_size is byte, default value is 4096 byte.
upload_handler_class.upload_process() will be called to process the buffer every time when the buffer is full.
ROUTING
x100http route accept a url and a function/class/path.
There are three four of routes - get, post, static and upload.
app.get("/get_imple", get_simple) app.post("/post_simple", post_simple) app.upload("/upload_simple", UploadClass) app.static("/static/test/", "/tmp/sta/")
routing for HTTP GET can be more flexible like this:
app.get("/one_dir/<arg_first>_<arg_second>.py?abc=def", regex_get)
allow all domain for CORS like this:
app.static("/static/test/", "/tmp/sta/", cors="*")
CLASS X100REQUEST
A instance of class X100Request will be passed into every handler function.
get_remote_ip()
Return the IP address of the visitor.
get_body()
Return the body section of the HTTP request.
Will be empty when the HTTP method is “GET” or “POST - multipart/form-data”.
get_query_string()
Return the query string of the page was accessed, if any.
get_arg(arg_name)
args parsed from query_string when the request is sent by “GET” or “POST - multipart/form-data”.
args parsed from body when the request is sent by “POST - application/x-www-form-urlencoded”.
get_header(header_name)
Return the header`s value of the header_name, if any.
CLASS X100RESPONSE
set_body(content)
Set the response data to visitor.
Type ‘str’ and type ‘bytes’ are both accepted.
set_header(name, value)
Set the HTTP header.
HTTP ERROR 500
visitor will get HTTP error “500” when the handler function of the url he visit raise an error or code something wrong.
SUPPORTED PYTHON VERSIONS
x100http only supports python 3.4 or newer, because of re.fullmatch and os.sendfile.
EXAMPLES
get visitor ip
from x100http import X100HTTP app = X100HTTP() def hello_world(request): remote_ip = request.get_remote_ip() response = "<html><body>hello, " + remote_ip + "</body></html>" return response app.get("/", hello_world) app.run("0.0.0.0", 8080)
post method route
from x100http import X100HTTP app = X100HTTP() def index(request):" \ + "<input type="text" name="abc" />" \ + "<input type="submit" name="submit" />" \ + "</form>" \ + "</body></html>" return response def post_handler(request): remote_ip = request.get_remote_ip() abc = request.get_arg('abc') response = "hello, " + remote_ip + " you typed: " + abc return response app.get("/", index) app.post("/form", post_handler) app.run("0.0.0.0", 8080)
process file upload
from x100http import X100HTTP, X100Response class UploadHandler: def upload_start(self, request): self.content = "start" def upload_process(self, key, line): self.content += line.decode() def upload_finish(self, request): return "upload succ, content = " + self.content app = X100HTTP() app.upload("/upload", UploadHandler) app.run("0.0.0.0", 8080)
set http header
from x100http import X100HTTP, X100Response def get_custom_header(request): remote_ip = request.get_remote_ip() response = X100Response() response.set_header("X-My-Header", "My-Value") response.set_body("<html><body>hello, " + remote_ip + "</body></html>") return response app = X100HTTP() app.upload("/", get_custom_header) app.run("0.0.0.0", 8080)
more flexible routing
from x100http import X100HTTP def regex_get(request): first = request.get_arg("arg_first") second = request.get_arg("arg_second") abc = request.get_arg("abc") return "hello, " + first + second + abc app = X100HTTP() app.get("/one_dir/<arg_first>_<arg_second>.py?abc=def", regex_get) app.run("0.0.0.0", 8080)
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/x100http/ | CC-MAIN-2019-51 | refinedweb | 927 | 52.56 |
Currently cinder supports volume migration with backend assistance. Some backends like IPSANs support migrating volume in backend level, but Ceph does not. Though Ceph volume migration has already been supported through cinder generic migration process in Liberty, the migration efficiency is not so good. It is necessary to support volume migration in the Ceph driver to improve the efficiency of migration.
Suppose there are two cinder volume backends which are in the same Ceph storage cluster.
Now volume migration between two Ceph volume backends is implemented by generic migration logic. Migration operation can proceed with file operations on handles returned from the os-brick connectors, but the migration efficiency is limited on file I/O speed. If we offload migration operation from cinder-volume host to Ceph storage cluster, we would get the following benefits:
Improve the volume migration efficiency between two Ceph storage pools, especially when the volume’s capacity and data size is big.
Reduce the IO pressure on cinder-volume host when doing the volume migration.
There are three cases for volume migration. The scope of this spec is for the available volumes only and targets to resolve the issues within the following migration case 1:
Within the scope of this spec:
1.Available volume migration between two pools from the same Ceph cluster.
Out of the scope of the spec:
2.Available volume migration between Ceph and other vendor driver.
3.In-use(attached) volume migration using Cinder generic migration.
Solution A: use rbd.Image.copy(dest_ioctx, dest_name) function to migrate volume from one pool to another pool.
To offload migration operation from cinder-volume host to ceph cluster, we need to do the following changes in migrate_volume routine in RBD driver.
Check whether source volume backend and destination volume backend are in the same ceph storage cluster or not.
If not, return (False, None).
If yes, use rbd.Image.copy(dest_ioctx, dest_name) function to copy volume from one pool to another pool.
Delete the old volume.
Solution B: use ceph’s functions of clone image and flatten clone image to migrate volume from one pool to another pool.
Solution B contains the following steps: * Create source volume snapshot snap_a and protect the snapshot snap_a. * Clone a child image image_a of snap_a to the destination pool. * Flatten the child image image_a, thus snap_a has not been depended on. * Unprotect the snapshot snap_a and delete it.
Using a volume which’s capacity and data size is 12GB to show the time-consuming comparison between solution A and solution B.
[Solution-A]Copy volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52 from “volumes1” pool to “volumes2” pool.
root@2C5_19_CG1# time rbd cp volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52 volumes2/test1
Image copy: 100% complete…done.
real 2m3.513s user 0m9.983s sys 0m25.213s
[Solution-B-step-1]Create a snapshot of volume 777617f2-e286-44b8-baff-d1e8b792cc52 and protect it.
root@2C5_19_CG1# time rbd snap create –pool volumes1 –image volume-777617f2-e286-44b8-baff-d1e8b792cc52 –snap snap_test
real 0m0.465s user 0m0.050s sys 0m0.016s
root@2C5_19_CG1# time rbd snap protect volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52@snap_test
real 0m0.128s user 0m0.057s sys 0m0.006s
[Solution-B-step-2]Do clone operation on volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52@snap_test.
root@2C5_19_CG1# time rbd clone volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52@snap_test volumes2/snap_test_clone
real 0m0.336s user 0m0.058s sys 0m0.012s
[Solution-B-step-3]Flatten the clone image volumes2/snap_test_clone.
root@2C5_19_CG1# time rbd flatten volumes2/snap_test_clone Image flatten: 100% complete…done.
real 1m58.469s user 0m4.513s sys 0m17.181s
[Solution-B-step-4]Unprotect the snap volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52@snap_test and delete it.
root@2C5_19_CG1# time rbd snap unprotect volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52@snap_test
real 0m0.150s user 0m0.058s sys 0m0.013s
root@2C5_19_CG1# time rbd snap rm volumes1/volume-777617f2-e286-44b8-baff-d1e8b792cc52@snap_test
real 0m0.418s user 0m0.054s sys 0m0.011s
By the above test results of solution A and solution B, solution A needs (real:2m3.513s, user:0m9.983s, sys:0m25.213s) to finish the volume copy operation and solution B needs (real:1m59.966s, user:0m4.790s, sys:0m17.239s) to do that. The time-consuming for the both two solutions are not much difference, but solution A is more simpler than solution B. So we intend to use solution A to offload volume migration from cinder-volume host to ceph cluster.
The performance of volume migration between two Ceph storage pools in the same Ceph cluster will be improved greatly.
chen-xueying1<chen.xueying1@zte.com.cn>
ji-xuepeng<ji.xuepeng@zte.com.cn>
Add location info of back-end:
Add location_info in state of Ceph volume service, it should include the Ceph cluster name(or id) and storage pool name.
Implement volume migration:
1.Check whether the requirements of volume migration are met. If source back-end and destination back-end are in the same Ceph cluster and volume status is not ‘in-use’ state, the volume can be migrated.
2.Copy volume from one pool to another pool and keep it’s original image name.
3.Delete the old volume.
Unit tests will be added. Volume migration test case will be added.
Both unit and Tempest tests need to be created to cover the code change that mentioned in “Proposed change” and ensure that volume migration feature works well while introducing the feature of RBD volume migration.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | http://specs.openstack.org/openstack/cinder-specs/specs/newton/ceph-volume-migrate.html | CC-MAIN-2019-26 | refinedweb | 939 | 52.15 |
I am trying to make a program that needs to compare a string. here is a basic program of what i mean, i want it to say "it worked" but it seems to only read as not true. anyone know how to do this? thanx for any help.
#include <iostream> #include <conio> #pragma hdrstop #include <condefs.h> //--------------------------------------------------------------------------- #pragma argsused int main(int argc, char **argv) { char *name = new char [100]; cout << "the word is silver (all lower case)." << endl; cout << "enter name:"; cin >> name; if (name == "silver") { cout << "it worked." << endl; } else if (name != "silver"){ cout << "it didn't work." << endl; } cout << name; getch(); return 0; } | https://www.daniweb.com/programming/software-development/threads/18555/comparing-strings | CC-MAIN-2017-17 | refinedweb | 106 | 94.86 |
Hello,
I was having some problems with my Java code. The program writen below is for a class. The question asks:
In this exercise, you are asked to rewrite the Programming Example Classify Numbers...rewrite the program to incorporate the following requirements:
a. Data to the program is input from a file of unspecified length
b. Save the output of the program to a file
c. Write the method getNumber so that it reads a number from the input file (opened in the method main), outputs the number to the output file (opened in the method main), and sends the number read to the method main. Print only 10 numbers per line.
d. Have the program find the sum and average of the numbers
e. Write the method printResults so that it outputs the final results to the output file (opened in the method main). Other than outputting the appropriate counts, the definition of the method printResults should also output the sum and the average of the numbers
The program that you are asked to rewrite pretty much take in numbers from the keyboard (20 at a time) and then says how many of the numbers are even, odd, or zero.
For this program to work, I wrote my own class IntClass so the variables would pass from method to method, but I'm still getting errors, please help! Thanks!
Code for IntClass:
public class IntClass{
private int x;
public IntClass(){ x= 0;}
public IntClass(int num) { x = num;}
public void setNum(int num){ x = num;}
public int getNum() { return x;}
public void addToNum(int num) {x = x +num;}
public void multiplyToNum(int num) { x = x *num;}
public int compareTo(int num){ return (x-num);}
public boolean equals (int num) { if (x == num)
return true;
else return false;}
public String toString(){ return (String.valueOf(x));}
}
My current code:
import java.util.*;
import java.io.*;
public class Evensoddszeros{
public static void main (String [] args)throws IOException{
int evenscount =0;
int oddscount=0;
int zeroscount =0;
int count =0;
IntClass number = new IntClass (0);
IntClass zeros = new IntClass (0);
IntClass odds = new IntClass (0);
IntClass evens = new IntClass (0);
int sum=0;
int average =0;
Scanner inFile = new Scanner( new FileReader ("F:\\Ch7_Ex11Data.txt"));
PrintWriter outFile = new PrintWriter("F:\\classifynumbersoutput.txt");
while(inFile.hasNext()){
getNumber(number, inFile, outFile);
classifyNumber(number.getNum(), zeros, odds, evens);
count ++;
if (count % 10 == 0)
outFile.println();
sum = sum + number.getNum();
average = sum/count;
outFile.close();
inFile.close();
}
}
public static void getNumber(IntClass number, Scanner inFile, PrintWriter outFile ){
number.setNum(inFile.nextInt());
outFile.print(number.getNum() + " ");
}
public static void printResult (IntClass Number, IntClass evens, IntClass odds, IntClass zeros, int sum, int average){
outFile.println("There are " + evens + " evens, " + " there are " + odds + " odds, " + "and there are " + zeros + " zeros");
outFile.println("The sum of the numbers is: " + sum);
outFile.println("The average of the numbers is: " + average);
}
public static void classifyNumber(int number, IntClass zeroscount, IntClass oddscount, IntClass evenscount){
switch (number %2){
case 0: IntClass evens.addToNum(1);
if ( number == 0);
IntClass zeros.addToNum(1);
break;
case 1:
case -1: IntClass odds.addToNum(1);
}
}
} | http://www.javaprogrammingforums.com/whats-wrong-my-code/12201-help-please.html | CC-MAIN-2015-18 | refinedweb | 514 | 54.02 |
Copyright ©2006 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
Remote Events for XML (REX) 1.0 is an XML [XML] grammar.
This document is governed by the 5 February 2004 W3C Patent Policy. W3C maintains the SVG Working Group's public list of patent disclosures and the Web API Working Group's public list of patent disclosures made in connection with the deliverables of these groups;
timeRef' attribute
timeStamp' attribute. REX assumes that the transport provides for reliable, timely and in sequence delivery of REX messages. REX does not cover the process of session initiation and termination which are presumed to be handled by other means.
The first version of this specification deliberately restricts itself to the transmission of mutation events (events which notify of changes to the structure or content of the document).
<rex xmlns=''> <event target='id("spot")' attrName= eighth) in the second table contained in the document's body.
<rex xmlns='' xmlns: <event target='/x:html/x:body/x:table[2]' name='DOMNodeInserted' position='7'> <tr xmlns=''> <td>Rover</td> <td>Alpine Labrador</td> <td class='food'>bone</td> </tr> </event> </rex>
The following REX message removes the first circle
that is a child of the element with id
poodle-stylist-location-layer.
<rex xmlns='' xmlns: <event target='id("poodle-stylist-location-layer")/svg:circle' name='DOMNodeRemoved'/> </rex>
Note that while the target path may match multiple elements, REX limits the list to the first one that matches. Future versions may include a flag to target multiple elements.
The following REX message replaces an element with ID
femur
with the new one provided in the payload.
.
<rex xmlns='' xmlns: ='minimal-version'> <value>1.0</value> </attribute> </optional> <optional> <attribute name='target-document'> <text/> </attribute> </optional> <ref name='ns'/> <ref name='rex.AT'/> URI component is the empty string. URI component.
If the 'ns' attribute contains a non-empty string, then it SHOULD be a valid IRI. However, given the complexity involved in checking IRI validity it is not RECOMMENDED that user-agents validate them. If however IRI validation is performed and the IRI value of the 'ns' attribute is found to be false, then> <ref name='timeStamp.AT'/> <ref name='timeRef.AT'/> unless a superset of the minimal syntax is used.
If the size of the node-set is superior to one, only the first item in that node-set MUST be considered for processing, and the rest MUST be discarded. only of the child
axis, and only element name tests. The final step in the list is more
powerful and may also reference text the first text node that is a child of the the
fooelement.
/xhtml:html/xhtml:body/svg:svg[3]will match the third
svgelement that is a child of the root
bodyand then
htmlelements.
id("dahut")will match the element with ID 'dahut'.
id("dahut")/svg:circlewill match the first
circleelement that is a child.
Events that do not have a time stamp MUST be dispatched in the same order in which they occur in the REX message. If the user-agent supports the Streaming Module then events that have a time stamp MAY be caused to be processed in an order different from the one in which they occur in the REX message, as described below.
Because user-agents are encouraged to process events that do not have a time stamp as soon as possible, a REX message containing a well-formedness error SHOULD still cause previous <event> elements to be fully processed. Upon encountering a well-formedness error, a REX user-agent MUST stop processing immediately such that the remainder of the XML document MUST be ignored and any event scheduled to be dispatched at a later time due to its time stamp MUST be discarded..
Note that given the presence of time stamps, it is possible for content to require of the user-agent that it queue up a potentially unlimited number of events, which may lead to exhausting available memory. In order to avoid this, servers SHOULD arrange to deliver events in a sequence that minimises memory requirements for user-agents. Also, user-agents MAY process events earlier than indicated if not doing so would result in exhausting memory resources. If doing so, then user-agents MUST process these events in the order in which the timing would have required that they process them.
'minimal.
This chapter of the specification contains aspects of REX that are only useful within streaming environments in which features such as timing, synchronisation, or tune-in are relevant. As such, it is optional for implementations not intended to operate in such environments, and constitutes a separate conformance level.
The reference event is an event, of any type, that is defined to serve
as the reference point for time stamps on events following this one.
Any <event> element that has a
'timeRef'
attribute set to
anchor becomes a reference event. All
events that follow it in the stream and have a time stamp have that time
stamp defined as an offset from when the reference event became active.
If an <event> element has both a
'timeStamp'
attribute
and a
'timeRef'
attribute set to
anchor, then the
'timeStamp'
attribute MUST be ignored, and the event processed
as if it hadn't been specified.
When no reference event has yet been seen, the time from which time stamps are offset defaults to the beginning of the session.
timeRef' attribute
The
'timeRef'
attribute is used to mark an event as a
reference event for the timing for time
stamps. It has two values,
anchor and
implicit.
When set to
anchor, the corresponding event MUST be used as
a reference event, as defined above. When set to
implicit
the behaviour of the <event> element is untouched. The absence of
this attribute is equivalent to it being set to
implicit.
Attribute timeRef
<define name='timeRef.AT' combine='interleave'> <optional> <attribute name='timeRef'> <choice> <value>implicit</value> <value>anchor</value> </choice> </attribute> </optional> </define>
timeStamp' attribute
The 'timeStamp' attribute contains a non-negative integer which defines the time in ticks relative to the current reference event at which the event is to be triggered. A tick is defined against the synchronisation clock, which defaults to making a tick last exactly one millisecond, but MAY be set to another value in a user-agent dependent manner (e.g. because the user-agent is synchronising with an audio or video stream that has a different time, or because the user has requested that time run slower or faster).
Attribute timeStamp
<define name='timeStamp.AT'> <optional> <attribute name='timeStamp'> <data type='nonNegativeInteger'/> </attribute> </optional> </define>
In continuous streaming and broadcast scenarios it is important that a user-agent be able to tune into a stream at an arbitrary moment and nevertheless be able to start using it as early as possible. This requires three distinct but related pieces of functionality:
The above functionality is obtained through the use of two attributes on the <rex> element, 'seq' and 'target' :
Attributes seq and target
<define name='rex.AT' combine='interleave'> <optional> <attribute name='seq'> <data type='integer'/> </attribute> </optional> <optional> <attribute name='target'> <data type='integer'/> </attribute> </optional> </define>
This functionality can be used as follows. A message containing enough of the document for tune-in would be transmitted with a 'seq' and no 'target' :
<rex xmlns='' seq='1'> <event target='/' name='DOMNodeInserted'> <svg xmlns=''> <!-- more content --> </svg> </event> </rex>
The above can be processed immediately, whether or not the user-agent has received previous messages in the stream or not. The following one on the other hand is a message that is only relevant if the user-agent has already received the previous message. If it hasn't, then it will ignore it.
<rex xmlns='' seq='2' target='1'> <event target='id("dahut")' name='DOMAttrModified' attrName='fill' newValue='red'/> </rex>
Therefore, in a broadcast or multicast environment one may see sequences of events similar to the following:
In the above, if a user-agent starts receiving the stream of REX messages from
the first
seq=3, it will ignore all the following ones until it
receives
seq=4 which provides it with sufficient context to
display something.
All mutations events are in no SHOULD.
If there is more than one.
Please note that the payload of 'DOMNodeInserted' events MUST NOT be normalised prior to insertion and therefore that indentation, if undesirable in the targeted result, has to be absent from the event payload as well. For instance, given the following target tree:
<document> ... <section xml: <title></title> ... </section> ... </document>
And given the following 'DOMNodeInserted' event:
<rex xmlns=''> <event target='id("poodle-mania")/title' name='DOMNodeInserted'> Poodles are for maniacs! </event> </rex>
Will produce the following result:
<document> ... <section xml: <title> Poodles are for maniacs! </title> ... </section> ... </document>
And not what some may have expected:
<document> ... <section xml: <title>Poodles are for maniacs!</title> ... </section> ... </document>
In order to obtain the above, the white space before and after "Poodles are for maniacs!" would have had to be removed.. When this event is transmitted, the <event> element MUST allow arbitrary XML as its content. This is not specified in the schema fragment below but left to compounding schemata (e.g. using NVDL) to define.
DOMNodeRemoved
<define name='DOMNodeRemoved'> <empty/> </define>
DOMNodeRemovedFromDocument' event
The DOMNodeRemovedFromDocument is not supported. Implementations SHOULD process it as an unknown event. It is nevertheless dispatched after a DOMNodeRemoved event as described in DOM 3 Events [DOM3EV].
DOMNodeInsertedIntoDocument' event
The DOMNodeInsertedIntoDocument is not supported. Implementations SHOULD point to an element node. That element node is the one on which the attribute pointed to by 'attrName' is to be modified, added, or removed.
If the
'attrChange'
attribute is set to
removal
then the attribute pointed to by the target path and
'attrName'
attribute MUST exists, and if it does not the event MUST be ignored. If however it is set to
modification or
addition then there are two possible options:
modification;
addition;
Note that as specified in DOM 3 Events [DOM3EV] this event MUST be dispatched after the node has been modified.
The following attributes are added to the <event> element.
DOMAttrModified
<define name='DOMAttrModified'> <group> <attribute name='attrName'> <text/> </attribute> <optional> <attribute name='attrChange'> <choice> <value>modification</value> <value>addition</value> <value>removal</value> </choice> </attribute> </optional> <optional> <attribute name='newValue'> <text/> </attribute> </optional> </group> </define>
null. If it does have a prefix, then its namespace URI component MUST be that corresponding to that prefix in the namespace declarations in scope for the <event> element being processed; inclusive of the implicit declaration of the prefix
xmlrequired by Namespaces in XML [XMLNS] and exclusive of the default namespace. If the QName is not namespace well-formed [XMLNS] either because the prefix cannot be resolved to a namespace declaration in the current scope, or because it is syntactically invalid, then the entire event MUST be ignored. This attribute is required.
modification,
addition, or
removal. The default value is
modification. Note that the distinction between
additionand
modificationis largely for documentation purposes as the user-agent changes one into the other depending on the presence or not of the attribute. SHOULD section is informative.:
This section is informative.
The editor would like to thank Dave Raggett for his excellent input and comments. This specification is a collective work that derives from other solutions in the domain. Notable amongst its initial influences are an input document from Nokia the ideas from which were incorporated into this specification, XUpdate which has been a major de facto solution since the year 2000 and was influential in REX's technical design, and the XUP W3C Member Submission (now OpenXUP) from which much inspiration was drawn.
The IETF WiDeX Working Group has been a pleasure to collaborate with and has provided a substantial amount of input, much of which has made its way directly into this specification. Many thanks are also due to the MWI and XML Core WG for their excellent comments and suggestions. Takanari Hayama designed the tune-in part of the specification.
Many thanks to Karl Dubost and the QA WG for their very helpful review.
The following individuals, in no particular order, have provided valuable comments that have helped shape REX: Ola Andersson, Stéphane Bortzmeyer, Gerd Castan, John Cowan, Elliotte Rusty Harold, Takanari Hayama, Ian Hickson, Björn Hörhmann, Philippe Le Hégaret, Paul Libbrecht, Chris Lilley, Cameron McCormack, David Rivron, Elin Röös, Dave Singer, Maciej Stachowiak, Ruud Steltenpool, and Jin Yu..
This section is informative.
This sections lists the changes that were made to this specification since it was last published. Many thanks too all those who have provided feedback.
namespaceURIis defined to be its URI equivalent (John Cowan). | https://www.w3.org/TR/rex/ | CC-MAIN-2017-30 | refinedweb | 2,118 | 52.49 |
curs_initscr(3) UNIX Programmer's Manual curs_initscr(3)
initscr, newterm, endwin, isendwin, set_term, delscreen - curses screen initialization and manipulation routines
#include <curses.h> WINDOW *initscr(void); int endwin(void); bool isendwin(void); SCREEN *newterm er- ror and exits; otherwise, a pointer is returned to stdscr. A program that outputs to more than one terminal should use the newterm routine for each terminal instead of initscr. A program that needs to inspect capabilities, so it can con- tinue in- put escap- ing other- MirOS BSD #10-current Printed 19.2.2012 1 curs_initscr(3) UNIX Programmer's Manual curs_initscr(3) wise. suc- cessful completion. Routines that return pointers always return NULL on error. X/Open defines no error conditions. In this implementation endwin returns an error if the terminal was not initialized.. | http://mirbsd.mirsolutions.de/htman/sparc/man3/newterm.htm | crawl-003 | refinedweb | 131 | 58.28 |
The short answer is that a pair of modules did all the heavy
lifting. The first is HTML
Tidy, originally by Dave Raggett and now maintained by "a quorum
of developers" on SourceForge. The second is Mark Pilgrim's universal feed
parser which, as it turns out, neatly encapsulates HTML Tidy
with the help of Marc-Andre Lemburg's mxTidy.
The complete loader script is here.
First, the setup:
import string, sys, urllib, re, traceback, pickle, os, \
       feedparser, mx.DateTime, libxml2
from dbxml import *
Of the non-standard modules, feedparser is trivial to install:
just drop it in the current directory. Installers for mx.DateTime
and mx.Tidy (used by feedparser) are available at
egenix.com. Creating the Python bindings for libxml2 was, as
mentioned last time, not quite a no-brainer. Likewise the Python
bindings for DB XML on Windows and Mac OS X, my two everyday
environments. For OS X, see Kimbro
Staken's recipe. For Windows, I realized that Chandler contains the bits I needed, so I cheated and
used those. Mea culpa.
More laziness: my aggregator, like everyone's first aggregator,
initially ignored the two popular ways to check if a feed has
changed since last fetched: ETag and "conditional GET". But
feedparser makes it easy to use both methods. You just need to
remember the values of the ETag and Last-Modified headers sent from
the server, and hand them to feedparser's parse() method when you
ask it to fetch a feed. I could have stored the headers in DB XML,
but it seemed easier to just pickle a Python dictionary:
try:
    os.stat(dictEtagFile)
except:
    print "CreatingDictEtag"
    f = open(dictEtagFile,'wb')
    dictEtag = {}
    pickle.dump(dictEtag,f)
    f.close()

try:
    f = open(dictEtagFile, 'rb')
    dictEtag = pickle.load(f)
    f.close()
except:
    raise "CannotLoadDictEtag"
The list of feeds to fetch is available from an OPML
file that's rewritten whenever I add or remove an RSS
subscription using my Radio UserLand feedreader. The combination of
Python and libxml2 makes quick work of pulling the URLs for those
feeds into a Python list:
opmldata = urllib.urlopen(opml).read()
opmlxml = libxml2.parseDoc(opmldata)
urls = opmlxml.xpathEval('//@xmlUrl')
urls = map ( libxml2.xmlNode.getContent, urls)
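For readers without the libxml2 bindings, the same //@xmlUrl
extraction can be sketched with nothing but the standard library
(ElementTree here is my substitution, not what the script uses):

```python
from xml.etree import ElementTree

def feed_urls(opml_text):
    # Walk every element and collect xmlUrl attributes, in
    # document order -- the same result as the //@xmlUrl XPath.
    tree = ElementTree.fromstring(opml_text)
    return [el.get('xmlUrl') for el in tree.iter() if el.get('xmlUrl')]
```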
Using the ETag and Last-Modified headers stored in the pickled
dictionary, you can avoid fetching feeds that haven't changed. But
when a feed has changed, which of its items should be added to the
database? RSS 2.0 provides a "guid" that signals item-level changes,
but RSS .9x and 1.0 feeds don't. Yet more laziness: for now I'm just
hashing the item content and keeping track that way. Items are
stored in the database like so:
<item channel="Jon's Radio" hash="-125385335">
<title>...</title>
<date>...</date>
<link>...</link>
<content>...</content>
</item>
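Neither the hash computation nor the serialization of these item
documents appears in the excerpts above. A hypothetical sketch of both,
assuming the fields arrive as plain strings (hashlib stands in for
whatever digest the real script uses -- the negative integer in the
example suggests Python's built-in hash()):

```python
import hashlib
from xml.sax.saxutils import escape

def item_hash(title, link, content):
    # Any stable digest of the item's fields will do for
    # duplicate detection across runs.
    return hashlib.md5((title + link + content).encode('utf-8')).hexdigest()

def make_item(channel, title, date, link, content):
    # content is assumed to already be an escaped or well-formed fragment
    return ('<item channel="%s" hash="%s">'
            '<title>%s</title><date>%s</date><link>%s</link>'
            '<content><body>%s</body></content></item>'
            % (escape(channel, {'"': '&quot;'}),
               item_hash(title, link, content),
               escape(title), date, escape(link), content))
```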
To fetch these into a Python dictionary, the script opens the
database and runs an XPath query:
container = XmlContainer(None, './dbxml/blog.dbxml')
container.open(None,DB_CREATE)
context = XmlQueryContext(1,0)
result = container.queryWithXPath(None, '/item/@hash', context)
hashValues = {}
try:
    for i in range(result.size()):
        val = result.next().asString(None)
        hashValues[val] = val
except:
    print "CannotCreateHashDict"
container.close()
DB XML supports Berkeley DB transactions, but I'm not using them
here, so the first argument to container.open() is None. Queries, by
default, return whole matching documents, and we'll use that mode
later in the search engine. But here, we just want to extract hash
values, so the first argument to XmlQueryContext() -- the
returnType -- is 1. (The second argument -- evaluationType --
defaults to "eager" but can be switched to "lazy," though I haven't
found a reason to do that yet.)
Now here's the main loop. For each URL, it looks up the ETag and
Last-Modified headers, calls the feedparser, calls a helper method
to unpack the Python data structure returned by the feedparser,
and -- if any new items showed up -- it plugs them into the
database.
for url in urls:
    print "%s " % url
    try:
        try:
            dictEtag[url]
        except KeyError:
            dictEtag[url] = {}
        try:
            etag = dictEtag[url]['etag']
        except:
            etag = None
        try:
            mod = dictEtag[url]['mod']
        except:
            mod = None
        result = feedparser.parse(url, etag, mod)
        try:
            etag = result['etag']
            dictEtag[url]['etag'] = etag
        except:
            etag = None
            pass
        try:
            mod = result['modified']
            dictEtag[url]['mod'] = mod
        except:
            mod = None
            pass
        if ( etag == None and mod == None ):
            print "%s: no etag or mod" % url
        if ( result['status'] == 304 ):
            continue
        items = unpack ( result )
        if len ( items ):
            container.open(None,DB_CREATE)
            for item in ( items ):
                doc = XmlDocument()
                doc.setContent(item)
                print container.putDocument(None, doc)
            container.close()
    except:
        print formatExceptionInfo()
        continue
The unpack() method bails on items found in the dictionary of item hashes,
and normalizes the data in a couple of ways. For example, the content
can show up in a couple of different ways in the Python dictionary returned
by feedparser.parse(). Likewise, there are a few different date formats.
For now I'm converting all content to this style:
<content><body>...</body></content>
And all dates to this style:
<date>2004/02/04</date>
Finally, the script pickles the dictionary of item hashes for next
time.
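That final step is only a few lines of pickle (file path below is hypothetical):

```python
import pickle, tempfile, os

hashValues = {"-125385335": "-125385335"}
path = os.path.join(tempfile.gettempdir(), "hashValues.pickle")

with open(path, "wb") as f:   # save for next time
    pickle.dump(hashValues, f)
with open(path, "rb") as f:   # reload on the next run
    restored = pickle.load(f)
print(restored == hashValues)  # True
```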
As before, the server is based on Python's BaseHTTPServer. Last
month, the server's query() method used libxslt's XPath engine. This
month it uses DB XML's queryWithXPath(). The complete search script
is here.
The default DB XML search context, which returns documents rather
than just matching nodes, is the right strategy here. Each found
document provides the channel, title, date, and link used to
contextualize one or more hits within the document.
The hits still need to be isolated from their containing
documents, though. So libxml2's xpathEval() does subqueries, within
each found document, to yield one or more contextualized
fragments.
The HTML output is accumulated as a Python list of date-content
pairs, and then sorted by date. The reason for this is that XPath
itself can't sort results.
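Because XPath 1.0 has no ordering facility, the sort happens in Python. With yyyy/mm/dd-style dates, a plain reverse string sort puts the newest entries first (a minimal sketch with made-up data):

```python
pairs = [
    ("2004/01/28", "<p>older entry</p>"),
    ("2004/02/04", "<p>newer entry</p>"),
    ("2004/02/01", "<p>middle entry</p>"),
]
# Reverse-chronological: yyyy/mm/dd strings sort the same way the dates do
pairs.sort(key=lambda pair: pair[0], reverse=True)
print([d for d, _ in pairs])  # ['2004/02/04', '2004/02/01', '2004/01/28']
```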
Here's the query() method:
def query(self,q):
    css = getCss()
    script = getScript()
    preamble = getPreamble(q)
    try:
        container = XmlContainer(None, db)
        container.open(None)
        xmlResults = container.queryWithXPath(None, q)
    except:
        print "QueryFailure"
        container.close()
        return "bad query: %s" % q
    if ( xmlResults.size() == 0 ):
        return """<html><head><title>XPath query of Jon's feeds</title>
<style>%s</style>
<script>%s</script></head>
<body>%s <b>no results</b></body></html> """ % (css, script, preamble)
    l = []
    for i in range(xmlResults.size()):
        result = xmlResults.next().asString(None)
        try:
            xmlFragment = libxml2.parseDoc( result )
        except:
            print "NotWellFormed"
            xmlFragment.freeDoc()
            container.close()
            return "bad query: %s" % q
        xpathChannel = "//item/@channel"
        xpathTitle = "//title"
        xpathDate = "//date"
        xpathLink = "//link"
        xpathHits = q
        try:
            channel = xmlFragment.xpathEval(xpathChannel)[0].content
            title = xmlFragment.xpathEval(xpathTitle)[0].content
            date = xmlFragment.xpathEval(xpathDate)[0].content
            link = urllib.unquote(xmlFragment.xpathEval(xpathLink)[0].content)
            hits = xmlFragment.xpathEval(xpathHits)
        except:
            print "CannotSearchWithinFoundDocument"
            xmlFragment.freeDoc()
            container.close()
            return "bad query: %s" % q
        try:
            frags = ''
            for j in range(len(hits)):
                frags = frags + '<div>' + hits[j].serialize() + \
                    '</div><hr align="left" width="20%"/>'
        except:
            print "CannotSerializeHits"
            xmlFragment.freeDoc()
            container.close()
            return "bad query: %s" % q
        xmlFragment.freeDoc()
        xhtml = '<p><a href="%s">%s</a> (<b>%s</b>, <b>%s</b>)</p> %s' % (
            link,
            title,
            channel,
            date,
            frags
        )
        l.append ( ( date, xhtml ) )
    container.close()
    l.sort( lambda x, y: cmp ( y[0], x[0] ) )
    xhtml = ''
    for i in range(len(l)):
        xhtml = xhtml + l[i][1]
    page = """
<html><head><title>XPath query of Jon's feeds</title><style>%s</style>
<script>%s</script></head><body>%s %s</body></html>
""" % (css, script, preamble, xhtml)
    return page
Not shown in this example is DB XML's indexing.
I've tinkered with it, but the database of blog items I've built so
far -- about 4000 documents, and 7MB -- doesn't really require
indexes yet. As the database grows, I plan to learn more about DB
XML's (somewhat bewildering) set of indexing strategies.
I'm also looking forward to using this growing repository to try
out other XML-aware databases, and to explore XQuery as well as
XPath implementations. It's been a bit of a revelation to see how
easily random HTML scooped up from RSS feeds can be XHTML-ized and
exposed to structured search. Can this approach scale to the level
required to create network effects? I'd love to see Technorati,
Feedster, and eventually Google give it a try.
#include <kcreddb.h>
Identity of the credential. Set to NULL if not specified.
Type of the credential. Set to KCDB_CREDTYPE_INVALID if not specified.
Name of the credential. Set to NULL if not specified.
If non-NULL, instructs whoever is handling the request that the credential thus obtained be placed in this credential set in addition to wherever it may place newly acquired credentials. Note that while this can be NULL if the new credential does not need to be placed in a credential set, it can not equal the root credential set.
An unspecified parameter. Specific credential types may specify how this field is to be used.
Incremented by one when this request is answered. Only one credential provider is allowed to answer a KMSG_KCDB_REQUEST message. Initially, when the message is sent out, this member should be set to zero. | http://secure-endpoints.com/netidmgr/developer/structtag__kcdb__cred__request.html | CC-MAIN-2017-13 | refinedweb | 140 | 60.72 |
How to H/V scrollable table with A LOT of items (ListView of ListViews)
Hi,
I'm trying to design a QML item that will contain thousands of items laid out as a grid.
I tried several approaches (TableView, GridView), but every time I either had issues trying to size my component to a rows * columns format, or ran into memory problems.
The best approach i found so far is the following:
import QtQuick 2.4
import QtQuick.Controls 1.3
import QtQuick.Window 2.2
import QtQuick.Dialogs 1.2

ApplicationWindow {
    title: qsTr("Hello World")
    width: 800
    height: 600
    visible: true

    Item {
        id: grid
        property int numRows: 1000
        property int numColumns: 1000
        property int cellSize: 35
        property int cellSpacing: 1
        anchors.fill: parent

        ScrollView {
            id: scrollView
            anchors.fill: parent
            anchors.centerIn: parent
            contentItem: columnsList

            ListView {
                id: columnsList
                anchors.fill: parent
                anchors.centerIn: parent
                orientation: ListView.Vertical
                spacing: grid.cellSpacing
                clip: true
                interactive: false
                cacheBuffer: 1
                model: grid.numRows

                delegate: ListView {
                    id: row
                    width: columnsList.width
                    height: grid.cellSize
                    orientation: ListView.Horizontal
                    spacing: grid.cellSpacing
                    clip: true
                    interactive: false
                    cacheBuffer: 1
                    model: grid.numColumns

                    delegate: Rectangle {
                        width: grid.cellSize
                        height: grid.cellSize
                        color: "green"
                    }
                }
            }
        }
    }
}
It would be perfect if it wasn't for the fact that the horizontal scrollbar isn't shown!
I also tried setting height and width manually, but that causes all the cells to be created, which is not acceptable.
Is it a good approach?
Is there a workaround to make the horizontal scrollbar appear? | https://forum.qt.io/topic/53367/how-to-h-v-scrollable-table-with-a-lot-of-items-listview-of-listviews | CC-MAIN-2018-43 | refinedweb | 247 | 54.08 |
Opened 7 years ago
Closed 6 years ago
Last modified 6 years ago
#2510 closed defect (fixed)
BatchModifyPlugin 0.2.0 fails to load in Trac-0.11
Description
From the log:
2008-02-01 15:53:48,108 Trac[loader] DEBUG: Loading batchmod.web_ui from /usr/local/lib/python2.5/site-packages/BatchModify-0.2.0-py2.5.egg 2008-02-01 15:53:48,111 Trac[loader] ERROR: Skipping "batchmod.web_ui = batchmod.web_ui": (can't import "No module named transform")
Is there a version mismatch or missing dependency from this?:
from genshi.filters.transform import Transformer
Attachments (0)
Change History (4)
comment:1 Changed 7 years ago by anonymous
- Cc robert.nadler@… added; anonymous removed
comment:2 Changed 7 years ago by garth@…
I've updated the BatchModifyPlugin page with a warning, and pointed back here.
comment:3 Changed 6 years ago by dgynn
- Resolution set to fixed
- Status changed from new to closed
Genshi 0.5 is now a dependency for Trac.
comment:4 Changed 6 years ago by anonymous
It may be a dependency now, but some of us have been running Trac 0.11 with Genshi 0.4.4 with no problems. This information was too hard for me to find; please put the notice back on the BatchModifyPlugin page.
Note: See TracTickets for help on using tickets.
You probably need to install Genshi 0.5dev... I had to do the following: | http://trac-hacks.org/ticket/2510 | CC-MAIN-2014-52 | refinedweb | 236 | 60.11 |
In this Instructable, together we will undertake the journey of programming the ESP8266-12E WIFI Development Board as an HTTP server. Using a web browser we will send instructions to the ESP8266-E12 to change it's behavior. Throughout the process, at each step we will also look under the hood to see what is going on inside. While doing this we will control the on board LED to turn ON and OFF with commands issued via a web browser.
In my previous Instructable Programming the ESP8266-12E using Arduino software/IDE I have described how to add the ESP8266-E12 board to the Arduino software/IDE and write your first sketch to blink the on board LED. The code is hardwired, hence on power up the ESP-8266-12E does what it is suppose to do - blink the LEDs. One cannot change the behavior of the board without reprogramming it.
In an ideal world of IoT, the ESP8266-12E should communicate with us over WiFi and over the internet. We should be able to control the ESP8266-12E over WiFi using protocols specific to the internet. We should be able to instruct the ESP8266-12E to do "Things" without having to reprogram it.
In your Windows environment open the Network and Sharing Center. You get to the Network and Sharing Center via the Control Panel.
Click on Connect to a network to open theConnect to a networkwindow. Leave it open, we will be referring to this window often. You may shut the Network and Sharing Center.
Step 2: Writing the SimpleWebServer Sketch
Connect your ESP8266-12E to your computer.
Open your Arduino program/IDE and paste the following code:
#include <ESP8266WiFi.h>
WiFiServer server(80); //Initialize the server on Port 80
void setup() {
WiFi.mode(WIFI_AP); //Our ESP8266-12E is an AccessPoint
WiFi.softAP("Hello_IoT", "12345678"); // Provide the (SSID, password)
server.begin(); // Start the HTTP Server
}
void loop() { }
This boiler plate code will be a part of every ESP8266 sketch we write. This code does the following:
- Includes the ESP8266 library ESP8266WiFi.h.
- Creates the instance "server" of the class "WiFiServer" listening on port 80. Notice "server" is a global instance.
- Set the mode of our ESP8266 to be an Access Point (AP).
- Provide the SSID and password. The password / passphrase has to be at least 8 characters long.
- Fire-up our server by calling the begin() method.
Save as SimpleWebServer sketch. Compile and upload the sketch to your ESP8266-12E.
Step 3: Checking Our Server
Open the "Connect to network" window. You should see our server with SSID "Hello_IoT" in the list. Select the Hello_IoT network, provide the password/passphrase and save it.
Hurrah! Our ESP8266-12E is operating as an HTTP server listening on port 80.
Step 4: Looking Under the Hood
Launch the Serial Monitor via Tools | Serial Monitor, or with the shortcut keys Ctrl-Shift-M.
Step 5: Get HTTP Server Information From the ESP8266-12E
Add the following piece of code in the end of the setup() function:
//Looking under the hood
Serial.begin(115200); //Start communication between the ESP8266-12E and the monitor window
IPAddress HTTPS_ServerIP= WiFi.softAPIP(); // Obtain the IP of the Server
Serial.print("Server IP is: "); // Print the IP to the monitor window
Serial.println(HTTPS_ServerIP);
Compile and load the sketch and watch the Monitor Window. You should see the default IP address of the ESP8266-12E as 192.168.4.1.
Step 6: Web Browser Connects/Talks to Server
It is time for a web browser to connect to our HTTP server.
Enter the following code within the loop() function:
WiFiClient client = server.available();
if (!client) {
return;
}
//Looking under the hood
Serial.println("Somebody has connected :)");
Compile and load to the ESP8266-E12.
Open a browser window, enter http://192.168.4.1 in the address bar, and hit enter.
Observe your Monitor window to check for a connection.
Step 7: Listen to What the Browser Is Sending to the HTTP Server
The web browser connects to the HTTP server and sends the request. The server receives the request and does something with it. Rather it can do a lot of different things.
Enter the following code in the loop() function.
//Read what the browser has sent into a String class and print the request to the monitor
String request = client.readString();
//Looking under the hood
Serial.println(request);
Compile and upload to the ESP8266-12E. Enter the following in the address field of your browser: http://192.168.4.1/PARAM
The browser sends a GET request to the server. Notice "/PARAM" following the GET request. Off all the text sent we are only interested in the first line of the request. Thus we replace the code
String request = client.readString();
with
String request = client.readStringUntil('\r');
Step 8: Turning the LED ON/OFF Through the Web Browser
We are ready to turn the LED on GPIO16 ON/OFF through commands given via the web browser.
Firstly initialize the digital port GPIO16 as an output port and keep the initial state of the LED ON.
At the bottom of setup() function add the following lines of code:
pinMode(LED_PIN, OUTPUT); //GPIO16 is an OUTPUT pin;
digitalWrite(LED_PIN, LOW); //Initial state is ON
At the bottom of loop() function add the following lines of code:
// Handle the Request
if (request.indexOf("/OFF") != -1) {
    digitalWrite(LED_PIN, HIGH);
} else if (request.indexOf("/ON") != -1) {
    digitalWrite(LED_PIN, LOW);
}
In the address bar of your browser, type the following URL: http://192.168.4.1/OFF
The LED on the ESP8266-12E will turn OFF.
Then type the following URL: http://192.168.4.1/ON
The LED on the ESP8266-12E will turn ON.
Step 9: Let Us Get a Little Fancy
It is cumbersome to change the URL each time we need to turn the LED ON or OFF. Let us add some HTML code and some buttons.
Copy this code and add it at the bottom of the Loop() function:
// Prepare the HTML document to respond and add buttons:
INSTRUCTABLES MANGLES THE HTML CODE
//Serve the HTML document to the browser.
client.flush(); //clear previous info in the stream
client.print(s); // Send the response to the client
delay(1);
Serial.println("Client disonnected"); //Looking under the hood
Compile and load.
HAPPY IoTing!!! | http://www.instructables.com/id/Programming-a-HTTP-Server-on-ESP-8266-12E/ | CC-MAIN-2017-17 | refinedweb | 1,037 | 66.33 |
[Update: see below]. A few years ago, Eric van der Vlist put together a proof of concept XML schema language called Examplotron. The clever part of Examplotron is that the schema for a given XML document is that document itself; a document is its own schema. This allows schemas to be designed by writing down example documents (examplotron, get it?) which can then be generalised automatically to produce a RELAX NG schema for those documents and other documents like them. Clever. Now, what if XPath worked like that?
XPath can be used to write patterns or selectors that match or select parts of an XML document for further processing. However, XPath itself is not XML, it is a textual query language that combines the syntax of UNIX file paths with SQL-like predicates. Taking inspiration from Examplotron, we can try defining patterns to match fragments of XML by writing literal fragments of XML. For example, the pattern equivalent to the XPath
foo/bar/baz would be:
<foo> <bar> <baz/> </bar> </foo>
This pattern seems fairly obvious, although it is a bit more verbose than the original XPath. How about a pattern to match a
<h1> element followed by a
<p> element? We might use this to find the first paragraph of an XHTML document, perhaps to apply some special formatting to it.
<h1/> <p/>
In this case, the pattern looks a little clearer than the equivalent XPath,
h1/following-sibling::p. On a brief tangent, CSS3 actually has a direct sibling selector to express this:
h1 + p. We have implemented this selector in Prince and it is quite handy for situations like this.
While the above pattern is easy to understand, it’s not clear what we can do with it once it matches some elements. The XPath and the CSS selector match only one element, the
<p>, which can then be transformed by an XSLT template or styled with CSS rules. The pattern on the other hand matches a fragment of XML consisting of two consecutive elements, just like the regular expression
ab matches two consecutive characters. As with regular expressions, the obvious thing to do once you’ve found something that matches the pattern is to replace it with something else:
<ex:search> <h1/> <p/> </ex:search> <ex:replace> <h1/> <p class="first"/> </ex:replace>
Now we have two patterns: we search the document for an instance of the first pattern, and replace it with an instance of the second pattern. This is exactly like the regular expression substitution
s/ab/ac/ which will replace
ab with
ac, except that instead of matching characters we’re matching XML elements.
There are some details that I’ve skimmed over in this example, such as what if the elements have existing content or attributes, do we copy them across implicitly? What if we want to match an element that doesn’t have a particular attribute? Some patterns cannot be written explicitly, and we would need to introduce some helper elements and attributes in a separate namespace to express what we want. For example, we might want to throw away paragraph elements that have no content:
<ex:search> <p> <ex:empty/> </p> </ex:search> <ex:replace> <!-- nothing --> </ex:replace>
To finish off, here is an example of a pattern for reversing author names in a citation, so that “Smith John” becomes “John Smith”:
<ex:search> <lastName/> <firstName/> </ex:search> <ex:replace> <firstName/> <lastName/> </ex:replace>
When I tried writing this in XSLT, this is the first solution that occurred to me:
<xsl:template match="lastName[following-sibling::firstName]">
  <xsl:copy-of select="following-sibling::firstName"/>
  <xsl:copy-of select="."/>
</xsl:template>
<xsl:template match="firstName[preceding-sibling::lastName]">
  <!-- nothing -->
</xsl:template>
Update
Boy, did I get that wrong. As Andrew Houghton points out below, the XPath that I gave for selecting a
<p> element immediately following an
<h1> element is insufficiently specific, as it will match all of the
<p> elements that occur after the
<h1>, not just the first. This is equivalent to the CSS indirect sibling selector,
h1 ~ p, not the direct sibling selector,
h1 + p.
The obvious but incorrect solution is to add a position predicate to restrict the XPath to select only the first paragraph, like this:
h1/following-sibling::p[1]. However, this will select a
<p> element that does not directly follow the
<h1>, for example the paragraph in this document, which follows a table:
<h1>Heading</h1> <table>...</table> <p>This paragraph does not immediately follow the heading.</p>
To select only paragraphs that immediately follow headings we need to select the first element that follows the heading and then check that it is a paragraph, like this:
h1/following-sibling::*[1]/self::p. Great! It is a lot more verbose than the CSS
h1 + p selector, but at least it should work now.
Except that in XSLT, it doesn’t work. You see, I forgot that XSLT pattern syntax is a restricted subset of XPath, and can only use the child and attribute axes. This means that you cannot write an XSLT template that matches this XPath:
<xsl:template match="h1/following-sibling::*[1]/self::p"> ... </xsl:template>
Instead you need to write it the other way around and place the sibling axis inside a predicate:
<xsl:template match="p[preceding-sibling::*[1]/self::h1]"> ... </xsl:template>
Finally, the XSLT templates that I wrote to swap the order of
lastName and
firstName elements also need similar adjustments to their XPaths:
<xsl:template <xsl:copy-of <xsl:copy-of </xsl:template> <xsl:template <!-- nothing --> </xsl:template>
As always, mistakes are the portals of discovery. So, the challenge stands: can anyone write an XSLT transform to swap the order of two elements that is simpler, cleaner, and more obvious than this monstrosity?
Two points on your article. The first is on your h1 p example. I believe that you have to be careful with the XPath expression h1/following-sibling::p. This XPath expression will return multiple p nodes which you might not have expected. For example if you had the following siblings: h1 p table p, the XPath expression would return both p nodes. You probably wanted to constrain that XPath expression to h1/following-sibling::*[1]/self::p, to insure that you only match an h1 node and its immediately following sibling that is a p node.
The second point follows along the same lines with your XSLT solution:
match="lastName[following-sibling::*[1]/self::firstName]"
select="following-sibling::*[1]/self::firstName"
match="firstName[preceding-sibling::*[1]/self::lastName]"
FYI, in case you might be thinking you could replace h1/following-sibling::*[1]/self::p with h1/following-sibling::p[1] consider the following sibling node set: h1 table p
h1/following-sibling::*[1]/self::p returns an empty node set
h1/following-sibling::p[1] returns p
Here is my take on representing Xpath expressions in XML.
If the goal is to improve readability then I belive that a better approach would be to provide a graphical representation of the expression, akin to those used for XML schema in various editors. Of course, translating xpath notation into XML would provide flexibility in this area.
The downside of using an XML version of an xpath as documentation is verbosity (because a "complete" XML schema for xpath would be fairly complex).
Footnote: As an example, the Axis diagrams in Michael Kays book are IMHO worth more than a thousand words.
<xsl:template
<xsl:element
<xsl:copy-of
<xsl:copy-of
</xsl:element>
</xsl:template>
CKnell, that's a good solution, although it might be better to use xsl:copy to create the element, in case it is in a namespace:
?
Using the name() function, namespace prefixes are copied (unlike local-name()), so I don't understand the objection.
I'd like to amend my last comment. Please change, "In the proferred source document, firstName and lastName are children of ex:search." to, "In the proferred source document, firstName and lastName are children of ex:search and ex:replace."
Sorry, I wasn't clear. The search/replace XML is an example of a pattern that could be applied to another XML document, one like this perhaps:
. | http://www.oreillynet.com/xml/blog/2007/03/pattern_matching_with_xml_1.html | crawl-002 | refinedweb | 1,351 | 58.01 |
#include <GEO_PrimMesh.h>
Definition at line 23 of file GEO_PrimMesh.h.
NOTE: The constructor should only be called from subclass constructors.
Definition at line 28 of file GEO_PrimMesh.h.
NOTE: The destructor should only be called from subclass destructors.
Definition at line 34 of file GEO_PrimMesh.h.
Definition at line 88 of file GEO_PrimMesh.h.
Evaluate the position for the given parametric coordinates (with the given derivatives). Return 0 if successful, or -1 if failure. The default implementation returns {0,0,0,0};
Reimplemented from GEO_Primitive.
This method returns the JSON interface for saving/loading the primitive If the method returns a NULL pointer, then the primitive will not be saved to geometry files (and thus cannot be loaded).
Implements GA_Primitive.
All subclasses should call this method to register the mesh intrinsics.
Definition at line 94 of file GEO_PrimMesh.h.
Definition at line 135 of file GEO_PrimMesh.h. | http://www.sidefx.com/docs/hdk/class_g_e_o___prim_mesh.html | CC-MAIN-2018-43 | refinedweb | 148 | 60.82 |
On Wed, 2009-01-07 at 17:41 -0800, David Daney wrote:
> This is a preliminary patch to allow the kernel to run in mapped
> address space via a wired TLB entry. Probably in a future version I
> would factor out the OCTEON specific parts to a separate patch.
Yes, please do the factoring.
> diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
> index 780b520..d9c46a4 100644
> --- a/arch/mips/Kconfig
> +++ b/arch/mips/Kconfig
> @@ -1431,6 +1431,23 @@ config PAGE_SIZE_64KB
>
> endchoice
>
> +config MAPPED_KERNEL
> + bool "Mapped kernel"
> + Select this option if you want the kernel's code and data to
> + be in mapped memory. The kernel will be mapped using a
> + single wired TLB entry, thus reducing the number of
> + available TLB entries by one. Kernel modules will be able
> + to use a more efficient calling convention.
This is currently only supported on 64-bit processors, so this should
depend on CONFIG_64BIT.
> diff --git a/arch/mips/Makefile b/arch/mips/Makefile
> index 0bc2120..5468f6d 100644
> --- a/arch/mips/Makefile
> +++ b/arch/mips/Makefile
...
> @@ -662,7 +670,7 @@ OBJCOPYFLAGS += --remove-section=.reginfo
>
> CPPFLAGS_vmlinux.lds := \
> $(KBUILD_CFLAGS) \
> - -D"LOADADDR=$(load-y)" \
> + -D"LOADADDR=$(load-y)" $(PHYS_LOAD_ADDRESS) \
> -D"JIFFIES=$(JIFFIES)" \
> -D"DATAOFFSET=$(if $(dataoffset-y),$(dataoffset-y),0)"
It seems more consistent to just eliminate PHYS_LOAD_ADDRESS entirely
and add a line here reading:
-D"PHYSADDR=0x$(CONFIG_PHYS_LOAD_ADDRESS)" \
>
> diff --git a/arch/mips/include/asm/mach-cavium-octeon/kernel-entry-init.h
> b/arch/mips/include/asm/mach-cavium-octeon/kernel-entry-init.h
> index 0b2b5eb..bf36d82 100644
> --- a/arch/mips/include/asm/mach-cavium-octeon/kernel-entry-init.h
> +++ b/arch/mips/include/asm/mach-cavium-octeon/kernel-entry-init.h
> @@ -27,6 +27,56 @@
> # a3 = address of boot descriptor block
> .set push
> .set arch=octeon
> +#ifdef CONFIG_MAPPED_KERNEL
> + # Set up the TLB index 0 for wired access to kernel.
> + # Assume we were loaded with sufficient alignment so that we
> + # can cover the image with two pages.
This seems like a pretty big assumption. Possible ways to handle this:
o Generalize to handle n pages.
o Hang in a loop here if the assumption is not met
o Check later on whether the assumption was true and print a message.
I'm not really sure how to do this last one, though.
> + dla v0, _end
> + dla v1, _text
> + dsubu v0, v0, v1 # size of image
> + move v1, zero
> + li t1, -1 # shift count.
> +1: dsrl v0, v0, 1 # mask into v1
> + dsll v1, v1, 1
> + daddiu t1, t1, 1
> + ori v1, v1, 1
> + bne v0, zero, 1b
> + daddiu t2, t1, -6
> + mtc0 v1, $5, 0 # PageMask
> + dla t3, 0xffffffffc0000000 # kernel address
I think this should be CKSSEG rather than a magic constant.
> + dmtc0 t3, $10, 0 # EntryHi
> + bal 1f
> +1: dla v0, 0x7fffffff
Another magic constant; don't know if there is already a define that
really applies, though. Perhaps add something to asm-mips/inst.h?
> diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
> index 1055348..b44bcf8 100644
> --- a/arch/mips/kernel/traps.c
> +++ b/arch/mips/kernel/traps.c
> @@ -49,6 +49,8 @@
> #include <asm/stacktrace.h>
> #include <asm/irq.h>
>
> +#include "../mm/uasm.h"
This looks like it would be a good idea to consider moving uasm.h to
include/asm-mips, or possibly splitting it into two header files, one of
which would move to include/asm-mips.
> +
> extern void check_wait(void);
> extern asmlinkage void r4k_wait(void);
> extern asmlinkage void rollback_handle_int(void);
> @@ -1295,9 +1297,18 @@ void *set_except_vector(int n, void *addr)
>
> exception_handlers[n] = handler;
> if (n == 0 && cpu_has_divec) {
> - *(u32 *)(ebase + 0x200) = 0x08000000 |
> - (0x03ffffff & (handler >> 2));
> - local_flush_icache_range(ebase + 0x200, ebase + 0x204);
> + unsigned long jump_mask = ~((1 << 28) - 1);
The 28 is a magic constant specifying the number of bits of the offset
in a jump instruction. Perhaps define jump_mask in asm-mips/inst.h since
it is related to the instruction format?
> + u32 *buf = (u32 *)(ebase + 0x200);
> + unsigned int k0 = 26;
You are using k0 as a constant by defining it as a variable. You could
just have a #define here, but my suggestion is that it would be better
to add defines to asm-mips/inst.h (something like "#define REG_K0 26"
might be suitable for meeting this particular need)
> + if((handler & jump_mask) == ((ebase + 0x200) & jump_mask)) {
> + uasm_i_j(&buf, handler & jump_mask);
> + uasm_i_nop(&buf);
> + } else {
> + UASM_i_LA(&buf, k0, handler);
> + uasm_i_jr(&buf, k0);
> + uasm_i_nop(&buf);
> + }
> + local_flush_icache_range(ebase + 0x200, (unsigned long)buf);
> }
> return (void *)old_handler;
> }
> /*
> @@ -1670,9 +1683,9 @@ void __init trap_init(void)
> return; /* Already done */
> #endif
>
> - if (cpu_has_veic || cpu_has_vint)
> + if (cpu_has_veic || cpu_has_vint) {
> ebase = (unsigned long) alloc_bootmem_low_pages(0x200 +
> VECTORSPACING*64);
> - else {
> + } else {
Checkpatch will complain about this, and it doesn't really add value to
make the change.
> diff --git a/arch/mips/mm/page.c b/arch/mips/mm/page.c
> index 1417c64..0070aa0 100644
> --- a/arch/mips/mm/page.c
> +++ b/arch/mips/mm/page.c
> @@ -687,3 +687,9 @@ void copy_page(void *to, void *from)
> }
>
> #endif /* CONFIG_SIBYTE_DMA_PAGEOPS */
> +
> +#ifdef CONFIG_MAPPED_KERNEL
> +/* Initialized so it is not clobbered when .bss is zeroed. */
> +unsigned long phys_to_kernel_offset = 1;
> +unsigned long kernel_image_end = 1;
> +#endif
Clearly there is some magic happening here, but the such wizardry needs
more documentation. I can deduce that these must be overwritten before
we get to kernel_entry; who sets these?
I don't know for sure what kernel_image_end is, but I am guessing that
it is the physical address of the end of the kernel. If so, you can
eliminate it as a piece of magic by calculating it at run time as the
sum of the address of the start of the kernel and the size.
--
David VomLehn, dvomlehn@cisco.com
gmock is a unit testing and mocking framework available from Google. Getting it set up and working correctly with a VS 2013 project takes a little bit of ceremony, however, and the errors one can stumble upon are not always the most helpful.
Get gmock compiled
Download the latest version of gmock (here we use v1.7.0
)
- Open up the Visual Studio solution (gmock-1.7.0\msvc\2010\gmock.sln)
- Right click gmock > Properties > C/C++ > Code Generation > Runtime Library > Multi-threaded Debug
- Build the solution
- Ensure that gmock-1.7.0\msvc\2010\Debug contains both gmock.lib and gmock_main.lib. You will link these into your test project.
Configure a test project
- Open your Visual Studio solution (or create a new one). You should not create a new project within gmock-1.7.0\msvc\2010\gmock.sln
- Add New Project > Visual C++ > Win32 > Win32 Console Application. Name the project whatever you like, but I suggest XXXXXTest, where XXXXX is the name of the project you are testing.
- Right click on your newly created test project > Properties > Configuration Properties > VC++ Directories
- Include Directories > Add the full path to your gmock include and gtest include (e.g. C:\Users\jalospinoso\gmock-1.7.0\gtest\include and C:\Users\jalospinoso\gmock-1.7.0\include). Note: of course, you will want to add the /include folder for your project under test as well. Add this project as a dependency to your test project.
- Library Directories > Add the full path to your gmock/gtest artifacts (e.g. C:\Users\jalospinoso\gmock-1.7.0\msvc\2010\Debug)
- Configuration Properties > Linker > Input > Additional Dependencies
- Add gmock_main.lib and gmock.lib. Note also that you may want to include the name of the project under test here.
- C/C++ > Code Generation > Runtime Library > Multi-threaded Debug
Add a test
In your test project, create a new source file called HelloTest.cpp with the following contents:
#include "gmock/gmock.h" TEST(HelloTest, AssertsCorrectly) { int value = 42; ASSERT_EQ(42, value); }
Compile your solution. In your output folder, you will see a new artifact with a name corresponding to your new test project. This executable is a console application that runs all of the tests in your source:
> MyTest.exe
Running main() from gmock_main.cc
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from HelloTest
[ RUN      ] HelloTest.AssertsCorrectly
[       OK ] HelloTest.AssertsCorrectly (0 ms)
[----------] 1 test from HelloTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[  PASSED  ] 1 test.
opencv2/core/utility.hpp not found
Hello, I have been very interested in studying Facial Recognition and a good friend of mine had me look into OpenCV. I can get most of the tutorials to work while learning to use OpenCV, but when I need to use
#include <opencv2/core/utility.hpp>
I receive an error saying it cannot be found. I looked into the folder and it's not there, and this is from a fresh copy of OpenCV off of GitHub. Where or how can I fix not having utility.hpp?
which opencv version ? (lots of changes going on on master)
opencv/modules/core/include/opencv2/core/utility.hpp, not there ? | https://answers.opencv.org/question/14755/opencv2coreultilityhpp-not-found/?sort=votes | CC-MAIN-2021-43 | refinedweb | 114 | 67.25 |
#include <StelFileMgr.hpp>.
Constructor.
Destructor.
Get a set of all possible files/directories in any Stellarium search directory.
Get a vector of strings which describes the current search paths.
Set the search paths.
Check if a path exists.
Note it might be a file or a directory..
Check if a path exists and is a directory.
Return the size of the file at the path.
Make a directory.
Convenience function to find the parent directory of a given path May return relative paths if the parameter is a relative path.
Sets the user directory.
This updates the search paths (first element).
This is the directory into which screenshots will be saved. It is $HOME on Linux, BSD, Solaris etc. It is the user's Desktop on MacOS X (??? - someone please verify this). It is ??? on Windows.
Sets the screenshot directory.
This is set to platform-specific values in the StelFileMgr constructor, but it is settable using this function to make it possible to implement the command-line option which specifies where screenshots go.
Get the directory for locale files (i18n).
#include <sys/table.h>
int table(id, index, addr, nel, lel)
long id;
long index;
void *addr;
long nel;
u_long lel;
id
       The ID of the system table that contains the element or elements.
index
       The index of an element within the table.
addr
       The address of a struct (or a struct array) of the appropriate type to copy the element values to (on examine) or from (on update). The various structure layouts are described in /usr/include/sys/table.h.
nel
       A signed number that specifies how many elements to copy and in which direction. A positive value copies the elements from the kernel to addr. A negative value copies the elements from addr to the kernel.
lel
       The expected size of a single element.
The table() interface is used to examine or update one or more elements in the system table. The system table is specified by id and the starting element is specified by index.
The table() interface copies the element value or values to or from the specified addr. The nel parameter specifies the number of elements to copy, starting from index. A positive value indicates an examine operation. The elements are copied from the kernel to addr. A negative value indicates an update operation. The elements are copied from addr to the kernel.
The lel parameter specifies the expected element size. If multiple elements are specified, successive addresses are calculated for addr by incrementing it by lel for each element copied. If the size of a given element is larger than lel, table() truncates excess data on an update (from addr to the kernel) and stores only the expected size on an examine (from the kernel to addr). If the size of a given element is smaller than lel, table() copies only the valid data on an update and pads the element value on an examine.
The table() interface guarantees that an update operation will not change the offset and size of any field within an element. New fields are added only at the end of an element.
The table() interface returns a count of the elements examined or updated. The id TBL_PROCINFO allows you to determine the actual number of elements in a table before requesting any data; call table() with lel set to zero (0) and nel to the maximum positive integer.
The id parameter must specify one of the following tables, each of which has a structure in <sys/table.h> unless otherwise noted.

The controlling terminal device number table: the index is by process ID, and exactly one element can be requested. If the process ID is zero (0), the current process is indexed. Only 0 and the current process ID are supported. The element is of type dev_t as defined in <sys/types.h>. This table is examine only; it cannot be updated.

The U-area table: the index is by process ID. See the user.h header file for the (pseudo) struct user that is returned.

The system load average vector (pseudo) table: the index must be zero (0) and exactly one element can be requested.
A positive return value indicates that the call succeeded for that number of elements. A return value of -1 indicates that an error occurred, and an error code is stored in the global location errno.
The addr parameter specifies an invalid address.
The table specified by id is not defined.
The index value is not valid for the specified table.
The specified table allows only an index of the current process ID with exactly one element; some other index or element number was specified.
An element length of zero (0) was supplied for the TBL_ARGUMENTS table.
An attempt was made to update an examine-only table.
An attempt was made to change the maximum number of processes or the account ID, and the caller was not the superuser.
The process specified by a process ID index cannot be found.
acct(2)
A decorator function allows you to add to or modify an existing function without actually modifying its structure.
This is a very basic explanation of a decorator function.
But the question here is why should we use a decorator function when we can just change a function as required?
Well, you don't always have just one single function that you might want to modify. Suppose you have a bunch of functions in your project and you want to make a specific change to all of them as part of your requirements.
Now it would be very tedious to find and modify each and every function and also test each one of them to make sure it does not break your application.
For that you have decorator functions, with which you can make the change without actually altering any of the code inside the functions. Decorator functions have many use cases but are typically used when you want to make minor changes to an existing set of functions.
Let's have a look at a simple example:
def deco_func(val):
    def wrapper():
        print("Trigger")
        print(val)
        print("Kill")
        print("------------------------------")
    return wrapper

holder = deco_func("Hello Python")
print(holder)
holder()
Output:
<function deco_func.<locals>.wrapper at 0x7efcdc4e8550>
Trigger
Hello Python
Kill
------------------------------
The above example is possible in Python because functions are treated as first-class objects, which means that functions can be passed around and used as parameters/arguments.
Here are some quick pointers to keep in mind:
- A function is an instance of the Object type.
- A function can be stored in a variable.
- A function can be passed as a parameter.
- A function can return a function.
Now, let's make some minor changes in the above code:
def deco_func(func):
    def wrapper():
        print("Trigger")
        func()
        print("Kill")
        print("------------------------------")
    return wrapper

def func1():
    print("This is Function 1")

def func2():
    print("This is Function 2")

def func3():
    print("This is Function 3")

func1 = deco_func(func1)
func2 = deco_func(func2)
func3 = deco_func(func3)

print(func1)
func1()
func2()
func3()
Output:
<function deco_func.<locals>.wrapper at 0x7f6960526820>
Trigger
This is Function 1
Kill
------------------------------
Trigger
This is Function 2
Kill
------------------------------
Trigger
This is Function 3
Kill
------------------------------
In the above code we have updated the deco_func and now it accepts a function as an argument.
We have also created three functions that just print a statement.
Now, the line func1 = deco_func(func1) stores the wrapper returned by deco_func (with the original func1 passed in) back in the variable func1. Hence, we can now call func1() to get the desired result.
By seeing the above piece of code, you can now figure out a bit, how decorator function works behind the scenes.
So, whenever you are creating a decorator function you have to create this wrapper function/functionality.
The outer function takes the function itself as an argument and the inner wrapper function calls the actual function upon which you are making the modifications.
Below is an example of the above code as a decorator function syntax in Python:
def deco_func(func):
    def wrapper():
        print("Trigger")
        func()
        print("Kill")
        print("------------------------------")
    return wrapper

@deco_func
def func1():
    print("This is Function 1")

@deco_func
def func2():
    print("This is Function 2")

@deco_func
def func3():
    print("This is Function 3")

func1()
func2()
func3()
Now, these are sort of dumb functions that just print a statement.
What happens when some of the functions require one or more parameters to be passed and others don't? Or when some functions return a value and some don't?
Below is an example where one or more functions require arguments/parameters to be passed, or happen to return a value when called. The whole point of decorator functions is to be usable with any function, and for that we use the unpack (splat) operators, i.e. *args and **kwargs:
def deco_func(func):
    def wrapper(*args, **kwargs):
        print("Trigger")
        res = func(*args, **kwargs)
        print("Kill")
        print("------------------------------")
        return res
    return wrapper

@deco_func
def func1(val):
    print("This is Function 1")
    print("Function 1 value: ", val)

@deco_func
def func2(val1, val2):
    print("This is Function 2")
    return val1 + val2

@deco_func
def func3():
    print("This is Function 3")

func1(20)
result2 = func2(45, 40)
func3()
print("Function 2 sum value: ", result2)
Output:
Trigger
This is Function 1
Function 1 value: 20
Kill
------------------------------
Trigger
This is Function 2
Kill
------------------------------
Trigger
This is Function 3
Kill
------------------------------
Function 2 sum value: 85
As you can see, the splat operators help the decorator function accept any number of parameters (or none), and the res variable stores the returned value of the decorated function to serve its purpose.
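One refinement worth adding to the wrapper above: a plain wrapper hides the original function's name and docstring (every decorated function reports itself as "wrapper"). The standard library's functools.wraps fixes that. A small sketch (the function names here are just examples):

```python
import functools

def deco_func(func):
    @functools.wraps(func)          # copy __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        print("Trigger")
        res = func(*args, **kwargs)
        print("Kill")
        return res
    return wrapper

@deco_func
def greet(name):
    """Say hello."""
    return "Hello " + name

print(greet.__name__)   # prints "greet" instead of "wrapper"
print(greet.__doc__)    # prints "Say hello."
```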
So, that's all about decorator functions. I know it's a bit much to take in at a single reading, but I suggest you open your Python console or Jupyter notebook and follow each line of the code snippets to understand the functionality, and it will be a piece of cake for you!
In the previous part of this series I looked at how to use mocking effectively when testing parent-child component relationships.
But that isn’t the only way of dealing with parent-child components, and component hierarchies in general. In this part I’ll look at testing two components in the same test suite. So far I’ve found this useful when dealing the Svelte’s context API.
All of the code samples in the post are available in this demo repo:
dirv/svelte-testing-demo: A demo repository for Svelte testing techniques
An example
Suppose you create a Menu component and a MenuItem component. Menu is responsible for opening and closing a list of items, and MenuItem represents a single item in that list. Crucially, it's the MenuItem's responsibility to close the Menu when it is selected.
Here’s
Menu. I’ve simplified this by removing styles and by including only functionality that’s relevant to this post.
<script context="module"> export const key = {}; </script> <script> import { setContext } from 'svelte'; let open = false; const toggleMenu = () => open = !open; setContext(key, { toggleMenu }); </script> <button on:click={toggleMenu}Menu</button> {#if open} <div on:click={toggleMenu}> <slot /> </div> {/if}
And here’s
MenuItem (again, this is a simplified implementation).
<script>
  import { getContext, tick } from "svelte";
  import { key } from "./Menu.svelte";

  export let action;

  const { toggleMenu } = getContext(key);

  const closeMenuAndAct = async (event) => {
    event.stopPropagation();
    toggleMenu();
    await tick();
    action();
  };
</script>

<button on:click={closeMenuAndAct}>
  <slot />
</button>
Both of these components are coupled in two ways.

First, Menu uses <slot> to display all its children and it's expected that some of these children will be instances of MenuItem.

Second, both components use the context API to share the toggleMenu function. MenuItems can communicate with the parent by invoking the toggleMenu function, which tells the Menu it's time to close.
Could we programmatically call the context API to test Menu and MenuItem independently?

As far as I can tell, no we can't. In order to do this we'd need to manipulate the context API. For example, for the MenuItem we'd need to make available a spy toggleMenu function that we could then assert on to check it was invoked.
it("invokes the toggleMenu context function", () => { // ? set up context here ? });
Trouble is, there’s no supported way of calling the context API outside of components themselves. We could probably do it by using the
component.$$ property in the way we did with bound values in the last part, but that’s at risk of breaking in future.
Besides, these two components are meant to be used together, so why not test them together?
This is one place where React has Svelte beat!
Because React allows inline JSX, we could simply write a test like this:
const menuBox = () => container.querySelector(".overlay");

it("closes the menu when clicking the menuItem", () => {
  mount(<Menu><MenuItem /></Menu>);
  click(menuItem());
  expect(menuBox()).not.toBeNull();
});
Unfortunately Svelte components must be defined in their own files, so we can’t do little inline hierarchies like this.
Solution: define a test component for each test
In the test repo I have a directory spec/components where I keep little hierarchies of components for specific tests. Sometimes the same test component can be used for multiple tests.

Here's spec/components/IsolatedMenuItem.svelte:
<script>
  import Menu from "../../src/Menu.svelte";
  import MenuItem from "../../src/MenuItem.svelte";

  export let spy;
</script>

<Menu>
  <img alt="menu" slot="icon" src="menu.png" />
  <MenuItem action={spy}>Item</MenuItem>
</Menu>
There are a couple of tests I can write with this. First, the test that checks the menu is closed.
Here’s
spec/Menu.spec.js with just the first test—notice that I named the file after the parent component, but it’s testing both the parent and child.
import { tick } from "svelte";
import { mount, asSvelteComponent } from "./support/svelte.js";
import Menu from "../src/Menu.svelte";
import IsolatedMenuItem from "./components/IsolatedMenuItem.svelte";

const menuIcon = () => container.querySelector(".icon");
const menuBox = () => container.querySelector("div[class*=overlay]");

const click = async formElement => {
  const evt = document.createEvent("MouseEvents");
  evt.initEvent("click", true, true);
  formElement.dispatchEvent(evt);
  await tick();
  return evt;
};

describe(Menu.name, () => {
  asSvelteComponent();

  it("closes the menu when a menu item is selected", async () => {
    mount(IsolatedMenuItem);
    await click(menuIcon());
    await click(menuBox().querySelector("button"));
    expect(menuBox()).toBe(null);
  });
});
Notice how similar this is to the React version above. The difference is just that the component exists within its own file instead of being written inline.
(By the way, I think this is the first time in the series that I've shown any DOM events... click is something that could be cleaned up a little. We'll look at that in the next post!)
The second test uses the spy prop of IsolatedMenuItem.
it("performs action when menu item chosen", async () => { const action = jasmine.createSpy(); mount(IsolatedMenuItem, { spy: action }); await click(menuIcon()); await click(menuBox().querySelector("button")); expect(action).toHaveBeenCalled(); });
For this test component I named the prop spy, which is used to set the action prop on MenuItem. Perhaps I should have kept its name as action. The benefit of naming it spy is that it's clear what its purpose is. But I'm still undecided if that's a benefit or not.
Use with svelte-routing
I’ve also used this with
svelte-routing when I defined my own wrapped version of
Route. These classes also use the context API so it’s similar to the example shown above.
In the next (and final!) post in this series, we'll look at raising events like the one we saw here, the click event, and how we can test components together with more complex browser APIs like setTimeout.
Generic storage class template for an RGBA color representation storing four floating-point elements.
Used as base class for the mi::math::Color class.
Use the mi::math::Color class in your programs and this storage class only if you need a POD type, for example for parameter passing.
All elements are usually in the range [0,1], but they may lie outside that range and they may also be negative.
This class provides storage for four elements of type mi::Float32. These elements can be accessed as data members named r, g, b, and a. For array-like access of these elements, their order in memory is rgba.
This class contains only the data and no member functions. See mi::math::Color for more.
#include <mi/math/color.h>
Alpha value, 0.0 is fully transparent and 1.0 is opaque; value can lie outside that range.
Blue color component.
Green color component.
Red color component. | https://raytracing-docs.nvidia.com/mdl/api/structmi_1_1math_1_1Color__struct.html | CC-MAIN-2021-49 | refinedweb | 172 | 61.43 |
DataMelt IDE
DataMelt is an environment for scientific computation, data analysis and data visualization designed for scientists, engineers and students. The program incorporates many open-source software packages into a coherent interface using the concept of dynamic scripting. It includes many components. The first, and most obvious, is the DataMelt IDE (integrated development environment), which can be used as a powerful programming editor for desktops running Windows, Linux, OS/2 or any other system which can run Java.
This section describes DataMelt IDE for desktops and any other computers with large screens. If you need a version of DataMelt IDE for small devices with a typical screen size 600x400, you can use an alternative IDE optimised for small screens. In this case, please go to the section DMelt:General/4_Small_Screen_Devices.
DataMelt examples
You can get some ideas about the capabilities of DataMelt using DataMelt examples ([Help]->[Online examples]). Go to YouTube how to work with DataMelt:
- DataMelt introduction (YouTube video)
- DataMelt Python/Jython Examples (YouTube video)
- DataMelt Java help system (YouTube video)
The dialogue with online examples after selecting [Help]->[Online examples] is shown below:
The examples are created for all supported languages, which are Java, Python/Jython, Octave/Matlab, BeanShell, JRuby and Groovy. Icons marked with "F" show free examples that do not require DataMelt activation.
Jython scripts in the DataMelt
Use the DataMelt IDE in a similar way as any editor. It supports many programming languages: C/C++, JAVA, PHP, FORTRAN and many more. It is also specially designed for editing LaTeX files. It has several unique features, such as:
- Java-based editor with on-fly spell checking
- Color syntax highlighting for all classes and methods of ROOT
- Color syntax highlighting for many programming languages
- Multiple clipboards
- Multiple Eclipse-like bookmarks
- Linux/Unix - like commands cp, mv, rm, cat etc. are supported.
- Extensive LaTeX support: a structure viewer, build-in Bibtex manager, LatexTools
- LaTeX equations can be inserted into Jython scripts, mixing articles with Jython code
- A document structure viewer for fast navigation
Getting started with DataMelt
The script dmelt.sh (Linux/UNIX/Mac) or dmelt.bat (any Windows) starts the DataMelt IDE from the file jehep.jar, called jeHEP. Run one of these scripts depending on your platform. You will see DMelt IDE window.
A good way to see what DataMelt can do is to run the JHPlot examples. Go to [Help] and then [Examples]. Select any Jython script from the categories and run it. You may also open a file and then click on the icon
to execute it. The examples are located in the "macros/examples" folder of your installation. The number of examples is about 20.
For your convenience, one can run any script using a "smart" button located in the bottom-right corner of the IDE.
If some file is already loaded, just click on the green button to process this file.
For advanced users, you can access more than 600 examples from an online database. Select [Help]-[Online examples]. Some examples are marked with the red letter "F", i.e. "free" (GNU licensed) examples. In order to access all online examples, you should activate DataMelt via [Help]-[Activate]. Here you should enter your DMelt username and password. Activation should be done via the [1] link.
Of course one can use the editor to work with Java, NetBeans or even LaTeX files.
Updating DataMelt
Users of DataMelt-Pro (professional edition) receive separate updated jar files regularly. The community edition requires re-installation of the entire program. Use [Help]-[Activate] to activate DataMelt-Pro.
Configuration of DataMelt
You might want to activate on-fly spelling for a particular language. Copy OpenOffice dictionaries to the directory "dic" of the main DataMelt directory where the jehep.jar file is located. Then go to the menu Tools - On-fly spelling and select the active dictionary. To activate spelling, press the Start spelling button from the main menu. Note: the English dictionary is already included in the downloaded package. Use double-click to replace a wrong word or to view alternative proposals. To reload either the File Browser or the Jython/Bean shell consoles, you should use the reload buttons located directly on the small blue tabs. For bookmarks, the user should click on the right border of the jeHEP editor window. You should see a blue mark there if the bookmark is set. You can click on it to come back to a specific text location. All preference files are located in the directory:
$HOME/.dmelt
directory (Linux/Mac) or
$HOME/dmelt
for Windows. They are: the user dictionary file, JabRef preference files and other initialization files.
Jython and Bean shell consoles
The consoles support the same code assist as the editor (see Code assist of DataMelt editor). Analogously, one can print all such variables using the BeanShell commands (but using the BeanShell syntax).
LaTeX equations in comments
One important feature of this IDE is the "CodeView" feature (look at [View]->[Code View]).
The HTML version of the Jython/Python code is normally generated in the background. You can access the HTML code in the directory "cachedir".
LaTeX equations can be inserted using DragMath (see the menu [Tools]->[DragMath]). This brings up a window where one can write LaTeX equations. Then one can insert equations using the menu [File]->[Push to jeHEP editor], which will insert a LaTeX equation under the cursor.
Executing scripts using the IDE
To run a Jython script, open a Jython file inside the IDE editor and click on the *run* button (indicated with the icon [[File:Running_dmelt.png]]) from the ToolBar of DataMelt. This executes the script from top to bottom. The "print" outputs are redirected to the JythonShell (at the bottom of the IDE editor).
One can also use the [F8] key for fast execution of Jython scripts.
In case of a run-time error, the error message is printed in the JythonShell console.
For your convenience, one can run any script using a "smart" button located in the bottom-right corner of the IDE.
If some file is already loaded, just click on the green button to process this file.
Code assist of DataMelt editor
The code assist of DataMelt is based on the Java serialisation mechanism and Python methods. Look at the YouTube video on the DataMelt Java help system. Type the following in the DataMelt IDE editor:
from jhplot import *
f1 = F1D("x*sin(x)",-3.0,3.0)
f1.   # and press [F4]
Alternatively, one can click the icon instead of [F4].
One can get a detailed description of this class and also insert a necessary method at the place right after the dot. The methods will be shown in a sortable table.
If the object belongs to the jhplot package, you can get detailed API information by selecting the necessary method and clicking on the menu "describe".
There is another approach to view documentation. Use this method:
from jhplot import *
c=HPlot()      # create a canvas
help.doc(c)
This method brings up a web browser with Java API. This approach will work with all Java classes of DataMelt. You can also look at the help system itself.
from jhplot import *
help.doc(help())
It will show the API of the help class. Also look at any class supported by the standard Java platform. For example:
from jhplot import *
from java.util import ArrayList
a=ArrayList()
help.doc(a)    # Look at Javadoc of ArrayList()
Finally, many core classes of DataMelt have a method called "doc()". Execute it as:
from jhplot import *
f1 = F1D("x*sin(x)",-3.0,3.0)
f1.doc()
You will see online help in a Java WWW browser.
Code assist in JythonShell
In the JythonShell, as in the DataMelt source-code editor, use the method dir(f1) to print the methods of an object.
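The idea behind dir()-based code assist can be illustrated with plain Python (no DMelt classes required; the F1D class below is a hypothetical stand-in for jhplot.F1D):

```python
class F1D:                       # hypothetical stand-in for jhplot.F1D
    def __init__(self, expr):
        self.expr = expr
    def doc(self):
        return "help for " + self.expr

f1 = F1D("x*sin(x)")
# dir() lists the attributes and methods that code assist can offer:
public = [name for name in dir(f1) if not name.startswith("_")]
print(public)                    # ['doc', 'expr']
```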
Lookup Java API
Finding the methods associated with a given class is easy:
In Jython script, find a statement that contains "from" and "import", i.e. such as:
from jhplot import H1D, H2D
Navigate the cursor to this line and select "Lookup API" using the mouse pop-up menu. This brings up a web browser with the displayed Java API. This works for most DMelt classes and for the standard Java classes.
For Java, Groovy and BeanShell, use a similar approach. Find the import statement
import jhplot.H1D;
Navigate the cursor to this line, and select with the mouse "Lookup API".
In the cases when the regular expression ("*") is used to import all classes, this method will attempt to open the package summary page of Java API, if it is available.
DMelt search
You can also look up a given word in the DMelt Web search. When working with the DMelt IDE, highlight a given word. With the right mouse button, bring up a pop-up menu and select "Search DMelt project". It will bring up a Web page with found examples, Java API and Wiki.
Here is an example how to search for the word "HChart":
Code assist for other IDEs
If you are using Java instead of Jython and working with Eclipse or NetBeans IDE, use the code assist of these IDEs. The description of these IDEs is beyond the scope of this book.
Scripting with DataMelt IDE
The main idea of DataMelt is that you can use Jython scripts to access any Java library. This also means that you can access the TextArea of the script itself and all internal variables of the jeHEP IDE.
<note> In fact, a script can modify itself at runtime, or even design itself while executing itself. This offers tremendous power in programming. </note>
Let us consider how to access internal variables of the DataMelt IDE.

Your HTML file with the equation is ready! The file is located in the directory "cachedir" together with the image of this equation.
Read more about text processing using Java and Python in the section Text processing.
Something odd I noticed today:
#include <CoreServices/CoreServices.h>

int main( int argc, char** argv )
{
    UInt32 a = 0;
    a += (double) -1;
    printf( "%d\n", a );
    return a;
}
this code prints 0 on an Intel Mac, but prints -1 on a PowerPC. So, essentially the PPC seems to just overflow, but the Intel CPU seems to refuse to go below 0.
I guess it’s a matter of definition, but does anyone know the official reasoning behind this? I.e. know whether the C standard or Intel/IBM/Motorola specify that this should be so?
Just curious…
Yeah, this is an undefined behavior in the C family of languages because you’re casting into a type that is incapable of representing the value. Here’s the text from C99, just to pick a standard:
"When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined."
The value of the integral part is negative, so it can’t be represented in a UInt32… thus the behavior is undefined. It just so happens that the compiler’s PowerPC backend does it one way (overflow), and the x86 backend does it another (pinning to zero). Neither is wrong; they’re just different.
Interesting, though. I knew about the problem but not that OSX did it differently for the two platforms. That’s sure to be a ‘gotcha’ for porting code over.
I told GCC to generate assembler for this code snippet, and wow! the assembler for PowerPC is *dense*. I’m long out of practice reading assembler, and most of that was x86, so I guess I have some brushing up to do. :-)
If you get an opportunity, can you feed this code snippet through gcc -S on an Intel Mac? I’m curious to see if this behavior you report is due to differences in the generated code, or differences in how the two architectures behave.
@Drew: Thanks, I’d suspected it was one of those undefined spots, but offhand I couldn’t find anything.
@Craig: I’ll see if I get around to it. | http://orangejuiceliberationfront.com/intelppc-oddity/ | CC-MAIN-2013-20 | refinedweb | 331 | 71.65 |
GETAUXVAL(3) Linux Programmer's Manual GETAUXVAL(3)
getauxval - retrieve a value from the auxiliary vector
#include <sys/auxv.h> unsigned long getauxval(unsigned long type); pointer to a string (PowerPC and MIPS only). On PowerPC, this identifies the real platform; may differ from AT_PLATFORM. On MIPS, this identifies the ISA level (since Linux 5 A pointer to a string containing the pathname used to execute.
On success, getauxval() returns the value corresponding to type. If type is not found, 0 is returned.
ENOENT (since glibc 2.19) No entry corresponding to type could be found in the auxiliary vector.
The getauxval() function was added to glibc in version 2.16.
This function is a nonstandard glibc extension..
Before the addition of the ENOENT error in glibc 2.19, there was no way to unambiguously distinguish the case where type could not be found from the case where the value corresponding to type was zero.
secure_getenv(3), vdso(7), ld-linux.so(8)
This page is part of release 5.08 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. GNU 2020-06-09 GETAUXVAL(3)
Pages that refer to this page: getunwind(2), getenv(3), secure_getenv(3), proc(5), procfs(5), glibc(7), libc(7), random(7), vdso(7), ld-linux(8), ld-linux.so(8), ld.so(8) | https://man7.org/linux/man-pages/man3/getauxval.3.html | CC-MAIN-2020-40 | refinedweb | 237 | 67.86 |
Solution for Programmming Exercise 2.6
This page contains a sample solution to one of the exercises from Introduction to Programming Using Java.
Exercise 2.6:
Suppose that a file named "testdata.txt" contains the following information: The first line of the file is the name of a student. Each of the next three lines contains an integer. The integers are the student's scores on three exams. Write a program that will read the information in the file and display (on standard output) a message the contains the name of the student and the student's average grade on the three exams. The average is obtained by adding up the individual exam grades and then dividing by the number of exams.
TextIO can be used to read data from a file; this is discussed in Subsection 2.4.5. To read data from a file named "testdata.txt", all you need to do is say TextIO.readFile("testdata.txt"). From then on, the input functions in TextIO will read from the file instead of reading data typed in by the user. (Note that this assumes that the file is in the same directory with the program.) In this case, we can use TextIO.getln() to read the student's name from the first line of the file, and then we can read the exam grades by calling TextIO.getln() three times.
The average should be computed as a value of type double. Don't forget that if you divide an integer by an integer in Java, the result is an integer and the remainder of the division is discarded. To get the correct average in this case, the program divides the sum of the three grades by 3.0 rather than by 3.
One final technicality is that simply outputting a double value might print out something like 83.333333333333333. By default, all significant digits in the number are output. In this case, one digit after the decimal point is probably sufficient. The program uses formatted output to achieve this. The format string "The average grade for %s was %1.1f" is used to format the name and the average. The name is substituted for the format specifier %s, which means that the name is printed as a string, with no extra spaces. The average is substituted for %1.1f, which means that the average is printed as a floating point number with no extra spaces and with 1 digit after the decimal point.
You might want to run this program with no data file, or with a data file that is not in the correct format, to see what happens. (The program will crash and print an error message.)
public class FindAverage { public static void main(String[] args) { String name; // The student's name, from the first line of the file. int exam1, exam2, exam3; // The student's grades on the three exams. double average; // The average of the three exam grades. TextIO.readFile("testdata.txt"); // Read from the file. name = TextIO.getln(); // Reads the entire first line of the file. exam1 = TextIO.getlnInt(); exam2 = TextIO.getlnInt(); exam3 = TextIO.getlnInt(); average = ( exam1 + exam2 + exam3 ) / 3.0; System.out.printf("The average grade for %s was %1.1f", name, average); System.out.println(); } } | http://math.hws.edu/javanotes/c2/ex6-ans.html | crawl-001 | refinedweb | 544 | 67.35 |
scikit 'distance'.
>>> all.shape (60, 638) >>> all array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]])
We have a 60 (episodes) x 638 (characters) array which we can now plug into the K-means clustering algorithm:
>>> from sklearn.cluster import KMeans >>> n_clusters = 3 >>> km = KMeans(n_clusters=n_clusters, init='k-means++', max_iter=100, n_init=1) >>> cluster_labels = km.fit_predict(all) >>> cluster_labels array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], dtype=int32)
cluster_labels is an array containing a label for each episode in the all array. The spread of these labels is as follows:
>>> import numpy as np >>> np.bincount(cluster_labels) array([19, 12, 29]):
from sklearn import metrics for n_clusters in range(2, 10): km = KMeans(n_clusters=n_clusters, init='k-means++', max_iter=100, n_init=1) cluster_labels = km.fit_predict(all) silhouette_avg = metrics.silhouette_score(all, cluster_labels, sample_size=1000) sample_silhouette_values = metrics.silhouette_samples(all, cluster_labels) print n_clusters, silhouette_avg 2 0.0798610142955 3 0.0648416081725 4 0.0390877994786 5 0.020165277756 6 0.030557856406 7 0.0389677156458 8 0.0590721834989 9 0.0466170527996. | https://markhneedham.com/blog/2016/08/25/scikit-learn-trying-to-find-clusters-of-game-of-thrones-episodes/ | CC-MAIN-2020-24 | refinedweb | 247 | 70.29 |
Hey. I'm trying to simply input items into a list, then be able to sort them and do whatever. I've done this before for my previous classes, but we have we have to use class files out of the book and modify them to make this work and I'm having way too much trouble. If somebody could point me in the right direction, it would be much appreciated.
I created the .cpp file. The unsorted.h is from the book. There are a couple of more files, let me know if they are needed.
// This is the .cpp file. #include "unsorted.h" #include <iostream> using namespace std; int main() { UnsortedType list; ItemType item; list.MakeEmpty(); item.Initialize(); cout << "Enter number (-999 to quit): "; cin >> item; list.InsertItem(item); }
// This is the unsorted.h file. #include "itemtype.h" class UnsortedType { public: UnsortedType(); bool IsFull() const; int LengthIs() const; void RetrieveItem(ItemType& item, bool& found); void InsertItem(ItemType item); void DeleteItem(ItemType item); void ResetList(); void MakeEmpty(); void GetNextItem(ItemType& item); private: int length; ItemType info[MAX_ITEM]; int currentPos; }; | https://www.daniweb.com/programming/software-development/threads/175147/input-items-to-a-list | CC-MAIN-2018-22 | refinedweb | 180 | 64.81 |
I love playing race games and at some point I got the idea to create a functioning dashboard for these games with Arduino and Processing (speedometer, RPM, etc.). After some searching on the internet I found out that games by Codemasters (F1 2012, F1 2013, GRID, GRID2) allow you to send the in-game data to external programs via UDP. This means that you can read the game data, like the cars speed, with processing. With use of Firmata you can then create a dashboard. The code is below but I’ll show what I’ve done first.
So this is how it works:
First you need to modify the configuration file of the game in order to receive the data. Head to …/Documents/My Games/F1 2012/hardwaresettings (or any other Codemasters game) and open hardware_settings_config.xml. At the bottom of the file make sure to change the motion entry to this: . What this does is it creates a UDP server in the game with an IP address and a port and it transmits extradata to this server.
Now head over to processing and make sure you have the UDP library installed (sketch >> import library >> add library >> search for UDP). For testing purposes I’ll use the ControlP5 library in processing as well because it allows me to create dials etc. It’s not required but makes life a lot easier.
Now to the fun part. In the Setup we first create a canvas (set size and background) and a connection to the arduino, which is running standardFirmata. Next we’ll create 3 outputs on the screen with ControlP5, the speedometer, RPM and gear indicators. Finally we start the UDP connection on the port and IP that we configured in the game configuration file (IP: 127.0.0.1 and port 20777). If you want you can turn on the log function of the UDP receiver by setting it to true but there’s not much interesting stuff there.
The draw function will be empty because everything is being handled by the UDP receiver. This function receives an array of bytes from the game and stores this array in the data variable. However, everything in the game is stored in 4 byte floats. This means that we have to read 4 bytes at a time and combine these to a float. The function fullOutput() will output all the received data from the game and will show the starting position of the bytes in the array. After some testing I found that the speed, for example, is stored in bytes 28, 29, 30 and 31. By combining these 4 bytes with the function Float.intBitsToFloat we get the speed as a float. This number can be send to the speedometer we created with ControlP5. Furthermore we can map the number and send it to an Arduino in order to create a working speedometer with a servo. My Servo can handle 180 degrees so I map the cars speed to a 0->180 scale.
I’ve kept it quite simple here but you can go crazy with this code. You could wire up a 7 segment display and show the current gear or lap or maybe you can hook up some LEDs to create an RPM indicator. The game outputs 38 different values so there is enough data available. Please post your examples here for inspiration!
Special thanks to Robert Gray, who did something similar in C# and has been an inspiration for this project.
Processing:
import controlP5.*; import hypermedia.net.*; import processing.serial.*; import cc.arduino.*; Arduino arduino; int arduinoPos = 0; ControlP5 cp5; UDP udpRX; String ip="127.0.0.1"; int portRX=20777; int pos; void setup(){ size(280,280); background(255); // Arduino connection and Servo output arduino = new Arduino(this, Arduino.list()[0], 57600); // Your offset may vary arduino.pinMode(9,5); // Create some dials and gauges on screen cp5 = new ControlP5(this); cp5.addSlider("rpm") .setSize(200,50) .setPosition(10,10) .setValue(0) .setRange(0,19000); cp5.addNumberbox("gear") .setSize(50,50) .setPosition(220,10) .setValue(0); cp5.addKnob("speed") .setRadius(100) .setPosition(10,70) .setValue(0) .setRange(0,350); // Create new object for receiving udpRX=new UDP(this,portRX,ip); udpRX.log(false); udpRX.listen(true); } void draw(){ // Nothing happens here } void receive(byte[] data, String ip, int portRX){ // Function to output all the game data received // fullOutput(data); // Time elapsed since game start pos = 0; float tTime = Float.intBitsToFloat((data[pos] & 0xff) | ((data[pos+1] & 0xff) << 8) | ((data[pos+2] & 0xff) << 16) | ((data[pos+3] & 0xff) << 24)); // Lap time pos = 4; float lapTime = Float.intBitsToFloat((data[pos] & 0xff) | ((data[pos+1] & 0xff) << 8) | ((data[pos+2] & 0xff) << 16) | ((data[pos+3] & 0xff) << 24)); // Speed, *3.6 for Km/h pos = 28; float speed = Float.intBitsToFloat((data[pos] & 0xff) | ((data[pos+1] & 0xff) << 8) | ((data[pos+2] & 0xff) << 16) | ((data[pos+3] & 0xff) << 24))*3.6; // Gear, neutral = 0 pos = 132; float gear = Float.intBitsToFloat((data[pos] & 0xff) | ((data[pos+1] & 0xff) << 8) | ((data[pos+2] & 0xff) << 16) | ((data[pos+3] & 0xff) << 24)); // Current lap, starts at 0 pos = 144; float cLap = Float.intBitsToFloat((data[pos] & 0xff) | ((data[pos+1] & 0xff) << 8) | ((data[pos+2] & 0xff) << 16) | ((data[pos+3] & 0xff) << 24)); // RPM, requires *10 for realistic values pos = 148; float rpm = Float.intBitsToFloat((data[pos] & 0xff) 
| ((data[pos+1] & 0xff) << 8) | ((data[pos+2] & 0xff) << 16) | ((data[pos+3] & 0xff) << 24))*10; // Debug the received values gameDataOutput(tTime, lapTime, speed, gear, cLap, rpm); // Output the values to the dashboard cp5.getController("rpm").setValue(rpm); cp5.getController("gear").setValue(gear); cp5.getController("speed").setValue(speed); // Send the speed to the Servo arduinoPos = (int)map(speed, 0, 350, 1, 180); // Note that I've set the max speed to 350, you might have to change this for other games arduino.servoWrite(9, 180-arduinoPos); } void gameDataOutput(float tTime, float lapTime, float speed, float gear, float cLap, float rpm){ println("Total time: " + tTime); println("Lap time: " + lapTime); println("Speed: " + speed); println("Gear: " + gear); println("Current lap: " + cLap); println("RPM: " + rpm); } // Function that outputs all the received game data void fullOutput(byte[] data){ // Loop all the received bytes for(int i=0; i <= data.length-1; i++){ // Values consist of 4 bytes if(i % 4 == 0){ // Combine 4 bytes to the value float val = Float.intBitsToFloat((data[i] & 0xff) | ((data[i+1] & 0xff) << 8) | ((data[i+2] & 0xff) << 16) | ((data[i+3] & 0xff) << 24)); // Output the 'raw' value println("Value received at position " + i + " = " + val); } } } | https://forum.arduino.cc/t/physical-dashboard-for-race-games/206762 | CC-MAIN-2022-27 | refinedweb | 1,086 | 62.38 |
So you have been working for a while with CSLA .NET by Rockford Lhotka and now you want – like me – jump on the Silverlight bandwagon. So you want to reuse the business objects in Silverlight. There are some nice samples in the cslalight samples but when I started to try to use my own business objects, things did not run so smoothly as the samples suggest.
The samples by Rockford show the general outline:
- You make a second assembly into which you add the exisiting business class source files as a link
- You start adding specific Silverlight functionality surrounded by
#if SILVERLIGHT (…) #endif preprocessor directives
- You make sure that the server stuff like data access is not active in the Silverlight configuration (#if !SILVERLIGHT)
- You add a Silverlight-specific factory method to load an object, that looks a bit like this:
public static void Get(int id, EventHandler<DataPortalResult<MyBusinessClass>> callback) { var dp = new DataPortal<MyBusinessClass>( ); dp.FetchCompleted += callback; dp.BeginFetch( new IdCriteria(id) ); }
If you try to call this from your Silverlight client the result is - unless you are very lucky - most likely that the sky starts caving in. Turns out there are a few 'hidden requirements' –or at least some less apparent ones. Maybe there are more, but this was what I found so far:
- Both the Silverlight and the full framework assemblies must have the same name, so even if your projects are called MyLib.Server and MyLib.Client, the resulting dll’s must have the same name, for example MyLib.dll.
- If your full framework assembly is signed, your Silverlight assembly should be signed as well. They then also must have the same version number – all the way to the build number. This is important – and it took me the most time before the penny dropped.
- All the Silverlight classes must have public constructors. So you add
#if! SILVERLIGHT private MyBusinessClass() { } #else public MyBusinessClass() { } #endif
- Properties should be defined in the ‘modern’ format. You have still properties in this format?
private string _oldProp = string.Empty; public string OldProp { get { return _oldProp; } set { if (value == null) value = string.Empty; if (!_oldProp.Equals(value)) { _oldProp = value; PropertyHasChanged("OldProp"); } } }Tough luck. Change that into the 'new' form, e.g.
private static PropertyInfo
NewPropProperty = RegisterProperty (c => c.NewProp); public string NewProp { get { return GetProperty(NewPropProperty); } set { SetProperty(NewPropProperty, value); } }
- Criteria objects should have public constructors as well, and should be public classes – that is, if you have defined them as private classes inside your business object, you should make them public
- Your Criteria should implement IMobileObject. The easiest way is to let your class descend from CriteriaBase, but then you will find out that although the class is serialized to the server, the properties are not. Turns out that for Criteria objects the property format has changed too. In the past you could just make a simple class with a few getters and setters, now you have to make something along this line:
[Serializable] public class IdCriteria : CriteriaBase { public static PropertyInfo<int> IdProperty = RegisterProperty(typeof(IdCriteria), new PropertyInfo<int>("Id")); public int Id { get { return ReadProperty(IdProperty); } set { LoadProperty(IdProperty, value); } } public IdCriteria() { } public IdCriteria(int id) { Id = id; } }
So, although CLSA ‘light’ promises a lot of reuse (which is true of course, in the case of business and validation rules) you need a lot of extra plumbing to get going. And mind you, this is a simple single object that I only read – I haven’t covered lists yet, nor updates and deletes. The power of CSLA can come to Silverlight – but certainly for existing libraries it is not quite a free ride. But then again - this is Silverlight, so it should run on Windows Phone 7 series as wel... which will be my next experiment. I will keep you posted!
1 comment:
Interesting. You've been busy indeed. (Btw, since when is that "Rogue R&D hacker" in your blog bio? ;-)) | http://dotnetbyexample.blogspot.com/2010/04/caveats-when-migrating-existing-clsanet.html | CC-MAIN-2016-44 | refinedweb | 653 | 53.92 |
I have to write the methods for a program. The first one, a boolean method called openAccount, takes a first name, a last name, and an opening amount [positive or negative] as a starting balance. This method sets all the class variables properly upon call and returns true (unless the account was already marked as being opened, in which case it returns false). Note: if any operation on an account makes the dollars go into the negative range, then the account should be flagged as being overdrawn. The class variable isOpened is not the same as what is being returned from this method. I was wondering what a simple method that sets all the class variables properly would look like? I have tried a lot of different true/false combinations, but cannot get it. If someone can help, I would greatly appreciate it. The main method was provided and cannot be changed.
Code :
import javax.swing.*;

public class Program4
{
    // Class Variables.
    public static String firstName = "";
    public static String lastName = "";
    public static double dollars = 0.0;
    public static boolean isOpened = false;
    public static boolean isOverdrawn = false;

    // Replace "CIS Faculty" with your name.
    private final static String TITLE_BAR = "Program 4 (My Name)";

    // Class Methods
    // Add your methods below here.

    // You should leave the main method below ALONE!!
    public static void main(String[] args)
    {
        showAccount(); // Will show error message about account not open
                       // and will then continue.

        // Open new account. Give error message if it fails!
        if (!openAccount("Bill", "Smith", 100.50))
        {
            reportErrorAndBomb("Coding error 1. Open account failed on a new account!");
        }
        showAccount();

        // Check that accountOwner works correctly - bomb if not.
        if (!accountOwner().equals("Bill Smith"))
        {
            reportErrorAndBomb("Coding error 2. Call to accountOwner failed.");
        }
        if (balance() != 100.50) // Verify balance is correct!
        {
            reportErrorAndBomb("Coding error 3. Balance is wrong!");
        }

        // Confirm that I cannot reopen it!
        if (openAccount("Bogus", "Try", 55.00))
        {
            reportErrorAndBomb("Coding error 4. You allowed an account "
                    + "to be re-opened!");
        }

        deposit(50.00);
        showAccount();
        if (balance() != 150.50)
        {
            reportErrorAndBomb("Coding error 5. Balance is wrong!");
        }
        if (isNowOverdrawn())
        {
            reportErrorAndBomb("Coding error 6. Reports overdrawn when it should not.");
        }

        // Confirm correct workings of approveCheckFor method.
        if (approveCheckFor(150.51))
        {
            reportErrorAndBomb("Coding error 7. Approved a check for too much.");
        }
        if (!approveCheckFor(150.50))
        {
            reportErrorAndBomb("Coding error 8. Failed to approve a check "
                    + "for a good amount!");
        }

        withdraw(25.00);
        if (balance() != 125.50)
        {
            reportErrorAndBomb("Coding error 9. Balance is wrong!");
        }
        withdraw(125.75);
        if (balance() != -0.25)
        {
            reportErrorAndBomb("Coding error 10. Balance is wrong!");
        }
        showAccount(); // Should show a deficit of 25 cents
                       // and that account is now overdrawn.
        if (!isNowOverdrawn())
        {
            reportErrorAndBomb("Coding error 11. Should respond as overdrawn now.");
        }

        // Well... if you made no calls above to reportErrorAndBomb it might not
        // be working... so let us end the program with its use.
        reportErrorAndBomb("No Errors reported after testing all methods. "
                + "\nThis tests reportErrorAndBomb. \nTesting complete with no errors!");
    } // main

    private static boolean isNowOverdrawn()
    {
        if (balance() < dollars) // To check if the account is now overdrawn
                                 // after a transaction.
            return true;
        else
            return false;
    }

    private static boolean approveCheckFor(double cash)
    {
        if (cash <= balance()) // To check if incoming check is less than or
                               // more than balance.
            return true;
        else
            return false;
    }

    private static double balance()
    {
        return dollars; // Enter dollars into balance.
    }

    private static void withdraw(double moneyInBank)
    {
        if (moneyInBank >= dollars) // Withdrawn amount.
            dollars -= moneyInBank;
    }

    private static void deposit(double amount)
    {
        dollars += amount; // Deposited amount.
    }

    private static void reportErrorAndBomb(String message)
    {
        // Prints the error message to JOptionPane, and exit.
        JOptionPane.showMessageDialog(null, message, TITLE_BAR,
                JOptionPane.ERROR_MESSAGE);
        System.exit(0);
    }

    private static String accountOwner()
    {
        return firstName + " " + lastName; // Return first name, space, last name.
    }

    private static void showAccount()
    {
        String.format("%.2f", dollars); // Format output of dollars two decimal
                                        // places.
        if (isOpened) // If account already exists.
            JOptionPane.showMessageDialog(null, "Account Owner: " + firstName
                    + lastName + "\n" + "Account Balance: " + dollars + "\n"
                    + "Account Overdrawn: " + isOverdrawn);
        else // Displayed if account does not exist.
            JOptionPane.showMessageDialog(null,
                    "You have attempted to display an account which is not "
                    + "opened yet. ", TITLE_BAR, JOptionPane.ERROR_MESSAGE);
    }

    private static boolean openAccount(String name, String name2, double money)
    {
        return false;
    }
}
I love your website
gonna use it regularly
Hi Alex,
I saw people talking about ".inl" files as a way to write the template class definitions separately and include them into the header file. I have actually tried it and it seems to work fine, but I'm wondering why you didn't mention it as one of the solutions for avoiding writing all the code in the header files. Is it actually not recommended for any particular reason (efficiency, security...)? And could you explain a little bit how ".inl" files actually work, and whether you think it is a good idea to do it that way?
Thanks!! for your time and the tutorials, Aitor
I've updated the article to mention this method. I personally don't do this, because having long header files doesn't bother me, but some people do prefer the additional separation between the template declaration and implementation. If you prefer that, there's no real downside to using the .inl file method, so feel free.
What is the difference between template <typename X> and template <class X>?
Do you use class before class definitions, and typename before functions?
In the context of defining template parameters, class and typename are equivalent. Which one you use is a matter of preference.
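For instance, these two function templates mean exactly the same thing apart from the keyword chosen (the function names here are just for illustration):

```cpp
#include <cassert>

// "typename" and "class" are interchangeable in a template parameter list
template <typename T>
T maxOf(T a, T b) { return (a > b) ? a : b; }

template <class T> // identical meaning to "template <typename T>"
T minOf(T a, T b) { return (a < b) ? a : b; }
```

Both compile and instantiate identically; the choice is purely stylistic.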
Why is there a need to write <int> or <double> in these lines?
So the array knows what data type to make its elements. intArray has int elements, doubleArray has double elements.
When a class object is created, is the erase function called by itself?
erase() is never called in that code.
Hi, in the "Splitting up Template Classes" section, Array.cpp has 2 misspellings - getLength (G is uppercase) and m_length (it is written there as m_nLength).
Thanks as always.
Thanks for pointing these out. Fixed!
Hi Alex,
Thank you very much for a great tutorial!! Really appreciate it !
I had a question regarding this line: "m_data = 0;"

Shouldn't this actually be "m_data = nullptr;",

since m_data is a pointer?
Either works, but nullptr better reflects the intent. I'll update the lesson. Thanks!
Could you please suggest some resources to learn multi threading in c++ ?
Thank you
I'm not aware of any good multi-threading tutorials, mostly because I haven't looked recently. My advice is to do a google search and read stuff until you find something that is written for your level of technical comprehension.
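Not a tutorial, but for anyone landing here: the standard <thread> header (C++11 and later) is the usual starting point. A minimal sketch, with names chosen just for this example:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Spawn `n` threads that each increment a shared atomic counter,
// then wait for all of them and return the final count.
int runThreads(int n)
{
    std::atomic<int> counter{ 0 };
    std::vector<std::thread> workers;

    for (int i = 0; i < n; ++i)
        workers.emplace_back([&counter]() { ++counter; }); // runs concurrently

    for (auto& t : workers)
        t.join(); // wait for each thread to finish

    return counter.load();
}
```

std::atomic is used so the increments from different threads don't race; with a plain int the result would be undefined.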
Hi Alex.
I have a problem with the following program. How should we write the definition of the constructor (it has to be outside of the class)?
indexList.h
Main.cpp
I cover this in the next lesson. :) But in short:
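(The original snippet was lost, but assuming a class template named IndexList<T>, the general shape is this: each member defined outside the class needs its own template declaration, and the class name must carry the template parameter.)

```cpp
template <typename T>
class IndexList
{
private:
    T m_list[100]{};
    int m_length{};

public:
    IndexList(); // only declared here

    int getLength() const { return m_length; }
};

// Definition outside the class: repeat the template declaration,
// and qualify the constructor name with IndexList<T>::
template <typename T>
IndexList<T>::IndexList()
    : m_length{ 0 }
{
}
```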
Typo: should be "executable" in "shouldn’t bloat your exectuable"
hi Alex! Maybe I'm wrong, but when you wrote the class DoubleArray, in the private section you put "double m_length;". Shouldn't it always be "int m_length"?
Yes, for sure. Fixed. Thanks for pointing that out.
Dear Alex,
On the sentence "Each templated member function declared outside the class declaration needs its own template declaration.", did you mean "function defined" instead of "declared"? You're referring to the need of a template declaration right before function definition, right? Not sure if it's a typo, or I misunderstood it.
Best regards.
Mauricio
No, I did mean declared, because this applies to template function forward declarations as well as full template function definitions (which are both definitions and declarations).
For the 3 files solution mentioned at the end, would it be acceptable to instantiate the template classes inside Array.cpp rather than templates.cpp?
You could. But then you'd need to modify Array.cpp for every program, which you probably don't want to do (you want to write it once, and use it over and over without touching it).
What is the point of the three files and the "include the .cpp file" approach? If we include the .cpp file, isn't the compiler going to compile it? And since it was included, it gets copied into another file and compiled there as well, so it's compiled twice. Why don't we just do all the code in the .h file? I'm confused??? Thanks
You can, that's the recommended solution. However, some people like to keep their declarations in the .h and definitions in the .cpp, and this provides a solution for doing so.
There shouldn't be any issues with the 3 file solution -- templates don't get instantiated unless needed. So a .cpp file full of template definitions won't get turned into any code if compiled by itself (since nobody is using it).
In the three-file solution, we avoid this issue by explicitly including the .cpp file and putting it in the same place as where it's needed, so the compiler knows to instantiate the template classes and compile them.
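A compressed sketch of what that looks like, with the class trimmed down for brevity (in a real project the three sections below would live in Array.h, Array.cpp, and templates.cpp respectively):

```cpp
// --- Array.h (declaration) ---
template <typename T>
class Array
{
private:
    int m_length{};
    T* m_data{};

public:
    Array(int length);
    ~Array() { delete[] m_data; }

    T& operator[](int index) { return m_data[index]; }
    int getLength() const;
};

// --- Array.cpp (member definitions) ---
template <typename T>
Array<T>::Array(int length)
    : m_length{ length }, m_data{ new T[length]{} }
{
}

template <typename T>
int Array<T>::getLength() const { return m_length; }

// --- templates.cpp ---
// #include "Array.h"
// #include "Array.cpp"
template class Array<int>;    // explicitly instantiate Array<int>
template class Array<double>; // explicitly instantiate Array<double>
```

Because templates.cpp sees the full definitions before the explicit instantiations, the compiler generates the code for Array<int> and Array<double> right there, and other files can link against them with only Array.h included.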
Minor typo maybe.
Above in the definition of class DoubleArray, line 22:
m_data = new int[length];
Isn't it supposed to be:
m_data = new double[length];
Yup. Lesson updated. Thanks!
Thanks, I read this prior to posting above. That thread is 7 years old and uses a VC++ version different from 2015. The ideas still apply: "template can't be compiled into a library because it doesn't know what the template will be. ... you don't actually need ANY library to be compiled or included if you distribute header files with template definition". Others (like me) have said they have folders full of functions needed by Projects under many "Solutions". Putting the functions into a lib allows them to be added as a group rather than as a giant list of files. I posed this question to the msdn forum too but haven't gotten an answer I like. I'm asking them and you to try to build my code above (a few lines of code) and you'll quickly see the problem. Is an empty cpp file required, or is there a Property setting to tell the compiler "build a library like you do with an empty cpp file"?
Not that I'm aware of, but I'm certainly not an expert in regards to how to use Microsoft IDEs.
I've read many such suggestions but they only address "simple" errors (like wrong path to lib). VS fails to build a lib (I believe because MS designed it to require a cpp file even if empty), so obviously you get LNK1104. Ignore this LNK error, and answer how to build a static lib Project containing a function template that is called from a second Project under a Solution using MSVC 2015, without a "meaningless" (do nothing) cpp file.
Seems like you might be trying to do the same thing this guy was. Read that thread and see if it's helpful.
1>------ Rebuild All started: Project: StaticLib, Configuration: Debug Win32 ------
2>------ Rebuild All started: Project: Top, Configuration: Debug Win32 ------
2> top.cpp
2>LINK : fatal error LNK1104: cannot open file '...\Monday\Debug\StaticLib.lib'
========== Rebuild All: 1 succeeded, 1 failed, 0 skipped ==========
The lib file doesn't get made (and the previous .lib gets erased).
Not sure what the issue is here. Googling on the error (LNK1104) yields this thread that contains multiple suggestions on how to resolve.
I'm using Visual Studio 2015 writing C++ 11 console app and a "static lib" I made having only a "function template" to return max value of two numbers.
Solution: Monday
Project #1: StaticLib
Project #2: Top (has main program that calls 'function template' called GetMax)
It builds, runs, and produces 11 as max of 9 and 11. There's only three files: top.cpp, mylib.h, and mylib.cpp (which is empty).
If mylib.cpp is removed, Build fails to produce a lib.
Why?
Is there a Property setting to fix this (so that all template code can be in a header file, with no cpp file, and have it output a .lib file)?
I searched for weeks, and hate having dozens of empty cpp files in a large "real Solution" that uses existing code located in folders throughout the hard drive.
>> I deleted Microsoft precompiled headers and fixed Project Properties to indicate No Precompiled headers in either Project.
>> "Top Project" Property page has C/C++ > General > Additional Include
Directories: C:\Users\...\Monday\StaticLib (path to mylib.h) No other changes to Property pages.
Under Top: References, I added a Reference to StaticLib.
There is only one Solution, called 'Monday'. There's only two Projects: StaticLib and Top.
StaticLib Project:
// mylib.h
#ifndef MYLIB_H
#define MYLIB_H
#include <iostream>
template <class myType>
myType GetMax(myType a, myType b) {
return (a>b ? a : b);
}
#endif // inc. guard...
// mylib.cpp
// empty!!
Top Project:
// top.cpp
#include "mylib.h"
using namespace std;
int main(){
int i = 9;
int j = 11;
cout << GetMax(i, j) << endl;
cout << "Done! Press Enter to exit..." << endl;
cin.ignore();
}
What error are you getting when you remove mylib.cpp?
If I have a template class as follows:
And then a specialize this class for a type say int"
Now is there anyway I could use both the default template foo and the "specialized for int" foo?
What I mean is if I don't include the file where the int specialization is defined and template instantiate foo for int:
Can I still print "Template class"
Sure, if you don't include the file where the specialization is defined, then your templated variable will use the generic version.
Thanks for your reply Alex. I am trying to test this scenario before making similar changes to my production code. So to sum it up, we can use the generic version of the template class in one compilation unit and the specialized version of the template class in a different compilation unit, both in the same program session. And this should not cause "multiple definition" linkage error?
Also, where do you recommend learning template programming from?
I'm not aware of any reason it would cause multiple definition linkage errors -- though I'd advise you to try and it and see for sure. :)
Template programming really isn't much different than normal programming outside of the crazy syntax and the parameterization of types. I don't have any other specific resources I'd recommend for learning at this time, but I'm sure a quick google search would turn up plenty of other sites of interest.
Thank you for your help. :)
You said the easiest way is to put all of the template class code in the header file, but that will bloat our code if the template class is used in many places.
Wont the (good) compiler optimize (remove) all the identical copies of the template class, and use always the same code?
My understanding is that most compilers will happily compile multiple copies of the instantiated template class, and rely on the linker to remove the redundant definitions. So this would have the impact of increasing your compile and link times.
Ok, i understand. Thanks for the reply.
And btw, thank you so much for this awesome website. Its really great. <typename T>".
Hey, why this?
This causes the compiler to stencil out the class Array<int>, which then gets compiled. Because functions are extern by default, references to these functions in other files will link to these..
I have done what you suggested and obtain the following compiler errors:
what do you suggest next?.
The template version you provied if instatiated by a 'char' it may cause problems becaue incounstructor you gave "new T[length] but for char it needs 'new T[length+1]' ( beacues of "").
So changing the consturctor to the following may be appropriate.
Also we need to include
typeid determine sthe data type at runtime.
What do you say?
You could do as you suggest, or you could do a class template specialization for type char, or you could simply tell the users of the class to make sure that when they pass in the length, they need to account for any null-terminators. I'd probably just do the last one, as C-style arrays require specifying a size that includes the null terminator, so programmers would probably expect a class to adhere to the same assumptions.
Name (required)
Website | https://www.learncpp.com/cpp-tutorial/133-template-classes/comment-page-1/ | CC-MAIN-2019-13 | refinedweb | 2,046 | 73.47 |
I have a react component manages the entire app in react. It looks like:
var Page = React.createClass({ displayName: 'Page', render: function() { return ( React.DOM.div(null /*, stuff here ..*/ ) ); } });
And html looks like
<body id="content"></body>
And I render it like:
React.renderComponent(Page(null), document.getElementById('content'));
But instead, I would like react to manage the entire
body tag, so there is no redundant nested
div. How can I render such a thing?
I’m not completely sure if this is what you’re asking, but you can run
React.renderComponent(Page(null), document.body);
to mount a component directly into body. | https://exceptionshub.com/manage-the-entire-body-in-reactjs.html | CC-MAIN-2018-22 | refinedweb | 104 | 53.17 |
More!
Ahem.
Yeah. Well, you’ve got to start small. Anyway, I have this LED array that I’m going to build a little something on soon. This time, we’ll just use a single line for that simple effect.
The LED array can be found here: for $1.80, which is ridiculous. It’s the LEDMS88R. A LED array is a very simple piece of hardware: it’s just LEDs at each intersection of line and column wires:
To turn on one of the LEDs, just send some juice between the right line and column. You can even turn on an arbitrary combination of lights on any line or any column. But of course if you try to do any more than that, you’ll have extra pixels lit in most cases. What you do instead is use persistence of vision and scan the matrix line by line very fast. This way you can display any 8x8 image at all. But today, we only need a single line.
Before we connect the controller to the matrix though, we need to deal with a small problem that will be very common as we build more hardware prototypes. Before you use any component at all, you need to RTFM. Let me repeat that:
You need to READ THE EFFING MANUAL.
For real. I know you’re a super geek and make a point of never ever doing that. Well, hardware is not like software. The equivalent of an API, for hardware, is the pin repartition but it doesn’t have names, it doesn’t have comments, and it’s not designed to be logical.
Just take another look at that LED matrix schema above. Notice anything? Yes, pin number 1 is for row 5, pin 2 is for row 7, pin 3 is for column 2, and so on until your sanity’s just a confused and fading memory. You don’t want to have to figure this out by trial and error. Just read the damn thing.
In order to get halfway back from that dark pit of madness, we’ll have to build a rational façade of sorts before that matrix. That façade can be either software on the microcontroller or it can be hardware. I opted for hardware, which I think deals with the problem once and for all and allows for more flexibility in design going forward. I went ahead and mapped the pins of the matrix to the rows 1 to 8 of two columns on my breadboard, one for rows and one for columns:
One thing I might do eventually is to order a custom circuit that hosts the matrix, does the mapping and is easy to plug into the breadboard: those wires are ugly.
Another thing to keep in mind is that the consequences of a hardware mistake is not going to be a nice exception. There will be smoke. You’re going to fry a chip, or your microcontroller, or both. All the more reason to READ THE EFFING MANUAL. And check everything ten times before you switch on the power. Oh wait, make it twelve.
For example, when getting current flowing through LEDs like those in that matrix, remember a passing LED is like a wire, so it’ll short-circuit if you don’t put a resistor in front of it. If the LED doesn't fry before, which it will. But you don't want to use LED matrices as fuses.
For our Knight Rider effect, we only need one line, so I connected the 3V power line of the Netduino to one of the + columns of the breadboard. I then pulled a 150 Ohms resistor between that and the column 1 of the matrix. Then I pulled a wire between the pins for each row and the digital ports 0 to 7 of the Netduino:
Now all we have to do is write the small bit of code that will turn the LEDs on and off in the right order:
using System.Threading; using Microsoft.SPOT.Hardware; using SecretLabs.NETMF.Hardware.Netduino; namespace HelloNetduino { public class Program { private static readonly OutputPort[]
ColPorts = new [] { new OutputPort(Pins.GPIO_PIN_D0, false), new OutputPort(Pins.GPIO_PIN_D1, false), new OutputPort(Pins.GPIO_PIN_D2, false), new OutputPort(Pins.GPIO_PIN_D3, false), new OutputPort(Pins.GPIO_PIN_D4, false), new OutputPort(Pins.GPIO_PIN_D5, false), new OutputPort(Pins.GPIO_PIN_D6, false), new OutputPort(Pins.GPIO_PIN_D7, false) }; public static void Main() { var switchPort = new InputPort(
Pins.ONBOARD_SW1,
false,
Port.ResistorMode.Disabled); var dir = 1; var i = 0; while (switchPort.Read()) { for (; i >= 0 && i < 8 &&
switchPort.Read(); i+= dir) { ColPorts[i].Write(false); Thread.Sleep(50); ColPorts[i].Write(true); } dir *= -1; i += dir; } } } }
Now that wasn’t too hard. Here’s the result:
Next time, I’ll really give that shopping list I promised last time, and we’re going to see how to address any of the 8 rows and 8 columns on the matrix even though the Netduino only has 14 digital ports. | http://weblogs.asp.net/bleroy/more-netduino-fun | CC-MAIN-2015-27 | refinedweb | 833 | 74.08 |
But there could be many. require "obj_adams" :) require "obj_barebones" require "obj_pythonese"The point being, instead of building a whole system from the ground up (and doing all the mistakes) a newcomer could simply pick-and-place the one system she wants. And even, if these packages were centrally orchestrated, their API approaches could be at least similar to each other.
Some document could then be used to highlight the differences & pros/cons of each of them.Some document could then be used to highlight the differences & pros/cons of each of them.
-ak 16.1.2005 kello 12:23, Adam D. Moss kirjoitti:
Personally I think that self.something() / self.super.something() is clearer than 'something' implicitly coming out of the class's namespace. A standardised object mechanism isn't likely to address such personal preferences. --Adam | http://lua-users.org/lists/lua-l/2005-01/msg00378.html | CC-MAIN-2019-22 | refinedweb | 137 | 57.57 |
Category:Vox
From Rosetta Code
This programming language may be used to instruct a computer to perform a task. Listed below are all of the tasks on Rosetta Code which have been solved using Vox.
Your Help Needed
If you know Vox, please write code for some of the tasks not implemented in Vox.
If you know Vox, please write code for some of the tasks not implemented in Vox.
The Vox programming language is multi-purpose language, designed to be used in embedded environments, as well as for general purposes.
Vox started out as a fork of the language, but has moved in a different direction ever since. While Squirrel is mostly intended to be used in embedding, Vox tries to tackle this problem by providing a larger standard library (based on Boost), extended object manipulation (borrowing many ideas from languages like Python), and an
import function, that is largely borrowed from Lua (only the idea, not the code). Despite that, Vox is not compatible with Squirrel.
Pages in category "Vox"
This category contains only the following page. | http://rosettacode.org/wiki/Category:Vox | CC-MAIN-2017-30 | refinedweb | 179 | 59.94 |
Is there any limit to record size with data in memory mode ? What if data is stored in ram as well as disk ?
Record Size Limit in Memory Mode
Record size limit comes from write-block-size (1 Mb max, including all overhead) which applies to persistent storage, raw blocks on SSD or file based storage on HDD. If you store records purely in RAM then you are limited only by the various buffers in the food chain from client to Aerospike. If you do storage on disk with data in memory also, the 1MB limitation will apply.
I also observed that with data in memory mode as the size of a map object increases, write into map(insert/update) slows down. I was not expecting this with data in memory mode. What can be the reason for this ?
any update is full rewrite. so with map type bin, if you are inserting new key-value pairs in the map, entire record has to be re-written to a new location. there is no in-situ modification. entire record is stored contiguously in memory.
- Is this storage-engine memory or storage-engine device with data-in-memory true?
- What is the replication factor for the namespace?
I think prior to 3.8.3 sorted maps, maps were serialized using msg_pack. Storage space difference between bins and maps - i think they are still serialized with msg_pack but added info is added for key/index based sorting. Since msg_pack offers compaction, I would think any change will be a full rewrite regardless of storage being memory or disk. So I would think it is a full rewrite - but perhaps kporter may shed more light on the topic.
You are doing add() operations. Refer to the performance table in the map documentation (link above). Your performance is likely dominated by O(N) for unsorted map types and O(log N) for sorted.
There are a number of O(1) operations in the table. Those are still affected by the record size because "Every modify op has a +C for copy on write for allowing rollback on failure." memcpy performance should scale linearly with size, but it is orders of magnitude smaller than the map traversing operations performance.
I am referring to storage-engine memory only. Replication factor is 2.
Basically I am trying to compare this with hset of redis where hset can be as big as 100MB to 1GB. Can such a record be kept as single record (map) in aerospike with similar performance or it is not feasible for such type of records ? | https://discuss.aerospike.com/t/record-size-limit-in-memory-mode/4159 | CC-MAIN-2018-30 | refinedweb | 432 | 65.42 |
CHAPTER 9Stocks and Their Valuation Features of common stock Determining common stock values Efficient markets Preferred stock by Donglin Li
Facts about common stock • Represents ownership • Ownership implies control • Stockholders elect directors • Directors elect management • Management’s goal: Maximize the stock price by Donglin Li
Types of stock market transactions • Initial public offering market (“going public”) (Company sells shares to the public for the 1st times.) • Secondary market (stockholders sell shares to each other) by Donglin Li
Stock Market Transactions • Apple Computer decides to issue additional stock with the assistance of its investment banker. An investor purchases some of the newly issued shares. Is this a primary market transaction or a secondary market transaction? • Since new shares of stock are being issued, this is a primary market transaction. • What if instead an investor buys existing shares of Apple stock in the open market – is this a primary or secondary market transaction? • Since no new shares are created, this is a secondary market transaction. by Donglin Li
Different approaches for valuing common stock • Dividend growth model • Corporate value model • Using the multiples of comparable firms by Donglin Li
Dividend growth model • Value of a stock is the present value of the future dividends expected to be generated by the stock. by Donglin Li
Constant growth stock • A stock whose dividends are expected to grow forever at a constant rate, g. D1 = D0 (1+g)1 D2 = D0 (1+g)2 Dt = D0 (1+g)t • If g is constant, the dividend growth formula converges to: by Donglin Li
$ 0.25 0 Years (t) Future dividends and their present values by Donglin Li
What happens if g > rs? • If g > rs, the constant growth formula leads to a negative stock price, which does not make sense. • The constant growth model can only be used if: • rs > g • g is expected to be constant forever by Donglin Li
If rRF = 7%, rM = 12%, and b = 1.2, what is the required rate of return on the firm’s stock? • Use the SML to calculate the required rate of return (rs): rs = rRF + (rM – rRF)b = 7% + (12% - 7%)1.2 = 13% by Donglin Li
0 1 2 3 g = 6% 2.12 2.247 2.382 D0 = 2.00 1.8761 rs = 13% 1.7599 1.6509 If D0 = $2 and g is a constant 6%, find the expected dividend stream for the next 3 years, and their PVs. by Donglin Li
What is the stock’s intrinsic value? • Using the constant growth model: by Donglin Li
What is the expected market price of the stock, one year from now? • D1 will have been paid out already. So, P1 is the present value (as of year 1) of D2, D3, D4, etc. • Could also find expected P1 as: by Donglin Li
What are the expected dividend yield, capital gains yield, and total return during the first year? • Dividend yield = D1 / P0 = $2.12 / $30.29 = 7.0% • Capital gains yield = (P1 – P0) / P0 = ($32.10 - $30.29) / $30.29 = 6.0% • Total return (rs) = Dividend Yield + Capital Gains Yield = 7.0% + 6.0% = 13.0% by Donglin Li
0 1 2 3 rs = 13% ... 2.00 2.00 2.00 What would the expected price today be, if g = 0? • The dividend stream would be a perpetuity. by Donglin Li
Supernormal growth:What if g = 30% for 3 years before achieving long-run growth of 6%? • Can no longer use just the constant growth model to find stock value. • However, the growth does become constant after 3 years. by Donglin Li
0 1 2 3 4 rs = 13% ... g = 30% g = 30% g = 30% g = 6% D0 = 2.00 2.600 3.380 4.394 4.658 2.301 2.647 3.045 4.658 = = $66.54 46.114 3 - 0.13 0.06 54.107 = P0 Valuing common stock with nonconstant growth $ P ^ by Donglin Li
If the stock was expected to have negative growth (g = -6%), would anyone buy the stock, and what is its value? • The firm still has earnings and pays dividends, even though they may be declining, they still have value. by Donglin Li
Find expected annual dividend and capital gains yields. • Capital gains yield = g = -6.00% • Dividend yield = 13.00% - (-6.00%) = 19.00% • Since the stock is experiencing constant growth, dividend yield and capital gains yield are constant. Dividend yield is sufficiently large (19%) to offset a negative capital gains. by Donglin Li
Corporate value model • Also called the free cash flow method. Suggests the value of the entire firm equals the present value of the firm’s free cash flows. • Remember, free cash flow is the firm’s after-tax operating income less the net capital investment • FCF = NOPAT – Net capital investment by Donglin Li
Applying the corporate value model • Find the market value (MV) of the firm. • Find PV of firm’s future FCFs • Subtract MV of firm’s debt and preferred stock to get MV of common stock. • MV of = MV of – MV of debt andcommon stock firm preferred • Divide MV of common stock by the number of shares outstanding to get intrinsic stock price (value). • P0 = MV of common stock / # of shares by Donglin Li
Issues regarding the corporate value model • Often preferred to the dividend growth model, especially when considering number of firms that don’t pay dividends or when dividends are hard to forecast. • Similar to dividend growth model, assumes at some point free cash flow will grow at a constant rate. • Terminal value (TVn) represents value of firm at the point that growth becomes constant. by Donglin Li
0 1 2 3 4 r = 10% ... g = 6% -5 10 20 21.20 -4.545 8.264 15.026 21.20 398.197 530 = = TV3 0.10 - 0.06 416.942 Given the long-run gFCF = 6%, and WACC (weighted average cost of capital) of 10%, use the corporate value model to find the firm’s intrinsic value. by Donglin Li
If the firm has $40 million in debt and has 10 million shares of stock, what is the firm’s intrinsic value per share? • MV of equity = MV of firm – MV of debt = $416.94m - $40m = $376.94 million • Value per share = MV of equity / # of shares = $376.94m / 10m = $37.69 by Donglin Li
Firm multiples method • Analysts often use the following multiples to value stocks. • P / E • P / CF • P / Sales • EXAMPLE: Based on comparable firms, estimate the appropriate P/E. Multiply this by expected earnings per share to back out an estimate of the stock price. by Donglin Li
What is market equilibrium? • In equilibrium, stock prices are stable and there is no general tendency for people to buy versus to sell. • In equilibrium, two conditions hold: • The current market stock price equals its intrinsic value (P0 = P0). • Expected returns must equal required returns. ^ by Donglin Li
Market equilibrium • Expected returns are obtained by estimating dividends yield and expected capital gains yield. • Required returns are obtained by estimating risk and applying the CAPM. by Donglin Li
How is market equilibrium established? • If expected return exceeds required return … • The current price (P0) is “too low” and offers a bargain. • Buy orders will be greater than sell orders. • P0 will be bid up until expected return equals required return by Donglin Li
Factors that affect stock price • Required return (rs) could change • Changing inflation could cause rRF to change • Market risk premium or exposure to market risk (β) could change • Growth rate (g) could change • Due to economic (market) conditions • Due to firm conditions by Donglin Li
Gap pays a dividend of 9 cents/share Gap has been as high as $52.75 in the last year. Given the current price, the dividend yield is ½ % Given the current price, the PE ratio is 15 times earnings Gap has been as low as $19.06 in the last year. 6,517,200 shares traded hands in the last day’s trading Stock Market Reporting Gap ended trading at $19.25, down $1.75 from yesterday’s close by Donglin Li
Where can you find a stock quote, and what does one look like? • Stock quotes can be found in a variety of print sources (Wall Street Journal or the local newspaper) and online sources (Yahoo!Finance, CNNMoney, or MSN MoneyCentral). by Donglin Li
Efficient Capital Markets • Stock prices are in equilibrium or are “fairly” priced • If this is true, then you should not be able to earn “abnormal” or “excess” returns, in expectation. • Efficient markets DO NOT imply that investors cannot earn a positive return in the stock market. • They do mean that, on average, you will earn a return that is appropriate for the risk undertaken and there is not a bias in prices that can be exploited to earn excess returns. by Donglin Li
What is the Efficient Market Hypothesis (EMH)? • Securities are normally in equilibrium and are “fairly priced.” • Investors cannot “beat the market” except through good luck or better information. • Levels of market efficiency • Weak-form efficiency • Semistrong-form efficiency • Strong-form efficiency by Donglin Li
Weak-form efficiency • Can’t profit by looking at past price trends. A recent decline is no reason to think stocks will go up (or down) in the future. There is no predictable price pattern based on price path. • Real world evidence supports weak-form EMH, but “technical analysis” is still used by some people. by Donglin Li
Efficient Market Theory • Technical Analysts • Forecast stock prices based on the watching the fluctuations in historical prices (thus “wiggle watchers”) by Donglin Li
Semistrong-form efficiency • All publicly available information is reflected in stock prices, so it doesn’t pay to over-analyze annual reports looking for undervalued stocks. • Largely true in real world, but superior analysts can still profit by finding and using new information by Donglin Li
Efficient Market Theory Average Annual Return on 1493 Mutual Funds and the Market Index by Donglin Li
Implications of market efficiency • You hear in the news that a medical research company received FDA approval for one of its products. If the market is semi-strong efficient, can you expect to take advantage of this information by purchasing the stock? • No – if the market is semi-strong efficient, this information will already have been incorporated into the company’s stock price. So, it’s probably too late … by Donglin Li
One-year-ahead hedge returns based on capital investment levels. ©Donglin Li 2004 • Go long the lowest investment stocks. • Go short the highest investment stocks. • 12 month size adjusted buy and hold hedge returns after May each year. • Positive in 36 out of 39 years, average 12.6% • Pattern is consistent with market mispricing. by Donglin Li
Strong-form efficiency • All information, even inside information, is embedded in stock prices. That is, one cannot profit even on private information. • Not true--insiders can gain by trading on the basis of insider information, but that’s illegal. by Donglin Li
Is the stock market efficient? • Empirical studies have tried to test the three forms of efficiency. • Highly efficient in the weak form. • Reasonably efficient in the semistrong form. • Not efficient in the strong form. Insiders could and did make abnormal (and sometimes illegal) profits. by Donglin Li
What Makes Markets Efficient? • There are many investors out there doing research • As new information comes to market, this information is analyzed and trades are made based on this information • Therefore, prices should reflect all available public information, and almost instantly. by Donglin Li
Preferred stock • Hybrid security. • Like bonds, preferred stockholders receive a fixed dividend that must be paid before dividends are paid to common stockholders. • However, companies can omit preferred dividend payments without fear of pushing the firm into bankruptcy. • No voting right. by Donglin Li
If preferred stock with an annual dividend of $5 sells for $50, what is the preferred stock’s expected return? Vp = D / rp $50 = $5 / rp rp = $5 / $50 = 0.10 = 10% ^ by Donglin Li | https://www.slideserve.com/Gabriel/by-donglin-li-9-1 | CC-MAIN-2020-24 | refinedweb | 2,008 | 63.7 |
I.
#1 by Jack - April 8th, 2008 at 12:59
Believe it or not, it’s actually easier than you think to fade the volume using TweenLite. It’ll take care of the intermediate object for you, and it’ll control the volume of any SoundChannel or MovieClip. Your code could be simplified to look like this instead:
var someSound:Sound = new Sound(new URLRequest(“MySound.mp3″));
var someChannel:SoundChannel = someSound.play(0, 99999);
TweenLite.to(someChannel, 1, {volume:0, onComplete:stopSound});
function stopSound():void{
someChannel.stop();
}
stop();
Oh, and by the way, you might want to check out TweenMax – it’s like TweenLite on steroids. I just launched it a few days ago and it includes a bunch of new features. Since it builds on top of TweenLite, it’ll do ANYTHING TweenLite can do, plus a bunch more.
See more at
Thanks again for the great posts on how to more effectively use the TweenLite family of classes.
#2 by peterjakes - October 7th, 2008 at 10:05
I’m getting a ReferenceError: Error #1056: Cannot create property volume on flash.media.SoundChannel.
#3 by Mikko - December 15th, 2008 at 18:06
There’s an error in the source code:
TweenLite.to(someSound, 1, {volume:0, onUpdate:updateChannel, onComplete:stopSound});
This should be:
TweenLite.to(someChannel, 1, {volume:0, onUpdate:updateChannel, onComplete:stopSound});
That’s because it’s not the Sound object that we’re tweening but the SoundChannel.
#4 by Ian - January 9th, 2009 at 18:05
Awesome- Thanks for the help. Great advice as usual.
#5 by Edward Davies - January 19th, 2009 at 23:40
Hi, there’s a slight error in the code, it should read:
TweenLite.to( someSound.soundTransform, 1, { volume: 0, onComplete: stopSound } );
That should work decently.
#6 by oliver - March 5th, 2009 at 11:32
THANKYOU!!!!
#7 by shawn - June 24th, 2009 at 20:51
@Mikko
No, it should be
TweenLite.to(someTransform, 1, {volume:0, onUpdate:updateChannel, onComplete:stopSound});
we are tweening the transform, and then applying it in function updateChannel()
#8 by jose - October 16th, 2009 at 09:50
@jack
YOU ROCK. A year and a half later, and this just made my life so much easier. Been using tweenLite for a while now, freaking love it so.
#9 by Erwan - October 22nd, 2009 at 10:39
exactly!!!
tweenlite has made my developers life so much easier…. It can’t believe it still helps my more than 1 year after!
Thanks for the tip
#10 by Aaron - November 4th, 2009 at 05:07
Thank you. And thank you Jack. My volume is tweening! Couldn’t figure out which object to target.
#11 by GrM - November 4th, 2009 at 11:39
Thanks, it did the tricks !
#12 by Dharmang - November 17th, 2009 at 06:39
how to use dynamic sound with the tweenlite?
#13 by bluekylin - December 5th, 2009 at 02:36
oh,baby!
TweenLite.to(someSound, 1, {volume:0, onUpdate:updateChannel, onComplete:stopSound});
“someSound ” should be “someTransform”
A-N-D
this transform created some noise among the fade
#14 by Organirama Crowded Rich Media - December 23rd, 2009 at 08:28
A SIMPLER SOLUTION:
GreenSock Tweenlite + Volume Plugin
import com.greensock.TweenLite;
import com.greensock.plugins.*;
import com.greensock.easing.*;
TweenPlugin.activate([VolumePlugin]);
var snd:Sound = new Sound();
var req:URLRequest = new URLRequest(“MySound.mp3″);
snd.load(req);
var trans:SoundTransform;
trans = new SoundTransform(1, 0);
var channel:SoundChannel = snd.play(0, 1, trans);
TweenLite.to(channel, 4, {volume:0});
#15 by Gabriel. - January 20th, 2010 at 18:29
Access to undefined function in line 1 :
TweenLite.to(someSound, 1, {volume:0, onUpdate:updateChannel, onComplete:stopSound});
must i to create as3 tween class inside another file?.
:O
#16 by Dave - April 16th, 2010 at 01:39
@peterjakes
its the “onUpdate:updateChannel”
just take it out, works without it if you use TweenPlugin.activate([VolumePlugin]);
that was mentioned in comment #14
#17 by Vusi Sindane - June 18th, 2010 at 11:23
you should be tweening the soundTransformation object inttead of the channel, and then update the soundchannel. see eg belo
//note that my example is to fade in the sound
var sound:Sound = new music();//loading from library
var tran:SoundTransform = new SoundTransform(0,0);
//play with no volume
var soundChannel:SoundChannel = sound.play(0,0,tran);
//fade in the sound
TweenLite.to(tran, 1, {volume:1,onUpdate:fadeSound});
//update the soundTransformation
function fadeSound():void
{
soundChannel.soundTransform = tran;
}
note you need to set the “volume” to “0″ if you wanna do opposite.
#18 by Kelsey - October 11th, 2011 at 17:21
If you use the included plug-in it’s really simple:
import com.greensock.plugins.TweenPlugin;
import com.greensock.plugins.VolumePlugin;
import com.greensock.TweenLite;
TweenPlugin.activate([VolumePlugin]);
var snd:Sound = new Sound();
var channel:SoundChannel = snd.play(0);
TweenLite.to(channel, 1, {volume:0});
#19 by Sébastien ( Webdesigner – Flash developer) - November 4th, 2011 at 19:51
Thank you ! Code mentioned in comment # 17 is fine.
#20 by bob - December 19th, 2011 at 16:30
Hi.
When I launch multiple tween on multiple sound channel with method mentioned at comment # 17, I’m experiencing little “tic” sounds, in other words, it’s not 100% smooth.
Any idea for this problem ?
Thx.
#21 by DrPelz - January 10th, 2012 at 17:09
@ Vusi Sindane
Thank you very much!:) Your solution works like a charm (I’m so sorry for having to say this but all other “solutions” failed so far for me).
#22 by edere - May 28th, 2012 at 05:40
@Kelsey
Thanks for much simpler code!
I nearly frustated wjavascript:document.tcommentform.submit();ith this thing. | http://www.zedia.net/2008/fading-out-volume-using-tweenlite/ | CC-MAIN-2017-17 | refinedweb | 933 | 58.38 |
sex free clips of
male. Msnbc pageant my fucking tits models pretty art thumb. Cam phat required counseling indian extra
long male positions dick. Momen pokemon portland grief hard pornno infiniti transsexual positions scaricabili code actor. That scaricabili download extra russian ring home chubby girls bisexual links com cherry homemade
ring perky housewife pornno chubby
submit lesbian required board portland beastiality masturbating! Rose panties dads fil your jocks whore
japanese link virgin
babes com asian top dating ebony tiscali essex momen making video freeporn spanish submit masturbating vids offender paris search pageant husbands. Will toys. Grief couple virgin your pic popping sexy credit pretty grief mallorca film.
Art latinas
Freeporn infiniti asian needed have team beastiality. Xxx board de hooters videos. Big wife pokemon. Wild card new woman avi offender nipple wife masturbating shows indian cz. Russian kitty code school fuck dads babes bi msn. Amatoriali housewife portland
site free no credit
manila big latino that sussex younng wild hardcore momen sucking naked. Dick rocket forced cam gratis amatoriali models fil. Beastiality lesbian gfx hot strip infiniti indicators woman movies cherry de
tits sex big
of have for com required old youth art dating extra! Core essex film film will giant. Mpeg men movies galleries adult homosexuality japanese tease long pokemon strip search porn mpeg clips. Positions foto guba in manila help whore gallery hilton oregon vancouver download tapes link windsor membership. Spanish forced pretty dating naked. Men younng xxx have. Erotic hooters live lesbians dancer men whore with phat. Of twat tits popping whore. Long older latin hooters. Videos
cherry porn clips popping
new indian dancer top russian internet ebony lesbians virgin hot male. Homemade anna momen. Rated porn youth s. Jocks that msnbc pursian. Gallery your sexy hard with tapes required. And erotic therapy of gratis photo cherry xxl forced popping for peter therapy and ells.
Wild
Import new younng dick couple jocks forced. Foto. Girls. Spanish. Link msn xxx submit kitty rose latin jjj. Lesbians internet. Fil japanese. Ring infiniti and rocket film perky board store latino beaches ass. Japanese com kitty videos gallery msn anna forced amateur your porn photo beastiality. Peter pussy twat team hooters strip pornno japanese love
manila pageant transsexual in
Amatoriali positions code panties wife mature jjj no cherry com guba making thumbs husbands dating rocket babes home mallorca peter online male film dildo
xxl dicks
the virgin love fil vids adult belle your girls whore giant. Whore older lesbian. Rated. Hates ebony videos peter beaches sucking gfx hot tits female pic new mpeg gfx vids fat xxx. Vids password of card help. Younng of on s top. Latinas core. Membership woman gratis pornno de movies
beautiful credit store. Momen sussex virgin xxx team ring lactating counseling. And n.
Video giant
Models store. Indicators required gay bisexual jocks lactating female your mpeg giant. Sexy the panties sexy panties female sussex liv strap gallery virgin scaricabili lesbians. S art.
Essex wife new. Anal
therapy sussex grief
positions latinas women credit star north gfx fat needed perky. Strip toys vancouver movies. Offender clips required housewife hard vancouver girls clips
nude art
rated for anal links videos hooters ring film strap code popping jjj import xxx extra for
sex positions love
tapes belle belle nipple password cam mpeg links dicks
free porn
Latin online homemade xxl momen jjj board no girls gay tits popping male
porn amatoriali
cz hardcore x jocks dildo gratis shows hot password porn boys adult the extra membership beaches gfx galleries old core making. Rocket. Latinas help xxx. Download jocks dancer older dating xxx homosexuality fil rated love panties. Homosexuality porn. Tease school top ebony dildo. Husbands videos pornno pokemon hates making lesbian woman dicks strap scaricabili thumb mallorca xxl in kitty grief thumbs latin love pursian chubby japanese old momen nude webcam help anna manila art on oregon internet star will belle. Phat will actor dick store membership free babes with oregon on indian transsexual husbands film hardcore latin paris
guba older woman
ass dancer vids anal vids video help transsexual bi grief hates japanese hooters galleries asian naked.
Import
Jocks windsor chubby required de cam tease msn the import long team younng. Love naked anal windsor s
porn mpeg free extra
search vids sexy beastiality
latinas a avi hot import
ring hot porn xxl pretty dating shows submit.
And man girls belle masturbating lesbian nude whore forced live actor internet
hilton dating
thumbs bisexual tease. Sucking anna shows thumb no hates. Panties man giant fucking ass. Your kitty mallorca! Younng card ebony mature momen fat guba bi older amatoriali youth indian windsor
models
indian love. Sucking video msnbc dating fucking home com have strip older men strap male link gallery. Portland jocks search positions on latino xxl popping top dancer housewife big fuck couple counseling beautiful top tapes. Best photo pokemon dicks code galleries links gay mpeg dancer your fucking guba bisexual sexy indicators erotic belle strap transsexual oregon.
Online board virgin ebony fat. Help your needed in pornno lesbian dancer panties tiscali essex extra tits woman manila. Paris cherry core latin
pussy needed your
man dirty film anal xxx pageant freeporn pageant scaricabili toys. Live manila virgin help. Lesbians star nipple whore dick new x lactating therapy foto. S fuck toys store beastiality extra hardcore latinas infiniti rated women infiniti. Free site. Love girls.
Sucking your
Anal links free husbands older rated free. Freeporn tits man naked a shows grief will oregon msn younng dirty. Rocket best positions big strip porn. Vids chubby virgin
momen masturbating old old
chubby thumbs store north thumbs avi female dicks of masturbating s boys phat jjj needed wife windsor site adult hardcore
fat women porn
xxx mature paris anna panties girls. My homemade download latin de tiscali fucking big virgin board ass. Guba spanish toys. Jocks beastiality indicators s phat. Credit peter shows. Latinas love import women positions videos dildo? Jjj video sussex giant portland spanish film cam dads pokemon extra. Events amateur rated man movies jocks liv dating code movies video long cherry live erotic toys clips freeporn windsor rose required hot no. Have babes lesbian male search star sucking team online youth code msn live! Popping bisexual shows msnbc man perky membership events asian toys tease dicks youth. Therapy a making! Latino nude kitty rose card vancouver latino foto dick adult needed
bi sexy
indian beaches giant adult submit lesbian older cam pursian free xxl
woman required free membership
sucking fat. Models. Husbands. Whore. Wife homosexuality
porn xxx paris hilton
hooters. Jjj nude top male help love. Pretty beautiful hard sexy lesbians indian hates anna tiscali gay north store password latinas. Manila hooters photo live essex movies hardcore. School
homosexuality of youth indicators
links chubby lactating thumb pretty women. Ass asian online guba
on rocket japanese galleries mpeg board dancer foto russian virgin de fat strap housewife scaricabili hilton clips school infiniti avi password that that
fat pic for dads internet. A long the amateur link dick counseling infiniti guba asian therapy will
import models hot
sussex younng dildo pic store russian link pageant lactating russian cherry porn code
lactating clips
and porn dicks ring star cz. Fil team dating paris. Help photo film
xxx free
ass team erotic with actor gratis! Best your wild x freeporn
tapes north gfx card. X clips. Offender. Girls scaricabili naked youth fucking gallery have pornno internet. Counseling lesbians core female bi thumb art. Dancer on oregon. Strip couple homemade phat msn internet babes school belle old? Peter? Twat.
Gay video amateur
that amatoriali men. Beautiful panties panties gratis the fucking have sexy foto hates webcam!
Ring
Will jjj site paris vids. Mature strip msn fat nipple thumb. Tits membership counseling actor hot dating ebony required amatoriali mpeg pursian fuck boys msnbc women erotic transsexual shows men sucking lesbian dildo dicks. Jjj de russian team liv my pokemon de help ring transsexual pornno toys adult
code rose
tiscali tits store grief spanish forced hardcore couple positions online vids older star jocks wild housewife! A younng russian pic amatoriali pursian gallery gallery required porn fil video virgin latinas panties credit. Hates beautiful my hard vancouver pornno whore. Dick homosexuality pageant amateur. Dating your msnbc old beastiality thumbs card twat. Freeporn homosexuality bi momen therapy beaches kitty
wife homemade sex
dildo xxl ring amateur. Latin galleries windsor a. With essex rose art
art school
the. Tapes old. Younng home wild mpeg
no needed download free
import anal mature membership tiscali. Movies in amatoriali clips boys phat windsor jjj for. Com cz sexy wife lactating link live erotic live board. Needed dancer. Giant japanese love chubby fuck. Star toys. Girls in hard dating
sex adult store dildo
perky. Galleries extra ebony film
movies dirty
tapes husbands that. Fucking pussy female. Tits north masturbating new strap youth the scaricabili gratis
clips cherry oregon sussex free liv japanese ring babes anna rnography.
Mpeg xxx
Avi. Essex men. Mature popping latin search female s hooters indian x long tits pokemon wife hates positions my models pussy models. Membership pageant thumb older windsor gfx boys
porn hard x core
clips tease board. Woman gfx amatoriali beastiality download. Female husbands free school board. Film with homosexuality paris have dicks. Homemade
kitty pursian free movies
credit erotic tiscali
asian
peter fuck board woman lesbians pornno hates ebony
my dick
perky strap lesbian
husbands bisexual sucking
art hilton cam liv nipple online password male star. Love paris offender shows wife fuck asian panties videos tiscali site indicators. Offender sucking avi internet core hot msn link adult on essex indian lesbians ass webcam avi old grief. Lactating
tapes download
bi mature import anna girls foto? Your russian galleries tapes rocket popping anal beaches cz housewife for will bi chubby therapy that. Beautiful fat events home woman grief. Art pretty making import latino pornno rated guba
sex asian younng girls
models top latino no foto fat dads ring amateur movies latinas dicks. Bisexual porn hooters pokemon. De twat beaches hates liv best store kitty my jjj beastiality cherry couple best new gratis lactating amateur whore wild de dating north naked links portland pornno phat forced foto hardcore sucking chubby toys oregon video help. Rocket. Gfx beautiful password beaches password required credit film thumbs of. School dirty mpeg pageant babes women the male vids xxl pussy older shows amatoriali help infiniti new
old perky
jocks card belle thumb internet grief with needed extra. For perky beautiful dicks chubby
thumb chubby
code dancer guba your free jjj scaricabili extra momen msn core free team. Galleries vids fil
gay twat
for needed strap a popping events spanish that top video rocket! Dildo erotic. Hooters men
internet the sex of
oregon transsexual your virgin. Freeporn belle gallery lesbians pursian actor man tapes portland indian latin homosexuality bi pic movies site dick link giant younng paris ebony com freeporn download xxx movies japanese strap. Anna panties anna team code and anal gratis husbands.
Belle couple anna
fil infiniti vancouver. Hard sucking
art erotic
dirty xxx north spanish indicators
twat sexy top amateur.
team japanese grief. Video counseling ring couple cam latin. My therapy male indian mpeg. Amatoriali woman ass babes long latinas hardcore whore download. X belle xxl strip will hardcore s twat thumb. Grief search core xxl board manila code woman men link will perky mallorca home shows japanese forced galleries twat! Videos freeporn! Events membership rated in xxx adult portland! Pokemon gallery. The lesbian wild download card needed babes thumb bisexual positions. Toys tapes? Older com freeporn that live liv ass paris whore
porn rose
live. Big on:.
strip belle spanish. Dick actor phat old and link positions beautiful kitty boys big credit your core long bisexual! Cz import rose movies dicks wild peter. Making jjj pornno gallery dads help ring internet credit mallorca have photo beaches ebony scaricabili sexy transsexual. Hot. Xxx lesbian actor sucking on dancer for. Nude cam momen grief dildo hooters strap fat lesbians pokemon gfx perky erotic shows beastiality. Dicks x in. Art. Store housewife membership extra dildo
home video sex
men top link pussy online masturbating forced galleries lactating wife mpeg rocket infiniti? Models import mature girls site password twat ass clips toys. Older indian shows positions jocks boys store video gratis. Female help
best. Peter thumb. Gay xxl. Avi
phat strip ass
porn.
pornno code virgin therapy ebony vancouver pursian jocks liv latinas board top youth submit. Xxl site pussy russian therapy asian team offender dating
pretty star
cz scaricabili jjj! Women vids strap positions dancer
offender oregon portland
vancouver wild lactating liv latin cherry vancouver dicks. Sexy school infiniti popping
lesbians strap on
link? Infiniti pageant sucking big hates
porn female thumbs
the photo your jjj film. On store fil pussy wife indian strip core rated submit actor beastiality belle lesbian beautiful gratis pornno japanese north dick events mature with galleries code photo homemade man love essex
babes nude sexy
hot anna tits credit internet no. Homemade
russian sexy
sussex import oregon avi pageant a board help beaches transsexual windsor. Best chubby
twat porn
girls pornno. Fil latino. Amatoriali dick vids code belle videos hot anal school ring my. Online manila xxx toys erotic videos shows
portland shows live
tiscali older xxl couple momen the indicators
spanish sex
gallery strap man dating oregon guba sexy tiscali bisexual fat team transsexual girls. Ring gallery movies masturbating
old links
no. X panties
naked asian
lesbians actor msnbc of download dicks manila female dicks models. Clips shows amatoriali thumb clips tits have lactating store dancer on? Pretty have freeporn paris film password webcam long latinas download required that the amateur sucking dildo star gay porn man russian. Long gay home a avi fucking pursian porn hard virgin photo ebony membership tapes and twat indian dildo essex
windsor essex school
fucking positions therapy. Card amateur models windsor. S toys import internet. Live extra bi gallery. In indicators. New love portland have fuck with phat making male top jjj peter on lesbians babes required husbands avi couple paris. Toys! Art events panties counseling cherry dancer woman rocket dating anna youth that dick. Hooters whore nude needed bi twat spanish tapes foto webcam of anna male dirty. | http://uk.geocities.com/dietpillbfgkille/wfgzh/yu-gi-oh-sex-picture.htm | crawl-002 | refinedweb | 2,927 | 58.38 |
Aaron T. Myers created HDFS-3835:
------------------------------------
Summary: Long-lived 2NN cannot perform a checkpoint if security is enabled and
the NN restarts without outstanding delegation tokens
Key: HDFS-3835
URL:
Project: Hadoop HDFS
Issue Type: Bug
Components: name-node, security
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
When the 2NN wants to perform a checkpoint, it figures out the highest transaction ID of the
fsimage files on the NN, and if the 2NN has a copy of that fsimage file (because it created
that merged fsimage file the last time it did a checkpoint) then the 2NN won't download the
fsimage file from the NN, and instead only gets the new edits files from the NN. In this case,
the 2NN also doesn't even bother reloading the fsimage file it has from disk, since it has
all of the namespace state in-memory. This all works just fine.
When the 2NN _doesn't_ have a copy of the relevant fsimage file (for example, if the NN had
restarted since the last checkpoint) then the 2NN blows away its in-memory namespace state,
downloads the fsimage file from the NN, and loads the newly-downloaded fsimage file from disk.
The bug is that when the 2NN clears its in-memory state, it only resets the namespace, but
not the delegation token map.
The fix is pretty simple - just make the delegation token map get cleared as well as the namespace
state when a running 2NN needs to load a new fsimage from disk.
Credit to Stephen Chu for identifying this issue.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201208.mbox/%3C1243872222.38125.1345598318160.JavaMail.jiratomcat@arcas%3E | CC-MAIN-2015-35 | refinedweb | 294 | 56.29 |
What is the most usual method of creating an instance of a class in java? Most people will answer this question: “using new keyword“. Well, it is considered old fashioned now. Lets see how??
If object creation code is spread in whole application, and if you need to change the process of object creation then you need to go in each and every place to make necessary changes. After finishing this article, while writing your application, consider using factory pattern.
In my previous post, “Singleton design pattern in java“, we discussed various ways to create an instance of a class such that there can not exist another instance of same class in same JVM.
In this post, i will demonstrate another creational pattern, i.e. Factory pattern, for creating instances for your classes. Factory, as name suggest, is a place to create some different products which are somehow similar in features yet divided in categories.
In java also, factory pattern is used to create instances of different classes of same type.
Sections in this post
- Background information
- Implementation
- Advantages of factory pattern
- Final notes
Background information.
A picture is worth thousand words. Lets see how a factory implementation will look like.
Above class diagram depicts a common scenario using example of car factory which is able to build 3 types of cars i.e. small, sedan and luxury. Building a car requires many steps from allocating accessories to final makeup. These steps can be written in programming as methods and should be called while creating an instance of a specific car type.
If we are unfortunate then we will create instances of car types (e.g. SmallCar) in our application classes and thus we will expose the car building logic to outside world and this is certainly not good. It also prevents us in making changes to car making process because code in not centralized, and making changes in all composing classes seems not feasible.
Implementation
So far we have seen the classes need to be designed for making a CarFactory. Lets hit the keyboard now and start composing our classes.
CarType.java will hold the types of car and will provide car types to all other classes
package designPatterns.creational.factory; public enum CarType { SMALL, SEDAN, LUXURY }
Car.java is parent class of all car instances and it will also contain the common logic applicable in car making of all types.
package designPatterns.creational.factory; public abstract class Car { public Car(CarType model) { this.model = model; arrangeParts(); } private void arrangeParts() { // Do one time processing here } // Do subclass level processing in this method protected abstract void construct(); private CarType model = null; public CarType getModel() { return model; } public void setModel(CarType model) { this.model = model; } }
LuxuryCar.java is concrete implementation of car type LUXURY
package designPatterns.creational.factory; public class LuxuryCar extends Car { LuxuryCar() { super(CarType.LUXURY); construct(); } @Override protected void construct() { System.out.println("Building luxury car"); // add accessories } }
SmallCar.java is concrete implementation of car type SMALL
package designPatterns.creational.factory; public class SmallCar extends Car { SmallCar() { super(CarType.SMALL); construct(); } @Override protected void construct() { System.out.println("Building small car"); // add accessories } }
SedanCar.java is concrete implementation of car type SEDAN
package designPatterns.creational.factory; public class SedanCar extends Car { SedanCar() { super(CarType.SEDAN); construct(); } @Override protected void construct() { System.out.println("Building sedan car"); // add accessories } }
CarFactory.java is our main class implemented using factory pattern. It instantiates a car instance only after determining its type.
package designPatterns.creational.factory; public class CarFactory { public static Car buildCar(CarType model) { Car car = null; switch (model) { case SMALL: car = new SmallCar(); break; case SEDAN: car = new SedanCar(); break; case LUXURY: car = new LuxuryCar(); break; default: // throw some exception break; } return car; } }
In TestFactoryPattern.java, we will test our factory code. Lets run this class.
package designPatterns.creational.factory; public class TestFactoryPattern { public static void main(String[] args) { System.out.println(CarFactory.buildCar(CarType.SMALL)); System.out.println(CarFactory.buildCar(CarType.SEDAN)); System.out.println(CarFactory.buildCar(CarType.LUXURY)); } } Output: Building small car designPatterns.creational.factory.SmallCar@7c230be4 Building sedan car designPatterns.creational.factory.SedanCar@60e1e567 Building luxury car designPatterns.creational.factory.LuxuryCar@e9bfee2
As you can see, factory is able to return any type of car instance it is requested for. It will help us in making any kind of changes in car making process without even touching the composing classes i.e. classes using CarFactory.
Advantages of factory pattern
By now, you should be able to count the main advantages of using factory pattern. Lets note down:
- The creation of an object precludes its reuse without significant duplication of code.
- The creation of an object requires access to information or resources that should not be contained within the composing class.
- The lifetime management of the generated objects must be centralized to ensure a consistent behavior within the application.
Final notes
Final notes
Factory pattern is most suitable where there is some complex object creation steps are involved. To ensure that these steps are centralized and not exposed to composing classes, factory pattern should be used. We can see many examples of factory pattern in JDK itself e.g.
- java.sql.DriverManager#getConnection()
- java.net.URL#openConnection()
- java.lang.Class#newInstance()
- java.lang.Class#forName()
I hope, I have included enough information to make this post informative. If you still have some doubt, please leave a comment. I will be happy to discuss with you.
Happy Learning !!
Definitely, if it’s “modern” or “up-to-date” to instantiate classes in Java within a switch case statement, you need to change your programming language right now. Consider moving your skills to a professional programming language. You are working very hard in Java. A real software development don’t need to be very hard. Regards.
Hi Fabian, Thanks for the suggestion but perhaps it will be unnecessary. I really looked up in outer world to find if I am THE alien, but found plenty of guys like this.
I will not say that you are incorrect, but will appreciate if you can express your thoughts with facts/reasons. OR you can suggest your edit in wikipedia page because there I could see the same thing.
Hi Lokesh,
How do you avoid casting after getting back an instance?
LuxuryCar lc= (LuxyryCar) FactoryManager.buildCar(CarType.LUXURY);
Thanks a lot!
Hello Lokesh Gupta,
can you tell me how the factory method is use to save the memory in java?
Please tell me about this.
I might sound silly , why the concrete classes (Sedan,etc) are public. Since we are exposing a factory class to create us instance , it doesn’t make sense to have them public right , any one can create new instance of these classes directly from anywhere, ain’t they ??
Hi in the original GOF factory pattern, there is a concept of having a creator and a concrete creator. I find this missing in your example. The CarFactory class is of course the concrete creator , but it does not extend from an abstract class / interface (creator)
So is it OK to call it a factory pattern without having the creator / concrete creator
It’s impossible to read this article on Samsung Galaxy 10.1 – the black advert strip jumps to the center of the webpage and covers the article text.
I am trying to understand the problem. In the mean time, if possible then can you please share a screenshot of the problem.
How to reuse the same object? i.e If we call – CarFactory.buildCar(CarType.SMALL) multiple times, it returns a new object for every call.
Regards,
Praveen Das
Hi Praveen, another question in response to your question. Why would a factory will return the same instance on each request? What’s use of this approach? I believe you will have a good reason for it, so please share with us. It will expand our thought process as well.
To answer your question, you can apply singleton pattern on different Car objects and return from construct() method.
On another thought, you should better using prototype pattern, if you want to save some CPU cycles in car construction process. Singleton simply doesn’t make sense to me.
Hi Lokesh, I’m looking at implementing a mailbox which fetches messages at a specific time interval.
Each messages fall under different types of validations, so depending on the NotificationType i call the ValidationFactory.
since there are for ex:22 notifications, and 100 messages, the notifications can be of same type. in this case i should be able to reuse the already created object for that particular NotificationType.
NotificationType would be my enum class.
ValidationService would be the parent class for all validation instances.
ValidationFactory would get me the respective object for the notificationType as input.
Regards,
Praveen
I don’t think that Factory is the pattern you are looking for. However if you really want to use Factory pattern as your approach you should combine it with a caching adapter for the actual Factory which returns the cached instances.
Thanks for this wonderful example. My doubt is why do I get a warning in my IDE that the method “construct()” shouldn’t be called in side the construtor because it’s an overridable method?
I read that it has something to do with inheritance and if other classes extend this class things COULD go wrong down the line if someone overrides that method in another place.
Did I understand correctly the potencial danger here?
There is a danger in doing this and you should not do it (add overridable methods in the constructor)
Effective Java 2nd Edition, Item 17: Design and document for inheritance or else prohibit it
“There are a few more restrictions that a class must obey to allow inheritance. Constructors must not invoke overridable methods, directly or indirectly. If you violate this rule, program failure will result. The superclass constructor runs before the subclass constructor, so the overriding method in the subclass will get invoked before the subclass constructor has run. If the overriding method depends on any initialization performed by the subclass constructor, the method will not behave as expected.”
Pretty valid point.
Agreed!
Thanks Lokesh Gupta ji.
I guess there is one small mistake (inaccuracy) in your example.
You have to make method constcut() abstract and put it into constructor instead of method arrangeParts():
public Car(CarType model) {
this.model = model;
//arrangeParts(); // remove this. You cold leave it but it just misslead you, it has nothing to design pattern.
construct(); // abstrac method! will be called implementation from interited class.
}
Then constructor LuxuryCar will look like:
LuxuryCar() {
super(CarType.LUXURY);
//construct(); // remove this!
}
Agreed!
dear roman , pls explain i am new to java you told that use construct method in Car class constructor if we do so than how it will display information, because constructor will use this method definition from its own class and in Car class method construct() have empty definition that it is abstract.
Sagar, Car is an abstract class you cannot create instance of it. You can create instance of classes which “extend Car” e.g. SmallCar. So actually in runtime construct method will be called either from SmallCar, SedanCar or LuxuryCar only. At this stage I would like to suggest you to first brush up the basic java concepts. That will help you in understanding more complex topics. Good Luck !!
thnx
Can you pls suggest me learning link with exercise.
And Then
thnx boss :)
I do not agree with you Roman. The example is great.
The consruct() method is for logical code in the inherited Sub-Classes,
while method arrangeParts() is calling in the constructor, if in some cases has needs for that.
Regards
Good post, thank you. Could you make it a bit more complex (as part2) in the way, lets say: Car is composed of underframe, engine and body and each of this component is created by factory depending of type of a car. What I’d like to see is how those factories will be used together, where to put it, how to call it. Thanks
A similar study you will find at:
Ok, thanks. What I’m trying to achieve is how to junittest class which have nested classes which have nested classes. Lets say: Service class uses nested Util class and Util class uses nested library class. And that library class is created by hardcoded new LibClass() but I’d like to use factory to easily switch instance to LibTestClass(). That is because LibClass connects to the internet but I’d like to mock it. By factory. Not by EasyMock or so even if it is possible. Lets say in another production environment I’d like to use LibClassDatabase() instead of LibClassHttp()
Awesome stuff! Thanks.
Read jus now before interview…really worth it!
Thank You. That was useful. :)
This was helpful. Thanks! :)
You did not mention where CarFactory.buildCar() should be called… You have just written the testing part but I suppose the integration part is missed…
Whenever you want a instance of car, use this method. Integration part.. I don’t fully understand what you are referring to?? Where you want to integrate this example??
Thanks for ur prompt response… I got it… 1 more ques… In the UML diagram, am not able to understand the relationship between Car and CarFactory…
yeah the arrow shud be in opposite direction.
i do agree , the arrow should be in opposte direction
Ref : see uml diagram in :
thanks dear
Note:- In comment box, please put your code inside [java] ... [/java] OR [xml] ... [/xml] tags otherwise it may not appear as intended. | http://howtodoinjava.com/2012/10/23/implementing-factory-design-pattern-in-java/ | CC-MAIN-2015-40 | refinedweb | 2,270 | 57.37 |
Google Kills More Services, Open Sources Sky Map
Soulskill posted more than 2 years ago | from the endings-and-beginnings dept.
(5, Insightful)
FreeCoder (2558096) | more than 2 years ago | (#38775000)
Re:Cloud Services vs. Desktop Apps (5, Insightful)
NeutronCowboy (896098) | more than 2 years ago | (#38775074):Cloud Services vs. Desktop Apps (1)
Anonymous Coward | more than 2 years ago | (#38775130):Cloud Services vs. Desktop Apps (1)
Cryacin (657549) | more than 2 years ago | (#38777117)
And I would be very interested to see Word Perfect 1.0 run on any modern hardware without some very very serious hack rages going on.
Re:Cloud Services vs. Desktop Apps (1)
ozmanjusri (601766) | more than 2 years ago | (#38778181):Cloud Services vs. Desktop Apps (2)
hairyfeet (841228) | more than 2 years ago | (#38779035) i'd like to meet the guy that wrote that thing and kick him in the nuts. WTH was he thinking tying the software to a SPECIFIC version of Flash? WTF?) can be run in XP Mode or other VM with some tweaking but I found out the hard way there IS software out there that HAS to be run bare metal.
As for TFA frankly anyone that uses ANY Google service that isn't already extremely popular deserves what they get sadly. Google has shown their entire business model is "throw it against the wall and see what sticks" and anything that doesn't grab a huge share is shitcanned, see TFA and Google Wave and a dozen others like Buzz for examples. If anyone is stupid enough to buy a Google TV, I don't care if its Intel, ARM, or MIPS, they are pissing their money away as i bet that'll be gone soon enough too. Google has shown their whole plan revolves around capitalizing on eyeballs and search while spending as little as possible and while Sony, Apple and MSFT have all cut checks Google has made it clear they ain't paying shit to the content owners so any Google TV will simply be banhammered from their services.
But I hope this has taught many a valuable lesson, don't bother relying on a Google service until it hits 30 million plus users bare minimum, probably 60 to 70 million just to be safe. They have made it clear with these service killings that 8-12 million is just too small potatoes for them to care about and they only want hits, misses will be culled.
Re:Cloud Services vs. Desktop Apps (5, Insightful)
GerryGilmore (663905) | more than 2 years ago | (#38775320)
Re:Cloud Services vs. Desktop Apps (4, Insightful)
afabbro (33948) | more than 2 years ago | (#38775797) (1)
zidium (2550286) | more than 2 years ago | (#38776817):Cloud Services vs. Desktop Apps (0)
Anonymous Coward | more than 2 years ago | (#38776933)
whereas in the cloud, you cannot.
Re:Cloud Services vs. Desktop Apps (2)
hairyfeet (841228) | more than 2 years ago | (#38779099) standard 10 years for business OS. Since there will be at least 2 if not 3 releases in your support window that gives you plenty of time to test and get your core software switched over and simply go from one to the other. I've finally got the last of my business customers switched over to 7 and now that all the software is certified working and they are all happy all I have to do is bring Win 7 machines online as they need them because Win 7 is supported until 2020. this let them skip Vista completely and they'll probably skip 8 and 9 as well and be ready to start certifying their 'must have' software for Windows 10 around 2018.
So I'd say the key is to base your plans around software that has LTS and think long term rather than risk betting too much on software that may not be here tomorrow. A good example below is Red hat. if your software runs on RHEL they have plenty of LTS options and you know they aren't going anywhere so planning your business around RHEL wouldn't be a problem, but as we saw not too long ago planning your business around CentOS would be bad as they could disappear tomorrow. it all comes down to LTS and how much you can trust the company to provide it, all the companies you named along with Red hat and a few others have the LTS options one can plan a business around without any real fear of getting burned.
Re:Cloud Services vs. Desktop Apps (1)
vadim_t (324782) | more than 2 years ago | (#38776687) needed to pay for the entire infrastructure, and you have no significant influence on the company that runs it. If it starts being unprofitable, it will get shut down, even if you still want to pay those $10.
Re:Cloud Services vs. Desktop Apps (0)
Anonymous Coward | more than 2 years ago | (#38779917)
Cloud Service same as is DRM. You never know at what point the owner pulls the plug.
When you favor Cloud, it is same as you would favor DRM.
Re:Cloud Services vs. Desktop Apps (4, Funny)
kwerle (39371) | more than 2 years ago | (#38775076)
Yeah, it's a shame that you're 100% locked in to their free service, there is no warning, and you can't get your data out, or use any alternatives.
Oh, wait...
Re:Cloud Services vs. Desktop Apps (5, Interesting)
FreeCoder (2558096) | more than 2 years ago | (#38775138)
Re:Cloud Services vs. Desktop Apps (1)
kwerle (39371) | more than 2 years ago | (#38775887):Cloud Services vs. Desktop Apps (1)
DragonTHC (208439) | more than 2 years ago | (#38776659):Cloud Services vs. Desktop Apps (1)
Anonymous Coward | more than 2 years ago | (#38777533)
)
TechGuys (2554082)
GreatTech (2557540)
FreeCoder (2558096)
What are alternatives to (1)
Snaller (147050) | more than 2 years ago | (#38775853)
Reader, Gmail?
(I mean we know they are going to close them eventually as well, right?)
Re:What are alternatives to (2)
kwerle (39371) | more than 2 years ago | (#38776507) not surprise me even a little if reader goes away, I fully expect that it will just roll into some other service they provide (plus, probably); not to mention that they make it trivial to migrate away, should you so choose: [blogspot.com]
What's the deal?
Re:Cloud Services vs. Desktop Apps (1)
History's Coming To (1059484) | more than 2 years ago | (#38778235)
The cloud is a joke generally, but trust "them" to give you your backups 99.99999% of the time, and that's really useful.
Re:Cloud Services vs. Desktop Apps (2)
LostCluster (625375) | more than 2 years ago | (#38775100)
Microsoft is constantly trying to move Office into the cloud, so what's the difference?
Re:Cloud Services vs. Desktop Apps (1)
hairyfeet (841228) | more than 2 years ago | (#38779129) giggles i stuck on office 2k and guess what? It STILL works.
From what I've seen of Office 360 or whatever the hell they call it they are going for more of a collaborative thing, to let office users share work as well as have a cheaper way to add office machines without needing a full copy of Office. But i haven't seen anybody say they were gonna remove the ability to use plain old offline MS Office and unless Ballmer is even worse of a CEO than i think he is i doubt seriously anybody in the future will be talking that either.
I'd say this is a problem for those that use Google cloud services but frankly most of their stuff is cheap or free and they DO give you plenty of warning and easy ways to migrate so....meh. I really can't fault the company for dumping a money loser as long as they keep giving users easy ways to migrate. This is one point in Google's favor as they do give plenty of time, its not like they just flip the switch the second they decide to kill something. How long did they give you on Wave and Buzz, something like 6 months? Plenty of time IMHO. i'd say the only thing I'd change is that they have a set in stone EOL like on MSFT's products but frankly they have so many I doubt anybody would know what the EOL is on any of them anyway, so keep doing what you are doing Google, it seems to be fair.
Re:Cloud Services vs. Desktop Apps (0)
Presto Vivace (882157) | more than 2 years ago | (#38775104)
Re:Cloud Services vs. Desktop Apps (3, Insightful)
Anonymous Coward | more than 2 years ago | (#38775148):Cloud Services vs. Desktop Apps (3, Insightful)
Anonymous Coward | more than 2 years ago | (#38775174):Cloud Services vs. Desktop Apps (1)
genner (694963) | more than 2 years ago | (#38777155):Cloud Services vs. Desktop Apps (1)
Presto Vivace (882157) | more than 2 years ago | (#38778293)
Re:Cloud Services vs. Desktop Apps (1)
bgarcia (33222) | more than 2 years ago | (#38775116)
And Google is trying to make sure that's possible. [dataliberation.org]
Re:Cloud Services vs. Desktop Apps (5, Insightful)
FreeCoder (2558096) | more than 2 years ago | (#38775170)
Re:Cloud Services vs. Desktop Apps (1)
Yoda's Mum (608299) | more than 2 years ago | (#38779097) (5, Insightful)
madmark1 (1946846) | more than 2 years ago | (#38775322):Cloud Services vs. Desktop Apps (0)
Anonymous Coward | more than 2 years ago | (#38775728)
Guess I'm loading up win2k or WINE.
I'll still be able to run everything but the games and even many of those games will run on 2k with an xp
.dll or regedit(it hilarious)
Re:Cloud Services vs. Desktop Apps (1)
Snaller (147050) | more than 2 years ago | (#38776815)
I mourn the death of Clippy!
Re:Cloud Services vs. Desktop Apps (2)
hairyfeet (841228) | more than 2 years ago | (#38779149) with a 6 core. It took less than 5 seconds and it said 'thank you" and that was it.
So with offline software there is ALWAYS a way around it, be it "legit" or no, but online only and you're screwed. After all I don't see anybody playing their Star Wars Galaxies characters they invested serious time and money in now, do you?
Re:Cloud Services vs. Desktop Apps (1)
andydread (758754) | more than 2 years ago | (#38775354)
Re:Cloud Services vs. Desktop Apps (1)
icebraining (1313345) | more than 2 years ago | (#38775490)
For Wikipedia: [wikimedia.org]
For Office, there's Sironta [sironta.com] . Server-less P2P collaboration that works on the three major OSs. It's AGPLv3 licensed.
Re:Slashdot a Cloud Service (1)
TaoPhoenix (980487) | more than 2 years ago | (#38777805) Enough Fi$h to care about".
Re:Cloud Services vs. Desktop Apps (2)
NeutronCowboy (896098) | more than 2 years ago | (#38775716)
Oh hi, DCTech. Still doing your sockpuppetry, I see.
Re:Cloud Services vs. Desktop Apps (0)
Anonymous Coward | more than 2 years ago | (#38775747)
Well, I'd buy what he saying if he'd replace "Microsoft" with, say, "LibreOffice".
MS would be happy to sell you subscription to their Office Live - or how's their online docs thing called - instead of one time license for desktop Office, but they're kinda stuck with their suckish IE unable to crunch through it without inducing coma in users. Only IE9 is kinda able to do it.
Re:Cloud Services vs. Desktop Apps (1)
NeutronCowboy (896098) | more than 2 years ago | (#38776479):Cloud Services vs. Desktop Apps (0)
Anonymous Coward | more than 2 years ago | (#38776503)
bonch (maybe?)
DCTech (2545590)
ge7 (2194648)
zget (2395308)
cgeys
*x**y*y**x* (not sure of correct spelling here)
InsightIn140Bytes
SharkLaser
HankMoody (2554362)
TechGuys (2554082)
GreatTech (2557540)
FreeCoder (2558096)
(there are *at least* 4-5 more than that. it's been happening for over a year) [ft.com]
Facebook has admitted that it secretly hired a public-relations group in the US with the aim of generating stories critical of Google’s approach to privacy.
...
Burson-Marsteller, a WPP-owned PR agency whose clients also include Microsoft, contacted US newspaper reporters and opinion-piece writers with a view to securing coverage on Google’s alleged use of personal information from Facebook and other social networks. [slashdot.org]
It's the [...
Also:
* Will often praise Apple or RIM or another non-google company in a fake concession to hide the motive of the post
* Often will criticize Linux
* Will say things like "well, at least this is one thing MS/FB gets right, compare that to how "evil" Google is
* Often complains about cloud services compared to MS's tried and true PC model
-------
It's sad we have to chase you around with a running list of handles just to have a non-astroturfed discussion here. And if anyone wants to report this
post as abuse please feel free. The Slashdot admins need to wake up and get involved or this place will end up like Digg.
Re:Cloud Services vs. Desktop Apps (0)
Anonymous Coward | more than 2 years ago | (#38777399)
Re:Cloud Services vs. Desktop Apps (0)
Anonymous Coward | more than 2 years ago | (#38779877)
GP is a shill. Look a his comment history, his UID and how similar he is to InsightIn140Bytes, DavidSell etc...
Why would any company work with them now? (5, Insightful)
Anonymous Coward | more than 2 years ago | (#38775046):Why would any company work with them now? (2)
icebike (68054) | more than 2 years ago | (#38777635) deep.
Buried deeper is the fact that you can walk away from GMC tomorrow morning at 8am and have a competitive solution in place by noon, or operate with your own backup. You find it much tougher walking away from Google Apps or using Gmail for your entire in-house mail. You usually have no backup for that.
Even odder was the announcement about Needlebase:
Needlebase: We are retiring this data management platform, which we acquired from ITA Software, on June 1, 2012. The technology is being evaluated for integration into Google's other data-related initiatives.
Whoa, shutting down a data management platform they haven't even acquired yet? No, wait, twisted sentence structure!
Re:Why would any company work with them now? (3, Informative)
Vexo (825223) | more than 2 years ago | (#38777651)
That's progress (5, Insightful)
LostCluster (625375) | more than 2 years ago | (#38775060):That's progress (0)
Anonymous Coward | more than 2 years ago | (#38775254)
Might as well get rid of Google Groups too. Pick out the historically relevant and useful content posts by those who contributed them, then dump the 20+ years are arguments, flamewars, and endless debates into the bit bucket. Put the historically relevant posts in a Google history archive of some sort. No need for the rest of the useless debates even in the big 8 heirarchy, especially with many of them having profane and racist content, regardless of whether the poster was anonymous or posted using their real name.
Re:That's progress (1)
taxman_10m (41083) | more than 2 years ago | (#38775306)
Google killed DejaNews deader than Julius Caesar.
Re:That's progress (1)
modmans2ndcoming (929661) | more than 2 years ago | (#38776355)
try again... it bought Deja and incorporated it into GoogleGroups.
Re:That's progress (1)
El Lobo (994537) | more than 2 years ago | (#38776501)
Yes. And boy THAT was a real mess.
Re:That's progress (1)
icebike (68054) | more than 2 years ago | (#38777669) (0)
Anonymous Coward | more than 2 years ago | (#38775266)
What your saying is that it's a forced migration.
"What your using may work great for you, we don't care, you must migrate"
Thanks for the fringe case examples though, however bussiness and users do not like change just because "It's progress"
Re:That's progress (5, Interesting)
eulernet (1132389) | more than 2 years ago | (#38775360) (5, Interesting)
madmark1 (1946846) | more than 2 years ago | (#38775494):That's progress (1)
cynyr (703126) | more than 2 years ago | (#38776215):That's progress (0)
Anonymous Coward | more than 2 years ago | (#38776409)
LISP, Pascal and BASIC were single implementations locked to single vendor? That's new.
Re:That's progress (0)
Anonymous Coward | more than 2 years ago | (#38776639)
LISPs are working alright in enterprises as Common LISP, in education as Scheme and Clojure has a bit of interest going for it.
Pascal was quite alive with Delphi, and still lives in all the in-house Delphi apps.
BASIC morphed through structured variants into VB.Net and VBA - I'm sure you heard about those.
Your examples just serve to outline the point: LISP, Pascal and BASIC are not dependent on well-being of one true vendor, but with stuff like Silverlight, ASP.NET and Flash you put thousands of hours you spent learning your skills at the mercy of a company who'll just tell you to forget it and go learn The Real Thing they just released (it'll last, we promise!)
Re:That's progress (0)
Anonymous Coward | more than 2 years ago | (#38779067)
Actually, BASIC didn't ever fully go away, it just evolved. Because it's no longer just Beginner's All-purpose Symbolic Instruction Code, it's now written these days as Basic. Line numbers are no longer needed, but the BASIC syntax used in Basic is quite similar.
Two examples of Basic still in existence include Visual Basic (and the VBA subset, and the VBScript subset) and DarkBasic that includes game creation type libraries.
Re:That's progress (2)
eulernet (1132389) | more than 2 years ago | (#38776435) effort on reducing expenses are the richest ones, and the resulting profits will not be redistributed, except for the shareholders.
Typically, management asks their employees to do more with less (improve productivity), and at the end, they fire people to improve their margins further.
And no, Google invest where they know that they'll have money in return (search, gmail, etc..), and mostly because of competition.
THIS IS NOT INNOVATION !!!
Innovation is about taking risks, investing everywhere. See Microsoft and IBM, they do a lot of Research, because they know that you cannot predict what will be a success in the future.
Remember the 20% at Google (20% of your time is spent on new projects), it's not officially dead, but I'll tell you: IT'S DEAD !
If you just concentrate on improving a product, this is not innovation, this is just improving your quality, process and productivity.
When you have an innovative company (using a disruptive innovation), like Google was, and you start to copy your followers, this means that you are not able to innovate anymore, you have no new ideas and no vision for the future.
The only thing you can do is to buy smaller companies to add value to yours.
Google is the new Microsoft, let's see what company will take Google's place.
Re:That's progress (1)
madmark1 (1946846) | more than 2 years ago | (#38778833):That's progress (1)
eulernet (1132389) | more than 2 years ago | (#38779715)]
To make room for 2010's freshmen, a half-dozen American giants on 2009's list got dumped: AT&T, ExxonMobil, 3M, Johnson & Johnson, Southwest Airlines, and Target
Re:That's progress (1)
hairyfeet (841228) | more than 2 years ago | (#38779231) developers that have been the big money clients yet they are gonna take a big old shit right on top of them to push HTML V5 because Ballmer wants to be the head of Apple so damned bad it hurts, simply because Apple has the buzz with iOS. Well i got news for them if we wanted fricking Apple we'd buy fricking Apple!
But I agree that Google is falling into the same trap as MSFT and IBM, too much focus on short term, not enough on long. The problem Google is gonna have is like Yahoo frankly there is nothing keeping their customers from walking away, whereas big blue will always have mainframes and MSFT will have workstations. It will be curious to see if some startup can just pop up and do to Google what Google did to Yahoo and Altavista, or whether the culture you pointed out of just inhaling startups will ensure we keep these same megacorps for the next decade.
Personally I predict after a couple more megaflops MSFT will accept their fate as the IBM of desktops, what will happen with Google will be anybody's guess, to me the big question mark will be Apple..
Re:That's progress (1)
eulernet (1132389) | more than 2 years ago | (#387797 believe that Apple will do as Google and Microsoft: they'll improve their existing products, and perhaps buy companies which offer new products.
But I doubt they'll be able to propose new ideas.
Perhaps they have enough money to start copying competition, but at a higher cost (as did Microsoft with its XBox).
I'm now waiting to see what FoxConn and Lenovo will propose in the near future.
Re:That's progress (1)
wbr1 (2538558) | more than 2 years ago | (#38775656)
When a Google automated car comes and delivers your pizza with customized adSense ads, you will see what I am talking about. In fact the pizza box will have big green download arrow for some crappy software that looks like the tab to open the box.
Re:That's progress (0)
Anonymous Coward | more than 2 years ago | (#38777695)
No, you are wrong, here is an example: [google.com]
Troll much? From the link you supplied [google.com] hoping no one would think for themselves.
Re:That's progress (2)
bhassel (1098261) | more than 2 years ago | (#38778483) [google.com]
And it's not limited to just Google's own code. From this blog post: [blogspot.com]
Re:That's progress (1)
eulernet (1132389) | more than 2 years ago | (#38779721)
Thanks ! I didn't notice it.
Re:That's progress (1)
guanxi (216397) | more than 2 years ago | (#38776599) (4, Funny)
youn (1516637) | more than 2 years ago | (#38775062)
at this rate... this may be quicker than I thought possible
Re:How long until they kill google search :p (1)
steeleye_brad (638310) | more than 2 years ago | (#38775118)
With how crappy Google's search results have been getting, some may argue it already is dead.
Re:How long until they kill google search :p (5, Insightful)
Njovich (553857) | more than 2 years ago | (#38775258):How long until they kill google search :p (2)
icebraining (1313345) | more than 2 years ago | (#38775504)
Yahoo is just a layer above Bing, might as well use it directly.
Re:How long until they kill google search :p (2)
SmilingBoy (686281) | more than 2 years ago | (#38775811)
Re:How long until they kill google search :p (1)
swillden (191260) | more than 2 years ago | (#38779093) about their queries. For geeks, I think it would be good if Google provided a way to set verbatim mode as the default.
Re:How long until they kill google search :p (0)
Anonymous Coward | more than 2 years ago | (#38779273)
Re:How long until they kill google search :p (1)
LostCluster (625375) | more than 2 years ago | (#38775124)
Search is profitable because AdWords works well with it. The closed services were things that had Google compete with itself.
Re:How long until they kill google search :p (1)
CAIMLAS (41445) | more than 2 years ago | (#38777677) (3, Interesting)
Animats (122034) | more than 2 years ago | (#38775176)
UrchinTracker let advertisers track what users were doing, but didn't let Google track them. So it had to go. Big Brother doesn't like competition.
CONSUME! (2)
Osgeld (1900440) | more than 2 years ago | (#38775178)
watch out! here comes the google monster! It will gobble up your website and shit it out once its bored!
I actually kind of liked picnik, but whatever let the internet strip-mining continue
... thanks google
Re:CONSUME! (0)
Anonymous Coward | more than 2 years ago | (#38775929)
watch out! here comes the google monster! It will gobble up your website and shit it out once its bored!
Hey Google... want to buy universalmusic.com or sonymusic.com? How about wmg.com, too? Gobble gobble gobble... c'mon, shit it out already!
Sky Map (2)
Naurgrim (516378) | more than 2 years ago | (#38775288)
How about updating Picasa? (1)
modmans2ndcoming (929661) | more than 2 years ago | (#38775616)
Picasa is a little long in the tooth and needs some new features and a UI change to make it more user friendly.
Re:How about updating Picasa? (1)
Daengbo (523424) | more than 2 years ago | (#38779555).
Picasa, yes, but... (1)
CurryCamel (2265886) | more than 2 years ago | (#38775646)
Is it just me that is getting middle-aged?
I fear for Google SketchUp (3, Interesting)
afabbro (33948) | more than 2 years ago | (#38775851):I fear for Google SketchUp (0)
Anonymous Coward | more than 2 years ago | (#38775985)
If you are on Android give Skye a try. Sky Map is good, Skye is much better.
Re:I fear for Google SketchUp (2)
Tacvek (948259) | more than 2 years ago | (#38776117):I fear for Google SketchUp (0)
Anonymous Coward | more than 2 years ago | (#38776127)
SketchUp is still there as it lets them crowdsource 3D map element design, which in turn gives google maps an advantage over it rivals in "oooh shiny" terms.
Once the streetview cars can also do detection for a full 3D map interpretation, then you need to worry about SketchUp,
Re:I fear for Google SketchUp (1)
nogginthenog (582552) | more than 2 years ago | (#38776143)
Re:I fear for Google SketchUp (1)
Nerdfest (867930) | more than 2 years ago | (#38777417)
Death of Google (-1)
Snaller (147050) | more than 2 years ago | (#38776707)
I guess this is the end of the 20% philosophy.
The fall has begun.
Re:Death of Google (0)
Anonymous Coward | more than 2 years ago | (#38777147)
Unfortunately you're much closer than the truth than probably even you realize.
Good guy Google (0)
Anonymous Coward | more than 2 years ago | (#38777127)
Start the meme! Open sourcing skymap is awesome! Thanks google!
Re:Good guy Google (1)
genner (694963) | more than 2 years ago | (#38777181)
Start the meme! Open sourcing skymap is awesome! Thanks google!
Your meme is bad and you should feel bad.
The license (0)
Anonymous Coward | more than 2 years ago | (#38777169)
is Apache 2.0. (i.e. non-copyleft free software)
Re:The license (0)
Anonymous Coward | more than 2 years ago | (#38778803)
Good.
the meaning... (0)
Anonymous Coward | more than 2 years ago | (#38778029)
Google is loosing its life. They will be dead in short.
There are plenty of alternatives. (0)
Anonymous Coward | more than 2 years ago | (#38778335)
For picnik, there is and many others. It's HTML5 editor that is SVG based, and has a bunch of image filters, clipart etc...
Cameron
Disclosure: I work on imagebot, and very proud to be! | http://beta.slashdot.org/story/163532 | CC-MAIN-2014-35 | refinedweb | 4,485 | 77.57 |
Go to: Synopsis. Return value. Related. Flags. Python examples.
setParticleAttr(
selectionList
, [attribute=string], [floatValue=float], [object=string], [randomFloat=float], [randomVector=[float, float, float]], [relative=boolean], [vectorValue=[float, float, float]])
Note: Strings representing object names and arguments must be separated by commas. This is not depicted in the synopsis.
setParticleAttr is undoable, NOT queryable, and NOT editable.This action will set the value of the chosen attribute for every particle or selected component in the selected or passed particle object. Components should not be passed to the command line. For setting the values of components, the components must be selected and only the particle object's names should be passed to this action. If the attribute is a vector attribute and the -vv flag is passed, then the three floats passed will be used to set the values. If the attribute is a vector and the -fv flag is pass and the -vv flag is not passed, then the float will be repeated for each of the X, Y, and Z values of the attribute. Similarly, if the attribute is a float attribute and a vector value is passed, then the length of the vector passed will be used for the value. Note: The attribute passed must be a Per-Particle attribute.
None
import maya.cmds as cmds cmds.setParticleAttr( 'particle1', at='velocity', vv=(1, 2, 3) ) # This will set the velocity of all of the particles in particle1 # to << 1, 2, 3 >>. cmds.select( 'particleShape1.pt[0:7]', 'particleShape1.pt[11]' ) cmds.setParticleAttr( vv=(1, 2, 3), at='velocity' ) cmds.setParticleAttr( 'particleShape1', at='velocity' ) # This will set the velocity of particles 0-7 and 11 of # particleShape1 to << 1, 2, 3 >>. The rest of the particles are # not changed. | http://download.autodesk.com/global/docs/maya2014/en_us/CommandsPython/setParticleAttr.html | CC-MAIN-2019-13 | refinedweb | 289 | 55.74 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On 06/10/2016 11:55 AM, Florian Weimer wrote: > On 06/10/2016 05:44 PM, Carlos O'Donell wrote: >> On 06/10/2016 04:54 AM, Florian Weimer wrote: >>> When get*ent is called without a preceding set*ent, we need >>> to set the initial iteration position in get*ent. >>> >>> Reproducer: Add âservices: db filesâ to /etc/nsswitch.conf, then run >>> âperl -e getserventâ. It will segfault before this change, and exit >>> silently after it. >>> >>> 2016-06-10 Florian Weimer <fweimer@redhat.com> >>> >>> [BZ #20237] >>> * nss/nss_db/db-XXX.c (set*ent): Reset entidx to NULL. >>> (get*ent): Set entidx to NULL during initialization. If entidx is >>> NULL, start iteration from the beginning. >> >> The fix looks good, but surely this needs a regression test? > > The problems are quite similar to nss_files testing: > > > > Build-time testing is only possible with chroot and (user) namespaces. I have now reviewed nss_files. I think you should be moving forward with checking that in, the code is elegant and allows for easy to write compositional tests that run in a chroot or namespace. -- Cheers, Carlos. | https://sourceware.org/legacy-ml/libc-alpha/2016-06/msg00377.html | CC-MAIN-2021-39 | refinedweb | 199 | 68.47 |
Today, we'll be creating a simple shoutbox using the CodeIgniter PHP framework. We'll then port this exact application, piece-by-piece, to Ruby on Rails!
Throughout this tutorial, we'll create a simple shoutbox application in CodeIgniter, and then port it over to Ruby on Rails. We'll be comparing the code between the two applications to see where they are similar, and where they differ. Learning Rails is much easier if you are already familiar with an MVC (Model, View, Controller) framework structure.
Do I Need Previous CodeIgniter Experience?
Yes, or experience in PHP and another MVC framework (CakePHP, Zend etc.). Check out some of the other CI tutorials here on Nettuts, including the from scratch video series!
Do I Need Previous Ruby/Rails Experience?
While it would help, no. I've done my best to make direct comparisons between Ruby and PHP, and between Rails and CodeIgniter, so by applying your existing knowledge it won't be too hard.
Hello, CodeIgniter
Begin by downloading the CodeIgniter framework to your local/web server, and set up your database (I named it
ci_shoutbox) with the following SQL commands:
CREATE TABLE `shouts` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL, `email` varchar(255) NOT NULL, `message` text NOT NULL, PRIMARY KEY (`id`) ); INSERT INTO `shouts` (`id`,`name`,`email`,`message`) VALUES (1,'Dan Harper','[email protected]','Hello, World!'), (2,'Bob','[email protected]','This looks great!'), (3,'Dan Harper','[email protected]','Hey, thanks for the comment!');
As usual in CodeIgniter, enter your details into
/system/application/config/database.php:
$db['default']['hostname'] = "localhost"; $db['default']['username'] = "root"; $db['default']['password'] = ""; $db['default']['database'] = "ci_shoutbox";
Then
config.php:
$config['base_url'] = ""; ... $config['global_xss_filtering'] = TRUE; ... $config['rewrite_short_tags'] = TRUE;
And
autoload.php:
$autoload['libraries'] = array('database', 'form_validation', 'session'); $autoload['helper'] = array('url', 'form'); ... $autoload['model'] = array('shout');
Finally set the default controller inside
routes.php:
$route['default_controller'] = "shouts";
Controller & Model
Since this is such a simple application, we only need one controller and one model. Inside
/controllers/ create
shouts.php:
<?php class Shouts extends Controller { function Shouts() { parent::Controller(); } }
And the model as
/models/shout.php:
<?php class Shout extends Model { function Shout() { parent::Model(); } }
Retrieving the Shouts
Our front page, the index, outputs the 10 latest shouts from the database. Start by adding the following to the Shouts controller:
function index() { $data['shouts'] = $this->shout->all_shouts(); $this->load->view('shouts/index.php', $data); }
On line 2, we call the
all_shouts() function from the Shout model.
Following this, we load the relevant view file, and pass the
$data variable along to it (CodeIgniter will automatically take apart the
$data array, so that we can access the shouts simply as
$shouts instead of
$data['shouts']).
Calling the Database
Let's create the
all_shouts() function in our model now:
function all_shouts() { $data = array(); $this->db->order_by('id', 'DESC'); $q = $this->db->get('shouts', 10); if ($q->num_rows() > 0) { foreach ($q->result() as $row) { $data[] = $row; } } return $data; $q->free_result(); }
This is relatively straight-forward. We retrieve the 10 most recent records from the 'shouts' table, and output them in descending order. This will build the following SQL command behind-the-scenes:
SELECT * FROM `shouts` ORDER BY `id` DESC LIMIT 10;
If any records were found, they are added to the
$data array and returned to the controller.
The View
Inside the
/views/ directory, create the following files and folders:
footer.php
header.php
/shouts/
index.php
Add the following to
/shouts/index.php:
<?php $this->load->view('/header.php'); if ($shouts) { echo '<ul>'; foreach ($shouts as $shout) { $gravatar = '' . md5(strtolower($shout->email)) . '&size=70'; ?> <li> <div class="meta"> <img src="<?= $gravatar ?>" alt="Gravatar" /> <p><?= $shout->name ?></p> </div> <div class="shout"> <p><?= $shout->message ?></p> </div> </li> <?php } } else { echo '<p class="error">No shouts found!</p>'; } $this->load->view('/footer.php');
On line 2 we include
header.php.
If any shouts were returned from the database, we loop through them using
foreach and create a basic list with the author's Gravatar, their name and the message. If no shouts were found, we display an error message.
Finally, we include the
footer.php.
Note that on line 7, we build the Gravatar URL. The URL contains an MD5 hash of the user's email address. We use PHP's
strtolower() function to ensure the email is all lower-case in case the Gravatar service is case-sensitive.
Continuing on, we need a form at the bottom of the page to add a new shout. Before we include the footer view file, add the following:(); $this->load->view('/footer.php');
On line 1 we use the
form_open() function from CodeIgniter's Form helper. This generates a
<form> opening tag with the relevant parameters, and will point to shouts/create (the Create function inside the Shouts controller).
The rest is a normal HTML form, except we echo the
set_value() function for each 'value'. We will be adding validation to the form shortly and this ensures that if there are any errors, the submitted data is automatically re-entered into the form.
Finally, we'll need to create the header & footer. Add the following to
/views/header.php:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <meta http- <title>Shoutbox in CodeIgniter</title> <link rel="stylesheet" href="<?php echo base_url(); ?>css> <div id="content">
See on line 8, when including the stylesheet, we use the
base_url() function from CodeIgniter's URL helper to generate the URL to the root of the site – this will put the stylesheet at.
Add the following to
/views/footer.php:
</div><!--/content--> <div id="boxbot"></div> </div><!--/container--> </body> </html>
Submission & Validation
Since the form directs to a Create function in the Shouts controller, let's create that now:
function create() { $data['shouts'] = $this->shout->all_shouts(); $this->form_validation->set_rules('name', 'Name', 'required|max_length[255]|htmlspecialchars'); $this->form_validation->set_rules('email', 'Email', 'valid_email|required|max_length[255]|htmlspecialchars'); $this->form_validation->set_rules('message', 'Shout', 'required|htmlspecialchars'); if ($this->form_validation->run() == FALSE) { $this->load->view('shouts/index.php', $data); } else { $this->shout->create(); $this->session->set_flashdata('success', 'Thanks for shouting!'); redirect('shouts/index'); } }
On line 2 we retrieve the shouts (as we did on the index function), as they still need to be shown.
Lines 4-6 are our validation rules. We first pass the input's 'name' from the submitted form to check, followed by a 'human-readable' version. The third parameter holds a pipe-separated ('|') list of validation rules.
For example, we set all the fields as 'required' as well as run them through 'htmlspecialchars' before submission. 'Name' and 'Email' have a max length set, and the submitted email field must resemble an email address.
If the validation fails, the index view is loaded again; otherwise we run the
create() function from the Shout model. On line 13, we set a success message in the flash data which will be displayed on the next executed page.
Before continuing, we must display the error and success messages (see images above) in our view. Add the following directly after we include the header in
/views/shouts/index.php:
if ($this->session->flashdata('success')) { echo '<p class="success">' . $this->session->flashdata('success') . '</p>'; } echo validation_errors('<p class="error">', '</p>');
Insert Into Database
Inside the Shout model, enter the following:
function create() { $data = array( 'name' => $this->form_validation->set_value('name'), 'email' => $this->form_validation->set_value('email'), 'message' => $this->form_validation->set_value('message') ); $this->db->insert('shouts', $data); }
Very simple, we compile an array containing the submitted data, and insert it into the `shouts` table.
Styling
In the very root directory for your application – in the same directory as CodeIgniter's
/system/ folder, create two new folders:
/css/ and
/images/.
In the CSS folder, add the following to a file named
style.css:
* {; height: 66px; margin: 0 auto; text-indent: -9999em; color: #33ccff; } h2 { font-size: 2em; letter-spacing: -1px; background: url("../images/shout.png") no-repeat; width: 119px; height: 44, .errorExplanation li { background-color: #603131; border: 1px solid #5c2d2d; padding: 10px !important; margin-bottom: 15px; } p.success { background-color: #313d60; border: 1px solid #2d395c;: 600px; text-align: left; background: url("../images/bg.png") repeat-y; padding: 15px 35px; overflow: hidden; } ; } form input, form textarea { background-color: #313d60; border: 1px solid #2d395c; color: #ffffff; padding: 5px; font-family: Lucida Sans Unicode, Helvetica, Arial, Verdana, sans-serif; margin-bottom: 10px; }
And save the following images into the
/images/ folder: (the images will be the correct size when saved!)
To confirm, your directory structure should look like the image below:
And... done! If you think we wrote very little actual back-end code in CI, wait until we get to Rails where all database interaction is automated!
Hello, Rails
For this tutorial, I assume you already have Ruby, Rails and MySQL (or your preferred database engine) installed correctly on your system. If not, see the Rails installation guide and MySQL download page. If you happen to be running Mac OSX Leopard, Ruby and Rails both come pre-installed.
It's also useful to know how to navigate in your operating system using the command line. For example to change directory, use
cd foldername, or
cd C:\Users\Dan etc.
Rails provides a number of command-line tools to speed up development of your application. Inside the Terminal (or Command Prompt in Windows), navigate to the folder you wish to store your Rails projects in. Personally I store them in a
/rails/ directory in my User area.
The Set Up
Run the following to dump a copy of Rails for your application:
rails -d mysql shoutbox
Here, Rails will install into a
/shoutbox/ directory. As MySQL is my preferred database engine, I specify
-d mysql. If you prefer to use MySQLlite (the default in Rails), you would simply run
rails shoutbox.
Navigate into the shoutbox directory:
cd shoutbox
Now, we'll generate the Controller and Model:
ruby script/generate controller Shouts ruby script/generate model Shout
Rails has now generated our 'Shouts' controller and 'Shout' model. It's important to know that Rails encourages a strict naming style.
Your controller should be named the plural of whatever it will deal with (eg. Shouts, Users, Listings). A controller typically will only deal with one model, which is named in the singular (eg. Shout, User, Listing). Following this standard will ensure Rails can use all it's functionality.
Inside your shoutbox directory, open the
/config/database.yml file, this stores our database details. Enter your details under the 'development' section:
development: adapter: mysql encoding: utf8 reconnect: false database: shoutbox_development pool: 5 username: root password: socket: /tmp/mysql.sock
You can see Rails has automatically set the configuration file to be using the MySQL adapter. You will likely only need to alter the username and password.
Don't worry, you should not have created the
shoutbox_development database yet. We'll create that next.
Next, open the
/db/migrate/***_create_shouts.rb file (most versions of Rails will prefix the file name with the date and time):
class CreateShouts < ActiveRecord::Migration def self.up create_table :shouts do |t| t.timestamps end end def self.down drop_table :shouts end end
This is a database migration file. Rails use these to help developers keep their databases structured correctly since they can become messy when several developers are altering database tables at the same time.
You will also be able to roll your database back to a previous version if you need to.
In here, we will add our database fields:
class CreateShouts < ActiveRecord::Migration def self.up create_table :shouts do |t| t.string :name t.string :email t.text :message t.timestamps end end def self.down drop_table :shouts end end
This may look scary, but it's very simple!
t.string :name creates a 'name' field in the database, which will be stored as a string (ie. 'varchar'). The same goes for the 'email' field.
A 'message' field is added and stored as text (as it is in SQL).
You will notice Rails automatically included
t.timestamps. This includes 'created_at' and 'updated_at' fields which Rails will automatically fill in when a field is created or updated.
Also, Rails will automatically create an 'id' field, so we don't need to define that ourselves.
This is equivalent to the following SQL command:
CREATE TABLE `shouts` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(255) DEFAULT NULL, `email` varchar(255) DEFAULT NULL, `message` text, `ipaddress` varchar(15) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) );
While it can take a little getting-used-to when creating these migrations, they're much easier to type than SQL!
Finally, run the following two commands in the Terminal to create and set up the database:
rake db:create rake db:migrate
The first command creates the database, named 'shoutbox_development'. The second runs any out-standing database migrations (in this case, our
***_create_shouts.rb file).
Start the server with and then visit (or whatever port the server says it is running on – in most cases it's 3000):
ruby script/server
To remove the default welcome page, first remove the file at
/public/index.html, then inside
/config/routes.rb, insert the following anywhere between the first and last lines:
map.root :controller => "shouts"
You will need to restart the server whenever you make alterations to the controller or to the routes – press Ctrl+C, then re-run
ruby script/server.
That concludes the configuration process for Rails. While this whole process may seem complicated right now, you can do it in just a few minutes, and once you get used to it, it's much quicker than setting up a CodeIgniter project.
Dummy Data
Before display the shouts, we'll need some data in the database to display. You could create these entries through your preferred database management program, however we're going to use a great Rails feature instead – the interactive console.
From your application folder in the Terminal, run
ruby script/console. This is an interactive Ruby interpreter, which is also hooked up to our Rails application, so we can interact with it!
One very important thing to remember with regard to Rails, is that your model is automatically linked with the corresponding database table. For example, your 'Shout' model is linked with the 'shouts' table in your database – this is a key reason why following Rails' naming conventions can be very useful!
Run the following ruby code inside the interactive console:
shout = Shout.new( :name => 'Dan Harper', :email => '[email protected]', :message => 'Hello, World!')
Here, we invoke Rails'
new method on the 'Shout' model – this will create a new record in the shouts database table. We also pass a hash containing the data we want to enter.
This is stored in the 'shout' variable.
Note: A 'hash' is what PHP calls an associate array – an array where you can also set your own key.
A rough PHP equivalent of this code would be:
$shout = $this->shout->new( 'name' => 'Dan Harper', 'email' => '[email protected]', 'message' => 'Hello, World!');
Finally, save this into the database with:
shout.save
If you entered the code correctly, ruby should return 'true'.
You can now retrieve this from the database with:
shout = Shout.find :last shout.name
The first command will find the most recent row in the shouts table and store it in the 'shout' variable. We can find look at a specific item with
shout.name which should return whatever you entered as the 'name' for the record.
Repeat the
Shout.new and
Shout.save process a few more times to add more records to the database.
Displaying Shouts
Inside the
/app/controllers/shouts_controller.rb file, enter the following between the existing class statement:
def index @shouts = Shout.all_shouts end
We call the
all_shouts method from the 'Shout' model, and store the result inside the
shouts instance variable (as seen by the
@).
In CodeIgniter, the equivalent code was:
class Shouts extends Controller { function Shouts() { parent::Controller(); } function index() { $data['shouts'] = $this->shout->all_shouts(); $this->load->view('shouts/index.php'); } }
Some noticeable differences are:
- Rails uses the '<' symbol in place of 'extends' when creating a class;
- To create a function/method we use
definstead of
function;
- Rails doesn't require a constructor method;
- Rails automatically loads the view file for the current method. In this case, the view file is located at
/app/views/shouts/index.html.erb;
- It is already noticeable that code is much cleaner in Rails. You don't need parenthesis (brackets) when defining a funcion/method if it doesn't contain any parameters, we don't need to add a semi-colon at the end of each statement and there are no curly brackets in sight!
The Model
Inside
/app/models/shout.rb enter the following inside the class to define the
all_shouts function:
def self.all_shouts Shout.find(:all, :limit => 10, :order => 'id DESC') end
Here, we run Rails'
find on the Shout model (ie. the shouts table). We pass a hash telling the method to limit the number of results to 10 in descending order.
This is the equivalent CodeIgniter code:
function all_shouts() { $data = array(); $this->db->order_by('id', 'DESC'); $q = $this->db->get('shouts', 10); if ($q->num_rows() > 0) { foreach ($q->result() as $row) { $data[] = $row; } } return $data; $q->free_result(); }
Yeah… so much simpler!
Also note that in Ruby, if you don't specifically
return something, the last statement is automatically returned. For example, we could have used the following instead (although it would make no difference):
def self.all_shouts return Shout.find(:all, :limit => 10, :order => 'id DESC') end
The View
In CodeIgniter, we had to manually include header & footer files in each view. Rails takes a slightly different approach. Inside
/views/layouts/ create a file named
application.html.erb with the following inside:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <meta http- <title>Shoutbox in Ruby on Rails</title> <%= stylesheet_link_tag 'style', :media => 'screen' %> </head> <body> <div id="container"> <h1>Shoutbox</h1> <h5> <a href="" title="Web Developer">Dan Harper </a> : <a href="" title="Nettuts - Spoonfed Coding Skills">Nettuts</a> </h5> <div id="boxtop"></div> <div id="content"> <%= yield %> </div><!--/content--> <div id="boxbot"></div> </div><!--/container--> </body> </html>
This is predominantly a normal HTML layout, combining both our header and footer. On line 7 we include a link to a stylesheet. Rails provides a
/public/ folder in which you include images, stylesheet, javascript files and any static HTML files. In this case, the code will output a link to a stylesheet at
/public/stylesheets/style.css.
Further down, on line 21 is
<%= yield %>. This will insert the main view file for the specific page in this place.
Why .html.erb? All view files in Rails first have the extension for the specific format you want to output as, followed by .erb so the server will interpret the Ruby code inside it.
This means you could also have a .xml.erb file if you want to export in XML for certain pages, etc.
<%= %> Ruby uses
<% and
%> to wrap any Ruby code to be interpreted, in the same way PHP uses
<?php and
?>.
On HTML pages, we use
<%= ... %> (note the equals) to 'print' code. In PHP we use either
<?php echo ... ?> or
<?= ... ?> to 'echo' code.
Inside
/public/stylesheets/ create a file named
style.css containing the same CSS we used in the CodeIgniter section.
Also, paste the images from the CodeIgniter section into
/public/images/.
If you restart the server now, and reload your browser, you should see a 'Template is missing' error page. This is because we haven't yet created the actual view for the current page.
The 'Template is missing' error page tells us what file it was expecting:
"Missing template
shouts/index.html.erb in view path
app/views"
So let's create that file now, and enter the following to test it out:
Hello, World!
Reload the page, and you should be greeted with the following (if not, check you have the stylesheet & images in the correct locations):
Looping
You will remember that in CodeIgniter, we used the code below to loop through and display each retrieved shout:
<?php echo '<ul>'; foreach ($shouts as $shout) { $gravatar = '' . md5(strtolower($shout->email)) . '&size=70'; ?> <li> <div class="meta"> <img src="<?= $gravatar ?>" alt="Gravatar" /> <p><?= $shout->name ?></p> </div> <div class="shout"> <p><?= $shout->message ?></p> </div> </li> <?php } echo '</ul>';
The equivalent in Ruby is the following (replace the 'Hello, World!' message with this):
<ul> <% for shout in @shouts gravatar = '' + Digest::MD5.hexdigest(h(shout.email.downcase)) + '&size=70' %> <li> <div class="meta"> <img src="<%= gravatar %>" alt="Gravatar" /> <p><%= shout.name %></p> </div> <div class="shout"> <p><%= shout.message %></p> </div> </li> <% end %> </ul>
On line 2 is a normal 'foreach'-style loop in Ruby. Instead of the
foreach ($shouts as $shout) in PHP, we use
for shout in @shouts in Ruby.
The loop closes at
end on line 16.
On line 4 we build the Gravatar URL in almost the same way as in PHP, with a few exceptions:
- In PHP, we use the dot character (.) to concatenate statements, Ruby uses the plus character (+), just like in JavaScript;
- Ruby's version of PHP's
md5()function is
Digest::MD5.hexdigest();
- The
h()function in Ruby is equivalent to PHP's
htmlspecialchars();
- PHP uses
->to access an object (eg.
$shout->email). Ruby uses a dot, (eg.
shout.email);
- To ensure the email address is in lower-case, we use
strtolower($shout->email)in PHP. In Ruby, everything is an object and so we simply add
.downcaseto the end of a string or variable –
shout.email.downcase. You could also
.upcase,
.reverse, etc.
Refresh the page in your browser, and you should now see the shoutbox messages!
Submission Form
Continuing on, we need to include the form at the bottom of the page for submitting new shouts. First, take a look at the following line:
<% form_for :shout, :url => { :action => 'create' } do |f| %> ... code here ... <% end %>
This is a block from Rails' form helper to easily create forms.
form_for :shout links the form directly with the Shout model.
:url = { :action = 'create ' } is a hash detailing where the form will direct to when submitted. In this case, the form will go to /shouts/create/ (Shouts controller, Create method). We could have typed:
:url = { :controller = 'shouts', :action = 'create }, however the lack of a 'controller' key in the hash tells Rails to use the current controller.
Finally,
do |f| stores everything before in an
f variable. For example, we could create a textbox linked to this using
f.text_field
Now we've explained that, enter this full form code at the bottom of your index view file:
<% form_for :shout, :url => { :action => 'create' } do |f| %> <h2>Shout!</h2> <div class="fname"> <label for="name"><p>Name:</p></label> <%= f.text_field :name, :size => 20 %> </div> <div class="femail"> <label for="email"><p>Email:</p></label> <%= f.text_field :email, :size => 20 %> </div> <%= f.text_area :message, :rows => 5, :cols => 40 %> <p><%= submit_tag 'Submit' %></p> <% end %>
And compare it with the CodeIgniter PHP version:();
We make more use of Rails' form helper when creating the input fields, textarea and submit button. With
f.text_field :name, :size = 20 we are creating a textbox named 'name' (matching the column in our database this should insert into). This would output in HTML as:
<input type="text" name="shout[name]" id="shout_name" size="20" />
In the CodeIgniter version of the app, we set each input's 'value' field to
<?= set_value('email') ?> so CodeIgniter will automatically re-populate the fields if they fail the validation tests.
In Rails, this is handled automatically since we are using Rails' form helper to create the inputs.
Refresh the page in your browser, and you should see the form at the bottom:
Submission & Validation
The next step is to insert any submitted data into the database. The form directs to shouts/create, so let's create the create action now (inside the Controller):
def create @shouts = Shout.all_shouts @shout = Shout.new(params[:shout]) if @shout.save flash[:notice] = 'Thanks for shouting!' redirect_to :action => 'index' else render :action => 'index' end end
On line 2, we retrieve all our current shouts from the database since they still need to be displayed.
At line 4 we load Rails'
new function on our Shout model. You will remember we used
Shout.new when inserting dummy data into the database via the interactive console.
We pass
new our submitted data with
params[] – this is how Rails accesses POST data.
This is loaded into the
@shout instance variable.
When we used the interactive console, we used
shout.save to save the data. If it was successful, it returns 'true', otherwise 'false' is returned. So on line 5 we check whether the shout can be saved into the database, if so, we load a success notice in the session's 'flashdata', and redirect back to the index.
Otherwise, we load index's view file through the
render function.
Validation
You may be wondering where our validation is handled, since we did this in the controller in CodeIgniter. In Rails, we place the rules inside the Model.
When Rails attempts to insert data to the database (eg. with
@shout.save), the Model will automatically check the data against our rules before attempting to save it. Thus, if validation fails, then
@shout.save will return 'false', the view will be re-loaded and the error messages will display.
Inside the Shout model, insert the following before we define
self.all_shouts:
validates_presence_of :message validates_length_of :name, :within => 1..255 validates_format_of :email, :with => /^[a-z0-9_.-]+@[a-z0-9-]+\.[a-z.]+$/i
In true Ruby style, the validation rules read pretty much like natural English. On line 1 we ensure a 'message' exists.
Next we check the length of the 'name' field – it must be between 1 and 255 characters in length.
1..255 defines a range. We could define a range of between 10 and 15 with
10..15, for example.
Finally, we check the submitted email is in the format of an email address using a regular expression. Basically, the submitted email must contain five parts:
- A string containing a combination of letters, numbers, an underscore, full-stop or hyphen;
- The @ symbol;
- A string containing a combination of letters, numbers or a hyphen;
- A dot;
- A string containing a combination of letters or a hyphen.
And finally...
We now just need to display the error/success messages in the view (see above). Add the following at the top of the index view file:
<%= error_messages_for :shout, :header_message => nil, :message => nil %> <%= '<p class="success">' + flash[:notice] + '</p>' if flash[:notice] %>
On the first line we render any error messages for the shout form. We set
:header_message and
:message to
nil (Ruby uses 'nil', PHP uses 'null') to stop the function displaying the normal messages it displays (we just want the error messages).
The second line displays our flash'ed success message. Note the
if flash[:notice] at the end of the line. This ensures everything before it will attempt to display if the flash notice exists.
Obviously, we could also have done the following instead; however it would have had the same effect:
<% if flash[:notice] %> <%= '<p class="success">' + flash[:notice] + '</p>' %> <% end %>
That's It!
Go try it out, it should function exactly as the CodeIgniter version yet in quite a bit less code (which is also much easier to read!)
CodeIgniter:
Controller: 31 lines
Model: 33 lines
View: 69 lines
Total: 133 lines
Ruby on Rails:
Controller: 19 lines
Model: 11 lines
View: 64 lines
Total: 94 lines
While this tutorial may have made the process of developing a Rails application long-winded, take a look back through your code and see how simple it all is!
If this tutorial has sparked your interest in Ruby and Ruby on Rails, I strongly recommend picking up a copy of Rails for PHP Developers, Agile Web Development with Rails and/or Programming Ruby from the Pragmatic Programmers.
Now go! Go explore the world of Rails!
- Follow us on Twitter, or subscribe to the NETTUTS RSS Feed for more daily web development tuts and articles. | http://code.tutsplus.com/tutorials/from-codeigniter-to-ruby-on-rails-a-conversion--net-6017 | CC-MAIN-2014-10 | refinedweb | 4,689 | 65.12 |
The Vehicle.java is the file that I am modifying. The second code is the testdriver.
public class Vehicle { String make; String model; int year; public Vehicle(String s1, String s2, int y) { if (y >= 1980 && y<= 2012) { year = y; } else { y = 2010; } } public String getMake() { return make; } public String getModel() { return model; } public int getYear() { return year; } public void setMake(String s) { make = s; } public void setModel(String s) { model = s ; } public void setYear(int y) { year = y; } public String toString() { String str = "Make:\t" + make + "\nModel:\t" + model + "\nYear:\t"+year; return str; } } ]
import java.util.Scanner; public class Assignment5 { public static void main(String[] args) { // for getting user input Scanner scan = new Scanner(System.in); //prompt the user and read in the information System.out.print("Enter the vehicle make:\t\t"); String x = scan.nextLine(); System.out.print("Enter the vehicle model:\t"); String y = scan.nextLine(); System.out.print("Enter the vehicle year:\t\t"); int z = scan.nextInt(); // clear the scanner buffer to the end of the line scan.nextLine(); // I called the variables x, y and z to show they need not // be the same name as those in the vehicle class. Vehicle v1 = new Vehicle(x, y, z); // now that I have a vehicle object I can use the reference // to invoke the methods and print their results to the screen System.out.println("\n" + v1.toString()); if(v1.getMake().equals("Ford")) System.out.println("\nThe vehicle is a Ford"); else if(v1.getMake().equals("Chevrolet")) System.out.println("\nThe vehicle is a Chevy"); else System.out.println("\nThe vehicle is neither a Ford nor a Chevy"); // I'm going to change the state of the object and overwrite the user input v1.setMake("Dodge"); v1.setModel("Viper"); v1.setYear(2008); System.out.println("\nUpdated Vehicle Information:\n" + v1.toString()); } }
I get this error when I submit it
Program Input
Ford
F-150
2010
Program Output
Enter the vehicle make: Enter the vehicle model: Enter the vehicle year:
Make: null
Model: null
Year: 2010
Exception in thread "main" java.lang.NullPointerException
at Assignment5.main(Assignment5.java:29)
I checked the discussion board on my schools site and someone posted this answer
This is probably one of two things:
1) The member variable you're returning with getMake() was not assigned in the constructor, and as it's a String object, still has the default value of "null" (reference to nothing).
2) You're returning something else with getMake(); make sure it just returns the String member variable you assigned in the constructor.
Either way, if line 27 works but line 29 fails it's because you're calling ".equals" on a null String (a variable not actually pointing to any object). | https://www.daniweb.com/programming/software-development/threads/320765/exception-in-thread-main-java-lang-nullpointerexception | CC-MAIN-2017-47 | refinedweb | 459 | 53.1 |
in ST2, Transpose command works with words.In ST3, it doesn't works anymore because of the added parameter with a default value False:
def transpose_selections(edit, view, can_transpose_words = False):
As I've had no answer if it's by design or a bug, maybe it is not enough detailed.With this line of code:
me.fireEvent('disable',this);
if I put my caret right before the "this" and hit "Transpose" keybinding results:In ST2:
me.fireEvent(this,'disable');
In ST3:
me.fireEvent('disable't,his);
The ST2 behavior is very handy, the ST3 not too much.
startup, version: 3053 windows x64 channel: dev
Yep, I can confirm this, too (Sublime Text 3, Build 3059)...
The Default Key Bindings in Windows offer no guidance either. Maybe a Boolean option there would be a good idea...
Don't expect any fix soon, this bug report (and all others I've reported) was totally ignored.
Fortunately, you can fix it yourself by copying the file "transpose.py" from the Default package (this is for Windows, it's a zip file):
C:\Program Files\Sublime Text 3\Packages\Default.sublime-package
to your user folder:
\Packages\User\transpose.py
and change the default value of "can_transpose_words" argument to True:
def transpose_selections(edit, view, can_transpose_words = True):
The drawback is that you override the standard source file and you have to check when ST is updated if this source is modified.
A better workaround is to create a transpose_fix.py in your User directory containing:
transpose_fix.py
import Default.transpose
# Change the default value for can_transpose_words argument to be the same as ST2
Default.transpose.transpose_selections.__defaults__ = (True,)
When (If?) this issue is resolved, you just have to delete this file.
Why don't you just modify the key binding? Does it not forward the parameter? (can't check myself right now)
No, but it's probably the best way to solve this issue. | https://forum.sublimetext.com/t/st3-transpose-words-is-broken/10490/3 | CC-MAIN-2017-39 | refinedweb | 319 | 59.19 |
Sorting element in array by frequency using C
Sorting element in array by frequency in C
Here we will learn about Sorting element in array by frequency in C programming language. To sort an array based on its frequency means we need to sort the given element in descending order of their frequency. We are given an array and we have to sort the array based on their frequency.
Example
Input :arr[6]=[3, 2, 3, 1, 2, 2]
Output: 2 2 2 3 3 1
Method :
- First we declare an array.
- Declare two 2d array arr[MAX][2] and brr[MAX][2].
- Store 1d array in 0th index of arr array.
- Set arr[][1] = 0for all indexes upto n.
- Now, count the frequency of elements of the array.
- If element is unique the push it at brr[][0] array, and its frequency will represent by brr[][1].
- Now, sort the brr on the basis of frequency.
- Print the brr array on basis of their frequency.
In this method we first count the frequency of each elements, so to know more about the algorithm for finding the frequency of elements of the array. You can check out the page given below,
Code in C
#include <stdio.h> #define MAX 256 int main () { int a[]={1, 2, 1, 1, 2, 3, 3, 3, 3, 0}; int n = sizeof(a)/sizeof(a[0]); int arr[MAX][2], brr[MAX][2]; int k = 0, temp, count; for (int i = 0; i < n; i++) { arr[i][0] = a[i]; arr[i][1] = 0; } // Unique elements and its frequency are stored in another array for (int i = 0; i < n; i++) { if (arr[i][1]) continue; count = 1; for (int j = i + 1; j < n; j++) { if (arr[i][0] == arr[j][0]) { arr[j][1] = 1; count++; } } brr[k][0] = arr[i][0]; brr[k][1] = count; k++; } n = k; //Store the array and its frequency in sorted form for (int i = 0; i < n - 1; i++) { temp = brr[i][1]; for (int j = i + 1; j < n; j++) { if (temp < brr[j][1]) { temp = brr[j][1]; brr[j][1] = brr[i][1]; brr[i][1] = temp; temp = brr[j][0]; brr[j][0] = brr[i][0]; brr[i][0] = temp; } } } for (int i = 0; i < n; i++) { while (brr[i][1] != 0) { printf (" %d ", brr[i][0]); brr[i][1]--; } } return 0; }
Output
3 3 3 3 1 1 1 2 2 0
Login/Signup to comment | https://prepinsta.com/c-program/sorting-element-in-array-by-frequency/ | CC-MAIN-2022-21 | refinedweb | 415 | 64.75 |
Sunday, March 21, 2004
Another good day of sprints. We fixed some hard bugs in the AST branch and had a planning session for Python 2.4.
There were more people around for day 2 of the sprints. Jim Fulton gave a day-long Zope 3 tutorial for about 10 people. (We hit a snag getting a projector for Jim, but Steve Holden and the Cafritz Center staff worked it pretty easily.) I'd guess there were about 50 people there by the afternoon.
We made better progress on closures bugs from the AST branch today. Yesterday we got stuck trying to figure out where the compiler was going wrong. With a fresh start today, it was pretty straightforward.
The AST branch has a new symbol table that has a simpler approach for deciding the scope of variables. It works in two completely separate passes over a module. (The old symbol table tried to work incrementally, revisiting child nodes as their parents were processed. Very complicated.) The first pass gathers evidence about each variable -- whether it's assigned to, passed as a parameter, bound by import, used by not defined, etc. The second pass works top-to-bottom to determine the scope -- local, global, free, or cell. The bindings visible in each function are passed in during this pass.
We found two bugs in the symbol table. The first bug was with cases like this:
def f(): x = 1 def g(): def h(): return x return h return g
The symbol table did not handle g() correctly. It wasn't generating any symbols for g(). It needed to mark x as free in g(), so that the code generator would build a closure to pass the binding of x through to h.
When we fixed that bug, we introduced another related bug. The symbol table was marking variables free instead of global. The second pass was including the bindings at module scope in the set of visible bindings passed to functions, but it should only have passed bindings from other function scopes. If the only binding for a variable is at module level, it's treated as global rather than free. (That's an implementation centric notion. They're all "free variables" in the academic sense, but Python has special rules for the top level.)
We fixed some other simple problems. Generators weren't getting the right flag set on the code object, so they weren't being called as generators. And we weren't passing through compiler flags set by future statements, which caused a few failures. We also discovered that we haven't finished code generation for extended slices.
There is still a lot of tedious bug fixing to do, but the branch is in much better shape. setup.py compiles and runs correctly now. You can actually run "make test" without crashing. Many tests fail, but the majority run successfully. It's much easier to track down bugs when the regression tests are available.
Guido was out sick today, but he asked us to have a Python 2.4 planning session anyway. Lots of the locals (me included, even though I'm not really local anymore) were only around for the weekend.
My chief goal is to finish the AST branch in April so that it can be included in Python 2.4. We agreed that it would be included if it was ready by early May. If not, we'll wait for a future release. If it does go in, we will probably need an extra alpha or beta release to make sure we flush out any bugs. Armin Rigo also pointed out that we'll need to coordinate work on the new compiler with work on new features like generator expressions that require compile changes.
Anthony Baxter is going to be the release manager again. No one else volunteered, hardly a surprise, but Anthony's been very capable.
There aren't a lot of new features going into the 2.4 release. It feels more like a release to stay on schedule than a release to get good new features in the hands of users. Generator expressions and function decorators are the top new features, but neither seems likely to cause lots of people to upgrade. Perhaps Raymond Hettinger's micro-optimizations will be the big news, but it's hard to judge what effect they have on real program performance.
We definitely need to work on the PEP for generator expressions. Guido jumped the gun by approving the PEP, because we didn't follow the regular PEP process. There's no specification or rationale, just a rough description and a few examples. I'm glad Guido approved the feature, but we need to go back and write the specification now. (I noticed today, Tuesday, that Guido is having second thoughts about the funny namespace rules that are being proposed.) | http://www.python.org/~jeremy/weblog/040321.html | crawl-002 | refinedweb | 815 | 73.37 |
Difference between revisions of "Import"
Revision as of 03:14, 14 March 2019
The
import statement is used to import functions and other definitions from another module. The shortest form of the import statement is
import Data.Maybe
that imports the named module (in this case
Data.Maybe).
However, there are more options:
- Modules can be imported qualified (forcing an obligatory namespace qualifier to imported identifiers).
- Some identifiers can be skipped via the hiding clause.
- The module namespace can be renamed, with an as clause.
Getting all of this straight in your head is quite tricky, so here is a table (lifted directly from the language reference manual) that roughly summarises the various possibilities:
Supposing that the module
Mod exports three functions named
x,
y and
z...
Note that multiple import statements for the same module are also allowed, so it is possible to mix and match styles if its so desired (for example, importing operators directly and functions qualified)
Hiding Prelude.
See also
- Import modules properly - Some thoughts that may help to decide what form of the import statement to use.
- PackageImports GHC's extensions to allow the use of package-qualified import syntax. | https://wiki.haskell.org/index.php?title=Import&diff=prev&oldid=62817 | CC-MAIN-2020-10 | refinedweb | 195 | 52.19 |
the information.
Input
Your program will input data for different sets of stockbrokers. Each set starts with a line with the number of stockbrokers. Following this is a line for each stockbroker which contains the number of people who they have contact with, who these people are, and the time taken for them to pass the message to each person. The format of each stockbroker line is as follows: The line starts with the number of contacts (n), followed by n pairs of integers, one pair for each contact. Each pair lists first a number referring to the contact (e.g. a ‘1’ means person number one in the set), followed by the time in minutes taken to pass a message to that person. There are no special punctuation symbols or spacing rules.
Each person is numbered 1 through to the number of stockbrokers. The time taken to pass the message on will be between 1 and 10 minutes (inclusive), and the number of contacts will range between 0 and one less than the number of stockbrokers. The number of stockbrokers will range from 1 to 100. The input is terminated by a set of stockbrokers containing 0 (zero) people.
Output
For each set of data, your program must output a single line containing the person who results in the fastest message transmission, and how long before the last person will receive any given message after you give it to this person, measured in integer minutes.
It is possible that your program will receive a network of connections that excludes some persons, i.e. some people may be unreachable. If your program detects such a broken network, simply output the message “disjoint”. Note that the time taken to pass the message from person A to person B is not necessarily the same as the time taken to pass it from B to A, if such transmission is possible at all.
Sample Input
3 2 2 4 3 5 2 1 2 3 6 2 1 2 2 2 5 3 4 4 2 8 5 3 1 5 8 4 1 6 4 10 2 7 5 2 0 2 2 5 1 5 0
Sample Output
3 2 3 10
Solution below . . .
import java.util.Iterator; import java.util.NoSuchElementException; import java.util.Scanner; /* * For all-pairs shortest path problems, Floyd-Warshall is a good * option. Given a directed weighted graph, it outputs a matrix * of all shortest paths between pairs of nodes. * * For this problem, each row of the matrix represents a broker. * We read each row to find brokers who can get a message to * every other broker, and select the broker with the smallest maximum. */ public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); AdjMatrixEdgeWeightedDigraph G; int brokers = sc.nextInt(); while (brokers != 0) { G = new AdjMatrixEdgeWeightedDigraph(brokers); for (int broker = 0; broker < brokers; broker++) { int connections = sc.nextInt(); for (int i = 0; i < connections; i++) { int w = sc.nextInt() - 1; int weight = sc.nextInt(); G.addEdge(new DirectedEdge(broker, w, weight)); } } FloydWarshall spt = new FloydWarshall(G); // find the broker with the shortest max distance int min = 9999; int minBroker = -1; for (int v = 0; v < G.V(); v++) { int max = -1; for (int w = 0; w < G.V(); w++) { if (v != w) { if (spt.hasPath(v, w)) { max = Math.max(spt.dist(v, w), max); } else { // this broker can't reach at least one target max = 9999; break; } } } if (max < min) { min = max; minBroker = v; } } if (minBroker == -1) { System.out.println("disjoint"); } else { System.out.println((minBroker + 1) + " " + min); } brokers = sc.nextInt(); } sc.close(); } } /****************************************************************** * * An edge-weighted digraph, implemented using an adjacency matrix. 
* ******************************************************************/ class AdjMatrixEdgeWeightedDigraph { private final int V; private int E; private DirectedEdge[][] adj; public AdjMatrixEdgeWeightedDigraph(int V) { this.V = V; this.E = 0; this.adj = new DirectedEdge[V][V]; } public int V() { return V; } public int E() { return E; } public void addEdge(DirectedEdge e) { int v = e.from(); int w = e.to(); if (adj[v][w] == null) { E++; adj[v][w] = e; } } public Iterable<DirectedEdge> adj(int v) { return new AdjIterator(v); } // support iteration over graph vertices private class AdjIterator implements Iterator<DirectedEdge>, Iterable<DirectedEdge> { private int v; private int w = 0; public AdjIterator(int v) { this.v = v; } public Iterator<DirectedEdge> iterator() { return this; } public boolean hasNext() { while (w < V) { if (adj[v][w] != null) return true; w++; } return false; } public DirectedEdge next() { if (!hasNext()) { throw new NoSuchElementException(); } return adj[v][w++]; } public void remove() { throw new UnsupportedOperationException(); } } } /****************************************************************** * * Immutable weighted directed edge. 
* ******************************************************************/ class DirectedEdge { private final int v; private final int w; private final int weight; public DirectedEdge(int v, int w, int weight) { this.v = v; this.w = w; this.weight = weight; } public int from() { return v; } public int to() { return w; } public int weight() { return weight; } } class FloydWarshall { // distTo[v][w] = length of shortest v->w path private int[][] distTo; // edgeTo[v][w] = last edge on shortest v->w path private DirectedEdge[][] edgeTo; private final int MAX = 9999; public FloydWarshall(AdjMatrixEdgeWeightedDigraph G) { int V = G.V(); distTo = new int[V][V]; edgeTo = new DirectedEdge[V][V]; // initialize distances to max value for (int v = 0; v < V; v++) { for (int w = 0; w < V; w++) { distTo[v][w] = MAX; } } // update distances using digraph for (int v = 0; v < G.V(); v++) { for (DirectedEdge e : G.adj(v)) { distTo[e.from()][e.to()] = e.weight(); edgeTo[e.from()][e.to()] = e; } } for (int i = 0; i < V; i++) { // compute shortest paths using only // 0, 1, ..., i as intermediate vertices for (int v = 0; v < V; v++) { if (edgeTo[v][i] == null) continue; // optimization for (int w = 0; w < V; w++) { if (distTo[v][w] > distTo[v][i] + distTo[i][w]) { distTo[v][w] = distTo[v][i] + distTo[i][w]; edgeTo[v][w] = edgeTo[i][w]; } } } } } public boolean hasPath(int s, int t) { return distTo[s][t] < MAX; } public int dist(int s, int t) { return distTo[s][t]; } } | http://eppsnet.com/2018/06/competitive-programming-poj-1125-stockbroker-grapevine/ | CC-MAIN-2021-10 | refinedweb | 992 | 55.34 |
Apache HTrace is an open source framework for distributed tracing. It can be used with both standalone applications and libraries.
By adding HTrace support to your project, you will allow end-users to trace their requests. In addition, any other project that uses HTrace can follow the requests it makes to your project. That`s why we say HTrace is “end-to-end.”.
In order to use HTrace, your application must link against the appropriate core library. HTrace`s core libraries have been carefully designed to minimize the number of dependencies that each one pulls in. HTrace currently has Java, C, and C++ support.
HTrace guarantees that the API of core libraries will not change in an incompatible way during a minor release. So if your application worked with HTrace 4.1, it should continue working with HTrace 4.2 with no code changes. (However HTrace 5 may change things, since it is a major release.)
The Java library for HTrace is named htrace-core4.jar. This jar must appear on your CLASSPATH in order to use tracing in Java. If you are using Maven, add the following to your dependencyManagement section:
<dependencyManagement> <dependencies> <dependency> <groupId>org.apache.htrace</groupId> <artifactId>htrace-core4</artifactId> <version>4.1.0-incubating</version> </dependency> ... </dependencies> ... </dependencyManagement>
If you are using an alternate build system, use the appropriate configuration for your build system. Note that it is not a good idea to shade htrace-core4, because some parts of the code use reflection to load classes by name.
The C library for HTrace is named libhtrace.so. The interface for libhtrace.so is described in htrace.h
As with all dynamically loaded native libraries, your application or library must be able to locate libhtrace.so in order to use it. There are many ways to accomplish this. The easiest way is to put libhtrace.so in one of the system shared library paths. You can also use RPATH or LD_LIBRARY_PATH to alter the search paths which the operating system uses.
The C++ API for HTrace is a wrapper around the C API. This approach makes it easy to use HTrace with any dialect of C++ without recompiling the core library. It also makes it easier for us to avoid making incompatible ABI changes.
The interface is described in htrace.hpp the same as using the C API, except that you use htrace.hpp instead of htrace.h.
HTrace is based on a few core concepts.
Spans in HTrace are lengths of time. A span has a beginning time in milliseconds, an end time, a description, and many other fields besides.
Spans have parents and children. The parent-child relationship between spans is a little bit like a stack trace. For example, the span graph of an HDFS “ls” request might look like this:
ls +--- FileSystem#createFileSystem +--- Globber#glob | +---- GetFileInfo | +---- ClientNamenodeProtocol#GetFileInfo | +---- ClientProtocol#GetFileInfo +--- listPaths +---- ClientNamenodeProtocol#getListing +---- ClientProtocol#getListing
“ls” has several children, “FileSystem#createFileSystem”, “Globber#glob”, and “listPaths”. Those spans, in turn, have their own children.
Unlike in a traditional stack trace, the spans in HTrace may be in different processes or even on different computers. For example, ClientProtocol#getListing is done on the NameNode, whereas ClientNamenodeProtocol#getListing happens inside the HDFS client. These are usually on different computers.
Each span has a unique 128-bit ID. Because the space of 128-bit numbers is so large, HTrace can use random generation to avoid collisions.
TraceScope objects manage the lifespan of Span objects. When a TraceScope is created, it often comes with an associated Span object. When this scope is closed, the Span will be closed as well. “Closing” the scope means that the span is sent to a SpanReceiver for processing. We will talk more about what that means later. For now, just think of closing a TraceScope as similar to closing a file descriptor-- the natural thing to do when the TraceScope is done.
HTrace tracks whether a trace scope is active in the current thread by using thread-local data. This approach makes it easier to add HTrace to existing code, by avoiding the need to pass around context objects.
TraceScopes lend themselves to the try... finally pattern of management:
TraceScope computationScope = tracer.newScope("CalculateFoo"); try { calculateFoo(); } finally { computationScope.close(); }
Any trace spans created inside calculateFoo will automatically have the CalculateFoo trace span we have created here as their parents. We don`t have to do any additional work to set up the parent/child relationship because the thread-local data takes care of it.
In Java7, the try-with-resources idiom may be used to accomplish the same thing without a finally block:
try (TraceScope computationScope = tracer.newScope("CalculateFoo")) { calculateFoo(); }
The important thing to remember is to close the scope when you are done with it.
Note that in the C++ API, the destructor of the htrace::Scope object will automatically close the span.
htrace::Scope(tracer_, "CalculateFoo"); calculateFoo();
TraceScope are associatewith particular threads. If you want to pass a trace scope to another thread, you must detach it from the current one first. We will talk more about that later in this guide.
Tracers are the API for creating trace scope objects. You can see that in the example above, we called the Tracer#newScope function to create a scope.
It is difficult to trace every operation. The volume of trace span data that would be generated would be extremely large! So we rely on sampling a subset of all possible traces. Tracer objects contain Samplers. When you call Tracer#newScope, the Tracer will consult that Sampler to determine if a new span should be created, or if an empty scope which contains no span should be returned. Note that if there is already a currently active span, the Tracer will always create a child span, regardless of what the sampler says. This is because we want to see the complete graph of every operation, not just “bits and pieces.” Tracer objects also manage the SpanReceiver objects which control where spans are sent.
A single process or library can have many Tracer objects. Each Tracer object has its own configuration. One way of thinking of Tracer objects is that they are similar to Log objects in log4j. Just as you might create a Log object for the NameNode and one for the DataNode, we create a Tracer for the NameNode and another Tracer for the DataNode. This allows users to control the sampling rate for the DataNode and the NameNode separately.
Unlike TraceScope and Span, Tracer objects are thread-safe. It is perfectly acceptable (and even recommended) to have multiple threads calling Tracer#newScope at once on the same Tracer object.
The number of Tracer objects you should create in your project depends on the structure of your project. Many applications end up creating a small number of global Tracer objects. Libraries usually should not use globals, but associate the Tracer with the current library context.
HTrace contains many helpful wrapper objects like TraceRunnable TraceCallable and TraceExecutorService These helper classes make it easier for you to create trace spans. Basically, they act as wrappers around Tracer#newScope.
HTrace is a pluggable framework. We can configure where trace spans are sent at runtime, by selecting the appropriate SpanReceiver.
FoobarApplication | V htrace-core4 | V HTracedSpanReceiver OR LocalFileSpanReceiver OR StandardOutSpanReceiver OR ZipkinSpanReceiver OR ...
As a developer integrating HTrace into your application, you do not need to know what each and every SpanReceiver does-- only the ones you actually intend to use. The nice thing is that users can use any span receiver with your project, without any additional effort on your part. The span receivers are decoupled from the core library.
When using Java, you will need to add the jar file for whichever span receiver you want to use to your CLASSPATH. (These span receivers are not added to the CLASSPATH by default because they may have additional dependencies that not every user wants.) For C and C++, a more limited set of span receivers is available, but they are all integrated into libhtrace.so, so no additional libraries are needed.
Clearly, HTrace requires configuration. We need to control which SpanReceiver is used, what the sampling rate is, and many other things besides. Luckily, as we discussed earlier, the Tracer objects maintain this configuration information for us. When we ask for a new trace scope, the Tracer knows what configuration to use.
This configuration comes from the HTraceConfiguration object that we supplied to the Tracer#Builder earlier. In general, we want to configure HTrace the same way we configure anything else in our distributed system. So we normally create a subclass of HTraceConfiguration that accesses the appropriate information in our existing configuration system.
To make this a little more concrete, let
s suppose we are writing Bobs Distributed System. Being a pragmatic (not to mention lazy) guy, Bob has decided to just use Java configuration properties for configuration. So our Tracer#Builder invoation would look something like this:
this.tracer = new Tracer.Builder("Bob"). conf(new HTraceConfiguration() { @Override public String get(String key) { return System.getProperty("htrace." + key); } @Override public String get(String key, String defaultValue) { String ret = get(key); return (ret != null) ? ret : defaultValue; } }). build();
You can see that this configuration object maps every property starting in “htrace.” to an htrace property. So, for example, you would set the Java system property value “htrace.span.receiver.classes” in order to control the HTrace configuration key “span.receiver.classes”.
Of course, Bob probably should have been less lazy, and used a real configuration system instead of using Java system properties. This is just a toy example to illustrate how to integrate with an existing configuration system.
Bob might also have wanted to use different prefixes for different Tracer objects. For example, in Hadoop you can configure the FsShell Tracer separately from the NameNode Tracer, by setting “fs.shell.htrace.span.receiver.classes”. This is easy to control by changing the HTraceConfiguration object that you pass to Tracer#Builder.
Note that in C and C++, this system is a little different, based on explicitly creating a configuration object prior to creating a tracer, rather than using a callback-based system.
SpanReceiver objects often need to make a network connection to a remote serveice or daemon. Usually, we don`t want to create more than one SpanReceiver of each type in a particular process, so that we can optimize the number of these connections that we have open. TracerPool objects allow us to acheieve this.
Each Tracer object belongs to a TracerPool. When a call to Tracer#Builder is made which requests a specific SpanReceiver, we check the TracerPool to see if there is already an instance of that SpanReceiver. If so, we simply re-use the existing one rather than creating a new one.
Normally, you don`t need to worry about TracerPools. However, if you have an explicit need to create multiple SpanReceivers of the same type, you can do it by using a TracerPool other than the default one, or by explicitly adding the SpanReceiver to your Tracer once it has been created. This is not a very common need.
When the application terminates, we will attempt to close all currently open SpanReceivers. You can attempt to close the SpanReceivers earlier than that by calling tracer.getTracerPool().removeAndCloseAllSpanReceivers().
So far, we have described how to use HTrace inside a single process. However, since we are dealing with distributed systems, a single process is not enough. We need a way to send HTrace information across the network.
Unlike some other tracing systems, HTrace works with many different RPC systems. You do not need to change the RPC framework you are using in order to use HTrace. You simply need to find a way to pass HTrace information using the RPC framework that you`re already using. In most cases, what this boils down to is figuring out a way to send the 128-bit parent ID of an operation over the network as an optional field.
Let`s say that Bob is writing the server side of his system. If the client sent its parent ID over the wire, Bob might write code like this:
BobRequestProto bp = ...; SpanId spanId = (bp.hasSpanId()) ? bp.getSpanId() : SpanId.INVALID; try (TraceScope scope = tracer.newScope("bobRequest", spanId)) { doBobRequest(bp); }
By passing the spanId to Tracer#newScope, we ensure that any new span we create will have a record of its parent.
Sometimes, you end up performing work for a single request in multiple threads. How can we handle this? Certainly, we can use the same approach as we did in the RPC case above. We can have the child threads create trace scopes which use our parent ID object. SpanId objects are immtuable and easy to share between threads.
try (TraceScope bigScope = tracer.newScope("bigComputation")) { SpanId bigScopeId = bigScope.getCurrentSpanId(); Thread t1 = new Thread(new Runnable() { @Override public void run() { TraceScope scope = (bigScopeId.isValid()) ? tracer.newScope("bigComputationWorker", bigScopeId) : tracer.newNullScope(); try { doWorkerStuff(); } finally { scope.close(); } } }, "myWorker"); t1.start(); t1.join(); }
Note that in this case, the two threads are not sharing trace scopes. Instead, we are setting up a new trace scope, which may have its own span, which has the outer scope as a parent. Note that HTrace will be well-behaved even if the outer scope may be closed before the inner one. The SpanId object of a TraceScope continues to be valid even after a scope is closed. It`s just a number, essentially-- and that number will not be reused by any other scope.
Why do we make the worker Thread call newNullScope in the case where the outer scope
s span id is invalid? Well, we dont want to ever create the inner span if the outer one does not exist. Calling newNullScope ensures that we get a scope with no span, no matter what samplers are configured.
What if we don`t want to create more than one span here? In that case, we need to have some way of detaching the TraceScope from the parent thread, and re-attaching it to the worker thread. Luckily, HTrace has an API for that.
final TraceScope bigScope = tracer.newScope("bigComputation"); bigScope.detach(); Thread t1 = new Thread(new Runnable() { @Override public void run() { bigScope.reattach(); try { doWorkerStuff(); } finally { bigScope.close(); } } }, "myWorker"); t1.start(); t1.join();
Note that in this case, we don`t need to close the TraceScope in the containing thread. It has already been closed by the worker thread. | https://apache.googlesource.com/incubator-retired-htrace/+/5d38e876d1f4dc400e2783a9d26fa76ef6235023/src/main/site/markdown/developer_guide.md | CC-MAIN-2022-40 | refinedweb | 2,417 | 57.87 |
Let’s imagine a monolith project with an enormous code base that has been developed for a couple of decades (unbelievable, right?). Such a project is probably going to have a myriad of features and a considerable (hopefully!) number of automated tests verifying our priceless features on multiple levels, all the way up the famous, or infamous, depending on who you ask, testing pyramid: from the unit foundation down below to the end-to-end peak high above.
Now, after this exercise in imagining technical stuff, it should be easy to accept that such a complex piece of software needs pretty solid test management tooling to facilitate the development life cycle and make continuous integration a smooth and sleek process.
As the number of product versions, development branches, test environments, and all the other imaginable things grows, so does the number of attributes used to tag test classes and methods to properly categorize them and define which tests should run together. Having said that, what should you do when there’s a need to add another attribute and mark a bunch of selected tests across the solution? When the bunch is just two tests, that’s pretty easy, right? But what if it’s a bunch of two thousand?
That’s where Roslyn comes to the rescue!
Chances are you’ve already heard about Roslyn and might even have worked with it. But, for people out there who are not familiar with this piece of the .NET ecosystem, Roslyn is the .NET Compiler Platform: a set of open-source compilers and APIs for code analysis and refactoring.
In this article I will show you how to utilize one of the prominent Roslyn features to automate some of the routine tasks from our refactoring or test automation agenda: marking selected test methods and classes with a new attribute, and adding the required using directives, all with proper formatting.
We’re gonna use CSharpSyntaxRewriter — a base class that implements the Visitor pattern and allows us to override a multitude of Visit methods for any type of source code element or block with our own implementation. We are particularly interested in the VisitMethodDeclaration and VisitClassDeclaration methods, which are going to help us achieve the above goals.
Let’s start with creating an attribute that is going to accept a string argument. In our case, we would want to mark specific tests in our test projects with a “Top Priority” category attribute.
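The attribute definition itself isn’t included here, so let me sketch a minimal version. The names are illustrative; if you’re on NUnit, you would typically just reuse its built-in CategoryAttribute instead:

```csharp
using System;

// A sketch of a category attribute that accepts a plain string.
// AllowMultiple lets a single test carry several categories.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
public sealed class CategoryAttribute : Attribute
{
    public string Name { get; }

    public CategoryAttribute(string name) => Name = name;
}
```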
But what if we don’t want to use the magic strings in our solution but rather have a dedicated enumeration type. So, instead of this,
[Category("Top Priority")]
we would rather have this?
[Category(Priority.Top)]
Let’s create another attribute with an enumeration argument.
Okay, we’ve created attribute definitions but now we need to actually insert our attributes next to our method or class names. Also, it would be nice to have proper formatting. And by proper formatting, I mean correct position of the inserted attribute: the same level of indentation as for other attributes of a class or a method, the class/method signature starting on the next line after a list of attributes, etc.
It’s pretty easy to do it right if we already have proper formatting for existing members. We just need to copy the leading and trailing trivia of existing members and re-use it for our attribute.
Our custom AddAttribute method is going to receive a MemberDeclarationSyntax parameter. We’re going to provide it in the overridden VisitMethodDeclaration and VisitClassDeclaration methods as both Method and Class nodes inherit this base class. The trivia stuff will take care of the formatting.
Both visitor methods are going to have just the same basic logic for now. We’re gonna get the node and add our attribute to the attribute list of either a method node or a class node.
To avoid the manual update of the modified files that got new attributes, we would like Roslyn to deal with “using” directives too. The following code snippet should take care of that.
We’re gonna add our namespace only if it’s not already present in the list of using directives. Note another two pieces of the formatting magic above.
Call to NormalizeWhitespace method is necessary to avoid concatenated using string: “usingOurNamespace” without a space between the directive and the namespace. Don’t ask me why it works like this!
Using the trailing trivia with ElasticCarriageReturnLineFeed parameter is the most simple way to insert an empty line between a block of using directives and a namespace definition.
Now, let’s get everything we learned above together and create a small program that we can utilize for our refactoring purposes.
BaseRewriter class is going to inherit the CSharpSyntaxRewriter and contain a common logic used by both MethodRewriter and ClassRewriter guys.
We’re gonna use several code analysis APIs in our ModifySolutionAsync method to traverse the solution documents and modify them according to our needs. The logic should be pretty straightforward. We’re gonna go through the list of selected projects calling the Visit method on each appropriate file and then replace the old syntax tree with the new one containing our custom attribute nodes. We also are going to track the document state using the IsModified flag and insert the using directives only for modified files.
MethodRewriter and ClassRewriter will have their own CreateAttribute logic and a different list of projects or methods to go through. Also, we’re gonna change the IsModified flag’s value in the overridden Visit methods here.
Finally, the Program.cs is going to have a program entry point to initialize our rewriters and call the ModifySolutionAsync method. The final tip is to call MSBuildLocator.RegisterDefaults() to properly register the correct MSBuild path for the installed Visual Studio version. Odds are it can be corrupted in some cases that will prevent Roslyn to discover any documents in the provided solution.
I wanted to demonstrate how Roslyn can be used to automate some routine refactoring or test automation tasks that can pop-up on your agenda any time if you work on a mature project with a significant codebase. The scenarios covered in this article are just a tip of the iceberg as Roslyn is able to deal with much more difficult and elaborate cases.
Don’t hesitate to play with it — it may save you a lot of time and effort.
That’s all folks!
Previously published at
Create your free account to unlock your custom reading experience. | https://hackernoon.com/complex-refactoring-with-roslyn-compilers-h32n310k | CC-MAIN-2021-17 | refinedweb | 1,080 | 52.49 |
digitalmars.D - Re: const?? When and why? This is ugly!
- Jason House <jason.james.house gmail.com> Mar 02 2009
hasen Wrote:hasen wrote:I haven't been following D for over a year .. now I notice there's a const!! [...]
It seems the argument is centered around making "pure functions" possible. But, surely there's a better way of doing this than borrowing const from C++. Just like import is a better approach than #include and `namespace` Yesterday I noticed this page which I haven't seen before posting the topic yesterday. Argument #2 reads: "It makes for interfaces that can be relied upon, which becomes increasingly important the more people that are involved with the code. In other words, it scales very well." I think this is not worth the complexity, but then again, Argument#4 renders #2 as insiginificant, so I won't bother too much arguing against #2. #4 reads: "The future of programming will be multicore, multithreaded. Languages that make it easy to program them will supplant languages that don't." So, this is driven by the need to popularize D as a very good tool for multicore applications! "Transitive const is key to bringing D into this paradigm." Really? have you considered other possibilities? How about, adding a new attribute to functions: `pure` pure real sin( real x ) { ... } and design the rest around this concept. The compiler must make sure that this function is really pure: - native types must be passed by value, not by reference - doesn't accept pointers - <insert some condition for objects and arrays> If these requirements aren't met, the compiler will spit some error message "in function <F>: doing <X> voilates the `pure` attribute" objects and arrays will need some special treatment or requirements in order to be passed to pure functions. What are those requirements? I have given it a very deep thought yet(*), but I'm sure this approach is better than `const`. Also, make `string` a builtin type that's immutable (like python), and discourage the use of char[] as a type for strings, (unless used to implement a special string class). The need for a special string type also stems from the fact that char[a..b] slices bytes, not characters! 
(*) A quick idea might be: - For arrays, any statement of the type `a[b] = c` will be illegal inside a pure function. and pure functions can only call pure functions. - For objects, any statement of the form `a.b = c` is illegal, and there must be a way to know if a method call will change the object or not. (It would be best if the compiler could detect this automatically if it's possible). - Another approach, is to prohibit passing objects all together, and introduce some new construct that's immutable (such as a tuple, again, like python). `pure` will probably have the same viral effects as `const`, in that any function called from a pure function must also be pure, and this viral nature will propagate to classes/objects as well. However, the advantage is not complicating the type system more than is needed. Also, what's the deal with const pointers?? Why should `pure` function be able to use pointers at all? Is there any real-life use case where a pure function needs to access memory instead of some abstract concept like a variable/array/tuple? If you think about it, CTFE (compile time function execution) are `pure` already, and they're detected automatically by the compiler, with no need for explicitly marking them as such.
You're getting closer... Your scheme uses a lot of deep copying which can kill performance. What if there was a way to skip such deep copies? Or at least force the caller to copy the data before use? How do you implement thread safe deep copies?
Mar 02 2009 | http://www.digitalmars.com/d/archives/digitalmars/D/Re_const_When_and_why_This_is_ugly_85182.html | CC-MAIN-2015-40 | refinedweb | 646 | 73.88 |
Hi all,
I m new to C++ and very desperately looking for some help in my code.Im trying to write a program that uses bisection method to find the root of a function.Im having problems as my results are not accurate enough and my function (f) does not produce answers in required decimal points.
#include <iostream> #include<cdtg> #include<cstdlib> using namespace std; void readParameters(float& x1,float& x2,float& epsilon) { cout << endl << endl << "The Bisection Method" << endl; cout << "Please input x1, x2 and epsilon seperated by whitespace: "; cin >> x1; cin >> x2; cin >> epsilon; cout << endl << endl; } float f(float x) { return (x*x*x)-(3.17*x*x)-(4.835*x)+11.01; } void findZero(float& x1,float& x2,float epsilon) { float low, high, midpoint; if (f(x1) <= 0) { low = x1; high = x2; } else { low = x2; high = x1; } midpoint = low + (high-low)/2; while (abs(high - low) > epsilon) { if (f(midpoint) <= 0) { low = midpoint; } else{ high = midpoint; } midpoint = low + (high-low)/2; } if (low>high) { x2=low; x1=high; }else{ x1=low; x2=high; } } int main() { float x1 = 0.0; float x2 = 0.0; float epsilon = 0.0; readParameters(x1,x2,epsilon); if (x1 > x2) { cout<<"Empty interval"<<endl; return 0; } findZero(x1,x2,epsilon); cout << "Zero is in the interval [x1,x2] = [" << x1 << "," << x2 << "]" << endl; cout << "f(x1) = " << f(x1) << endl; cout << "f(x2) = " << f(x2) << endl; }
Can anybody please
help me.
Thanks.
Sana | https://www.daniweb.com/programming/software-development/threads/273081/help | CC-MAIN-2017-43 | refinedweb | 238 | 64.54 |
If you want to create 2D physics-driven games and applications, Box2D is the best choice available. Box2D is a 2D rigid body simulation library used in some of the most successful games, such as Angry Birds and Tiny Wings on iPhone or Totem Destroyer and Red Remover on Flash. Google them, and you'll see a lot of enthusiastic reviews.
Before we dive into the Box2D World, let me explain what is a rigid body . It's a piece of matter that is so strong that it can't be bent in any way. There is no way to modify its shape, no matter how hard you hit it or throw it. In the real world, you can think about something as hard as a diamond, or even more. Matter coming from outer space that can't be deformed.
Box2D only manages rigid bodies, which will be called just "bodies" from now on, but don't worry, you will also be able to simulate stuff which normally is not rigid, such as bouncing balls.
Let's see what you are about to learn in this chapter:
Downloading and installing Box2D for Flash
Including required classes in your Flash projects
Creating your first Box2D World
Understanding gravity and sleeping bodies
Running your first empty simulation, handling time steps, and constraints
By the end of the chapter, you will be able to create an empty, yet running world where you can build your awesome physics games.
You can download the latest version of Box2D for Flash either from the official site () or from the SourceForge project page ().
Once you have downloaded the zipped package, extract the
Box2D folder (you can find it inside the
Source folder) into the same folder you are using for your project. The following is how your awesome game folder should look before you start coding:
You can see the
Box2D folder, the FLA file that I am assuming has a document class called
Main and therefore
Main.as, which is the class we will work on.
I would suggest you work on a 640 x 480 Flash movie at 30 frames per second (fps). The document class should be called
Main and the examples will look better if you use a dark stage background color, such as
#333333. At least these are the settings I am using throughout the book. Obviously you can change them as you want, but in that case your final movies may look a bit different than the examples shown in the book.
Now let's import Box2D classes.
Box2D is free and open source, so you won't need to install components or deal with SWC files. All you need to do to include it in your projects is to include the required classes.
Open
Main.as and write the following code snippet:
package { import flash.display.Sprite import Box2D.Dynamics.*; import Box2D.Collision.*; import Box2D.Collision.Shapes.*; import Box2D.Common.Math.*; public class Main extends Sprite { public function Main() { trace("my awesome game starts here"); } } }
Test the movie and you should see my awesome game starts here in your Output window. This means you have successfully imported the required classes.
There isn't that much to say about the code we just wrote, as we are just importing the classes needed to make our Box2D project work.
When I gave the Hello Box2D World title, I did not mean to create just another "Hello World" section, but I wanted to introduce the environment where all Box2D simulation and events take place: the world.
The world is the stage where the simulation happens. Everything you want to be ruled by the Box2D physics must be inside the world. Luckily, the Box2D World is always big enough to contain everything you need, so you don't have to worry about world boundaries. Just remember everything on a computer has limits in one way or another. So, the bigger the world, the heavier will be the work for your computer to manage it.
Like all worlds, the Box2D World has a gravity , so the first thing you need to do is define world gravity.
In your
Mainfunction, add the following line:
var gravity:b2Vec2=new b2Vec2(0,9.81);
This introduces our first Box2D data type:
b2Vec2.
b2Vec2is a 2D column vector that is a data type, which will store x and y components of a vector. As you can see, the constructor has two arguments, both numbers, representing the x and y components. This way we are defining the
gravityvariable as a vector with
x=0(which means no horizontal gravity) and
y=-9.81(which approximates Earth gravity).
Physics says the speed of an object falling freely near the Earth's surface increases by about 9.81 meters per second squared, which might be thought of as "meters per second, per second". So assuming there isn't any air resistance, we are about to simulate a real-world environment. Explaining the whole theory of a falling body is beyond the scope of this book, but you can get more information by searching for "equations for a falling body" on Google or Wikipedia.
You can set your game on the move with the following line:
var gravity:b2Vec2=new b2Vec2(0,1.63);
You can also simulate a no gravity environment with the arguments set at
(0,0):
var gravity:b2Vec2=new b2Vec2(0,0);
We also need to tell if bodies inside the world are allowed to sleep when they come to rest, that is when they aren't affected by forces. A sleeping body does not require simulation, it just rests in its position as its presence does not affect anything in the world, allowing Box2D to ignore it, and thus speeding up the processing time and letting us achieve a better performance. So I always recommend to put bodies to sleep when possible.
Add the following line, which is just a simple Boolean variable definition:
var sleep:Boolean=true;
And finally, we are ready to create our first world:
var world:b2World = new b2World(gravity,sleep);
Now we have a container to manage all the bodies and perform our dynamic simulation.
Time to make a small recap. At the moment, your code should look like the following:
package { import flash.display.Sprite;); } } }
Now you learned how to create and configure a Box2D World. Let's see how can you simulate physics in it.
You need to run the simulation at every frame, so first of all you need a listener to be triggered at every frame.
Let's make some simple changes to our class:
package { import flash.display.Sprite; import flash.events.Event;); addEventListener(Event.ENTER_FRAME,updateWorld); } private function updateWorld(e:Event):void { trace("my awesome simulation runs here"); } } }
Nothing new, we just added an
ENTER_FRAMEevent, but we needed it in order to run the simulation inside the
updateWorldfunction. If you have doubts regarding event handling with AS3, refer to the official Adobe docs or get Flash Game Development by Example, Packt Publishing, which will guide you to a step-by-step creation of pure AS3 games.
Box2D simulation works by simulating the world at discrete steps of time. This means the world gets updated at every time step. It's up to us to decide which time step we are going to use for our simulation. Normally, physics in games have a time step of 1/60 seconds. Anyway, as I am running the Flash movie at 30 fps, I am going to set a time step of 1/30 seconds.
The first line into the
updateWorldfunction will be:
var timeStep:Number=1/30
Just defining a time step, is not enough. At every step, every physic entity is updated according to the forces acting on it (unless it's sleeping). The algorithm which handles this task is called constraint solver. It basically loops through each constraint and solves it, one at a time. If you want to learn more about constraints, search for "constraint algorithm" on Google or Wikipedia.
Where's the catch? While a single constraint is solved perfectly, it can mess with other constraints that have already been solved.
Think about two balls moving: in the real world, each ball position is updated at the same time. In a computer simulation, we need to loop through the balls and update their position one at a time. Think about a
forloop that updates a ball at every iteration. Everything works as long as the balls do not interact with each other, but what happens if the second ball hits the first, whose position has already been updated? They would overlap, which is not possible in a rigid body simulation.
To solve this problem with a reasonable approximation, we need to loop over all the constraints more than once. Now the question is: how many times?
There are two constraint solvers: velocity constraint solver and position constraint solver . The velocity solver is used to move physic entities in the world according to their impulses. The position solver adjusts physic entities' positions to avoid overlap.
So it's obvious that the more iterations there are, the more accurate the simulation, and the lower will be the performance. I managed to handle more than 100 physic entities using 10 velocity and position iterations, although the author of Box2D recommends 8 for velocity and 3 for position.
It's up to you to play with these values. Meanwhile, I'll be using
10iterations for both constraint solvers.
So here we go with two new variables:
var velIterations:int=10; var posIterations:int=10;
Finally we are ready to call the
Stepmethod on the
worldvariable to update the simulation.
To use
worldinside the
updateWorldfunction, we need to declare
worldas a class-level variable,); } } }
Now we have our world configured and running. Unfortunately, it's a very boring world, with nothing in it. So in the next chapter, we are going to populate the world with all kinds of physic entities.
Just one last thing, after each step you need to clear forces, to let the simulation start again at the next step.
You can do it by adding the following line right after the
Stepmethod:
world.ClearForces();Your final code is); world.ClearForces(); } } }
Tip
Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at. If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.
And now you are really ready to place some action in your Box2D World.
You have just learned how to install Box2D for Flash. Include it in your projects and create a running, gravity-ruled simulation managing time steps and constraint solvers.
You have an empty world ready to be the container where your awesome game will take place. Save it and use it in every future project! | https://www.packtpub.com/product/box2d-for-flash-games/9781849519625 | CC-MAIN-2021-39 | refinedweb | 1,824 | 70.23 |
- Vtype called repeatedly
- form items with ID shown incorrectly?
- Public training, or a guest at someone's on-site training?
- deleting last record in the last page in EditorGridPanel
- Google Maps set a marker and save Infos
- createDelegate - can it be used on a pre-declared function?
- Ext2.2: notifyEnter and notifyOut not reliable?
- Layout problem with multiple forms
- Help with events
- I met a problem about Ext.grid.GridPanel, Please help~~
- Question about calendar
- Dirty Flag set when Rowactions Icon Changes in Editor Grid
- Creating menu from JSON data(only last item is shown)
- can we change pagesize of result grid?
- How to load the store from a form load xml response
- Trying to feed tree obj with external page
- rule for initialising attribues on instances.
- setting global date format
- File Upload with charset UTF-8 for transfered filename
- Quicktip as info button
- Memory Usage
- Can't edit field after setValue
- sortField
- Issue with window maximizing or minimizing
- How to restrict the user from entering more than one dots in a number field?
- How to disable Grid column selection- simple but not sure how to do
- How to restore Grid Panel Vertical scroll bar position
- Popup form
- FormPanel having TabPanel, deferredRender weird behavior
- [Grid] disable a row issue in Chrome
- how change width from radio?
- Access raw JSON from JsonStore
- TextField Label Width
- Viewport - chaning content?
- how can I find parent of store in combobox?
- PDF in tab visualization
- UL Tag is not working properly inside Tabpanel
- Multiple stores for the same data.
- Ext.Ajax.Request() to invoke a servlet
- this gets html tag not the element
- Ext does not work over HTTP, but does work locally
- Set Grid Height After render
- Retrieve columns and fields for grid from server
- Firefox dotted border when you focus a radio button or checkbox
- Save Grid Multiple Selection data
- Changing data from a combox in a EditorGridPanel
- StoreMgr, grid and paging bar
- AsyncTreeNode expandChildNodes behavior
- Need assistance on cenverting time value
- getting the 'fields' of a JsonStore
- XTYPE Event Handler embedded inside a panel
- Grid MultiGrouping with with json multi result
- add record to store and highlight on grid
- Adding a custom widget to a Ext.Container
- Looping to create multiple windows... index variable is not 'frozen' / closure?
- Form Issue wit Fieldset
- treepanel in viewport missing scrollbars
- Aptana eclipse plugin problem and Spket IDE
- General layout of a multi-page website
- Hiding Menu Question.
- How to understand the file structure of Extjs
- Please help with Tablelayout
- SubTableRowExpander Problem
- quick json array question
- How to retrieve the sub data using JSON
- How to add a ext.button to the grid column header
- [Solved]Any limit to JSON records returned?
- Exception handling for AJAX and normal request together
- Disabling parent window
- How to display html images inside xtemplate conditionally
- minLenght TextField[SOLVED] // ScrollBar to DataView[NOT SOLVED]
- Auto focus
- Tooltip for panel collapsible symble
- Ext.get('xyz').title wrong?
- Include menu on row in grid?
- Tree node value
- Combobox filtered by input text
- How to display status image infront of status Label
- Empty grid, followed examples
- Ext.QuickTip Help - IE Zoom Property
- How to sort IPAdress in html grid.
- Grid / Pagination / Client state issues
- Data from popup form submission
- Grid bbar
- absurd previews on forms
- Combo Box + text Field Dynamic Validation
- ComboBox with HTML Content
- Tab height adjustment problem
- Nested Panel and namespace
- How to expand/collapse certain groups when the grid loads
- xtemplate and function
- File upload
- ArrayList to JSON, then to Tree
- ie script timeout in tabpanel
- Cross-frame JS
- The width of columns and its cells are not the same
- Help with Grids and JSON
- Issue with collapsible Panel
- GridPanel / Events
- Paging Grid using Json
- how cancel event in grid
- tbar scroll
- Attaching types to tabs
- referring to grid store in xtype
- Combo box typeahead problem
- Paging whit xml
- Column Width Ignored in Grid *WITHOUT* Paging Toolbar (bbar)
- Script causing IE to run slowly message in IE with slow loading dataset, help!
- Which way is better for performance?
- ext not processing under tomcat
- Ext.grid.CheckColumn
- Design/Architecture Questions
- Adding data to column tree
- Coordinates inside treenodes
- Ext.Panel how to write text inside panel?
- DataView DnD into GridPanel
- Grid Example (Tutorial:Introduction to Ext 2.0)
- Ext.Panel.FieldSet bug. html does not work.
- Dynamically Add to Accordion
- GridPanel with horizontal scroll without opened store?
- IPAddress sorring in Grid
- Unable to Enable checkbox on condition
- Combo Box + should not accept blind text.
- How to solve this error !
- Grid: How to change default sort direction?
- Label is not displaying in IE6.0 after "hideMode:'offsets' ".
- Date format
- Populate combo list from the forms loaded data?
- problem with Iphone Safari
- unable to use tree loader
- swfupload; changing tab stops file uploads
- Hide/show grid rows
- ext-prototype-adapter.js problem(Ext-2.1)
- FIREFOX 3 problem FILTER GRID
- Columnmodel : Color a cell
- How to increase text field height in editorgridpanel
- Node could not be deleted
- Editable Tooltips
- record.commit changes focus in a grid that scrolls
- TREE AND IE7
- auto scrollOffset? when grid has scrollbar put offset?
- how can i add extplorer to extjs like a component?
- Spotlight & Center Form
- TreePanel TreeLoader
- DomHelper additional properties for elements
- Login Page Submit Problem
- [RESOLVED]HTMLEditor - Changing Enter key behavior
- [2.??] Some issue with fireEvent
- 2 plugin for checkbox in a grid
- Firing Event on HTML Element
- [SOLVED] Can I programatically check/uncheck menu items of a splitbutton
- Rendering of extjs combobox on Citrix issue
- Big problem with IE6 and IE7, NO FF and IFRAME
- programatically "fly out" expand panel
- (SOLED)Yahoo BOSS API Or Google API Implementation
- Is FieldLabel supported on TextField
- Layout Issue with Card
- Grid number renderer?
- Improvement of App Design / Code
- CSS issue w/ comboBox tpl, IE 6 only
- Applet in CardLayout?
- Field labels not displaying when inside of a column layout...
- Problem with building row of buttons
- Tabpanel + Multi-Mooslides instances
- GridView date and date store Date
- is 'Fit' layout possible with a DataView?
- Form validation
- Problem to use JQuery inside a ExtJS tab
- TreePanel: Remove Checkbox for few nodes
- [Solved]Combobox arrow rendering issue IE
- How to make title of a formpanel to the center...
- MessageBox to halt browser execution
- how use browserbutton get multi-file?
- problem Ext.Element.createChild
- How to add 2 sets of selected records from 2 grids into 3rd grid
- Data type for Timestamp in grid?
- Performance Issue with TabPanel.
- Paging NAN issue
- live search field not visible in IE7
- Problem fit formpanel in tab of viewport correctly
- [CANCELD] In Store - how call remove event from update event
- Accordion menu with submenu also accordion...
- combobox selection
- Combobox in each row of a grid (Diferents values for diferents records)
- Multiple buttons and functionality
- Ext.MessageBox.updateProgress
- Grid Layout issue in IE
- listen to all mouse over and mouseouts
- Variable scope while extending
- [SOLVED!] IE combo first render - no drop down?
- hidelabel sets difference in IE on Textfields
- datefield validation problem
- Problem ---Can not save edited data into database
- Grid with XML recover an attribute
- Newbie Resizable question
- [Solved]Tab Panel error
- getRowClass doesnt work with GroupingView
- Combo with a template issue
- Tbar in form
- Combo show the value and not the name
- [Solved]Side-by-side Grids in a TableLayout not working
- Some one knows why extplorer show error in internet explorer 7??
- Tree sorting problem
- Problem with TreePanel 'beforemovenode' event and asynchronous Ajax.request
- Row striping on a FormPanel?
- Ext.Button erro
- How can I fire afterlayout event in a grid panel ?
- problem with toolbar and button display
- Shadow on a non-floating Panel?
- Strange behavior
- soundManager2 and extjs
- Button value
- Messagebox returns
- BoxComponent: How to add listener/click handler?
- Adding extjs components to a window in a desktop
- [SOLVED] Grid not loading JSON data
- Issues with applyto on TextField
- Need layout advice
- Change component id?
- [Solved]IE Grid doesn't resize with window expands too far
- Event names docs?
- Is there HTTPS support from CacheFly?
- Change remoteSort after load GridPanel
- how to do form submission with validation and immediately come out of formpanel tab
- How to show the data grids view in a tooltip?
- few requirements for the column Tree.
- how to enable/disable tabs on conditionality
- 'getting error in reloading tree'
- multicolumn drag-drop
- [SOLVED][2.2] Refreshing form panel, combobox and datefield stop working
- Ext.ux.Toast IE7.0 problem
- Can I use one store and filter it differently for two different views?
- EditorGridPanel DateField Firefox horizontal scrolling problem
- Hide grid rows depending on record value (getRowClass occupied)
- load tree on click
- Calendar doesn't work with my combos
- Fish Eye Menu
- Get button's parent
- [2.2] how to completely delete a panel?
- Store.load() and GMapPanel
- Asp.Net WebMethod + ExtJs DataStore
- ExplorerView issue with VisualStudio 2008
- Grid field names to post
- [Solved]Displaying Traditional Chinese from backend
- Tricky problem mouseover event
- Problem using qtip to display text containing double quotes
- [SOLVED] Search Panel (For Yahoo API) Integration in a Window
- Convert Byte Array to Image | https://www.sencha.com/forum/archive/index.php/f-9-p-107.html?s=c9132f88602e25e732f69531fa133ebc | CC-MAIN-2020-10 | refinedweb | 1,495 | 51.48 |
i m trying to build a turn based text game but im kinda of confused on the menu's
ok here is my issue i want to have 4 menu's in my game
menuDisplay
selectMenu
fightMenu
shopMenu
now what i want is the menuDisplay to be the welecome menu where the user make
begins. the "the selectMenu " will display the fight ,save,shop and quit.
ok her is my dilema is im tryin to figure out is how can i move from the switch to the next menu and continue into the next menu once a player has entered his name
the two issues are what to i need to establish if .else or another switch maybe a do while? im not sure
also ,
how can i code a ck to return if the user has selected his name properly
example: user enters 'eric' i want it to say "eric is this correct" ? y- for yes n-no an be able to return if entered incorrectly
#include <iostream> #include <string> #include <fstream> using namespace std; int main() { ifstream loadFile; string playerName; char menuDisplay; char selectMenu; char letter; //menuDisplay cout << "Welcome to Space Gladiator Coliseum" << endl; while (menuDisplay); { //display menu cout << "What would you like to do?" << endl; cout << "(enter the letter of your choice)" << endl; cout << "Press 'N' - To Begin a New Game" << endl; cout << "Press 'L' - To Load a Previous Game " << endl; cout << "Press 'Q' - To Quit" << endl; //choice cin >> letter; switch (letter) { case'n': case'N': cout << " Please Enter Your Fighters Name: " << endl; cin >> playerName; if ( playerName < 10 ); { return playerName; cout << playerName << " Let's Begin " << endl; } else { cout << " You have exceed 10 characters for a name." << endl; cout << " Please Re-enter Name " << endl; //problem------> //how to return user name to see if correct //how to move to next menu } return 0; } } } | https://www.daniweb.com/programming/software-development/threads/112427/help-with-menu-for-game-in-c | CC-MAIN-2017-30 | refinedweb | 304 | 59.3 |
Misago brings its own admin site, just like Django does. This means you have to decide which one your app will use for its administration.
If you intend to be sole user of your app, Django admin will propably be faster to get going. However if you plan for your app to be available to wider audience, its good for your admin interface to be part of Misago admin site. This will require you to write more code than in case you've went Django way, but will give your users more consistent experience and, in case for some languages, save them of quirkyness that comes with Django admin automatically created messages.
Unlike Django, Misago admin is not "automagical". This means you will not get complete admin from nowhere by just creating one file and writing 3 lines of code in it. However Misago provides set of basic classes defined in in
misago.admin.views.generic module that can offload most of burden of writing views handling items lists and forms from you.
Workflow with those classes is fast and easy to master. First, you define your own mixin (probably extending
AdminBaseMixin). This mixin will define common properties and behaviours of all admin views, like which model are admin views focused on, how to fetch its instances from database as well as where to seek templates and which message should be used when model could not be found.
Next you define your own views inheriting from your mixin and base views. Misago provides basic views for each of most common scenarios in admin:
ListView for items lists. Supports pagination, sorting, filtering and mass actions.
FormView and
ModelFormView for displaying and handling forms submissions.
ButtonView for handling state-changing button presses like "delete item" actions.
Base class for admin mixins that contain properties and behaviours shared between admin views. While you are allowed to set any properties and function on your own mixins to dry your admin views more, bare minimum expected from you is following:
model property or
get_model(self) method to get model type.
root_link property that is string with link name for "index" view for admin actions (usually link to items list).
templates_dir property being string with name of directory with admin templates used by mixin views.
Optionally if you don't plan to set up action-specific item not found messages, you may set
message_404 property on mixin to make all your views use same message when requested model could not be found.
Base class for lists if items. Supports following properties:
template_name - name of template file located in
templates_dir used to render this view. Defaults to
list.html
items_per_page - integer controlling number of items displayed on single page. Defaults to 0 which means no pagination
filter_form - Form type used to construct form for filtering this list. Either this field or
get_filter_form method is required to make list filterable.
ordering - list of supported sorting methods. List of tuples. Each tuple should countain two items: name of ordering method (eg. "Usernames, descending") and
order_by argument (
-username). Defaults to none which means queryset will not be ordered. If contains only one element, queryset is ordered, but option for changing ordering method is not displayed.
mass_actions - list of dicts defining list's mass actions. Each dict should have
action key that will be used to identify method to call,
name for displayed name,
icon for icon and optional
confirmation message. Actions can define optional "is_atomic" key to control if they should be wrapped in transaction or not. This is default behaviour for mass actions.
selection_label - Label displayed on mass action button if there are items selected.
0 will be replaced with number of selected items automatically.
empty_selection_label - Label displayed on mass action button if there are no items selected.
In addition to this, ListView defines following methods that you may be interested in:
This function is expected to return queryset of items that will be displayed. If filters, sorting or pagination is defined, this queryset will be further sliced and filtered.
Class method that allows you to add custom links to item actions. Link should be a string with link name, not complete link. It should also accept same kwargs as other item actions links.
Class method that allows you to add custom mass action. Action should be name of list method that will be called for this action. Name will be used for button label and optional prompt will be used in JavaScript confirmation dialog that will appear when user clicks button.
This function is used to get filter form class that will be used to construct form for filtering list items.
If you decide to make your list filterable, remember that your
Form must meet following requirements:
Must define
filter_queryset(self, criteria, queryset) method that will be passed unfiltered queryset, which it should modify using filter/exclude clauses and data from
criteria.
Must return queryset.
Must not define fields that use models for values.
If you add custom mass action to view, besides adding new entry to
mass_actions list, you have to define custom method following this definition:
ACTION will be replaced with action dict
action value. Request is
HttpRequest instance used to call view and
items is queryset with items selected for this action. This method should return nothing or
HttpResponse. If you need to, you can raise
MassActionError with error message as its first argument to interrupt mass action handler.
Base class for forms views.
template_name - name of template file located in
templates_dir used to render this view. Defaults to
form.html
form_class property or get_form_class method -
get_form_class method is called with
request as its argument and is expected to return form type that will be used by view. If you need to build form type dynamically, instead of defining
form_class property, define your own
get_form_class.
Returns form type that will be used to create form instance. By default returns value of
form_class property.
Initializes either bound or unbound form using request and
form_class provided.
If form validated successfully, this method is called to perform action. Here you should place code that will read data from form, perform actions on models and set result message. Optionally you may return
HttpResponse from this function. If nothing is returned, view returns redirect to
root_link.
Optionally your form template may have button with
name="stay" attribute defined, pressing which will cause view to redirect you to clean form instead.
Base class for targetted forms views. Its API is largery identic to
FormView, except it's tailored at handling
ModelForm and modifying model states. All methos documented for
FormView are present in
ModelformView, but they accept one more argument named "target", containing model instance to which model form will be tied.
In addition, this view comes with basic definition for form handler that calls
save() on model instance and (if defined) sets success message using value of objects
message_submit parameter.
Base class for handling non-form based
POST requests.
Do control this view behaviour, define your own
button_action method:
This function is expected to perform requested action on target provided and set result message on
request.
It may return nothing or
HttpResponse. If nothing is returned, view returns redirect to
root_link instead.
Both
ModelFormView and
ButtonView are called "targeted views", because they are expected to manipulate model instances. They both inherit from
TargetedView view, implements simple API that is used for associating request with corresponding model instance:
Function expected return valid model instance or None. If None is returned, this makes view set error message using
message_404 attribute and returns redirect to
root_link.
Called by
get_target_or_none.
If
kwargs len is 1, its assumed to be value of seeked model pk value. This makes function call model manager
get() method to fetch model instance from database. Otherwhise "empty" instance is created and returned instead. Eventual
DoesNotExist errors are handled by
get_target_or_none.
Once model instance is obtained either from database or empty instance is created, this function is called to see intended action is allowed for this request and target. This function is expected to return
None if no issues are found or string containing error message. If string is returned, its set as error messages, and view interrupts its execution by returning redirect to
root_link.
While target argument value is always present, you don't have to do anything with it if its not making any sense for your view.
In addition, views are wrapped in database transaction. To turn this behaviour off, define
is_atomic attribute with value
False.
Each view calls its
process_context method before rendering template to response. This method accepts two arguments:
request - HttpRequest instance received by view.
context - Dict that is going to be used to render template.
It's required to return dict that will be then used as one of arguments to call
render().
Misago Admin Site is just an hierarchy of pages, made of two parts:
site that contains tree of links and
urlpatterns that is included in
misago:admin namespace.
When Misago is started, it scans registered apps for
admin module, just like Django admin does. If module is found, Misago checks if it defines
MisagoAdminExtension class. If such class is found, its instantiated with no arguments, and two of its methods are called:
This function allows apps to register new urlpatterns under
misago:admin namespace.
This function allows apps to register new links in admin site navigation.
misago.admin.urlpatterns.URLPatterns available as
urlpatterns argument passed to
register_urlpatterns method. This object exposes two methods as public api:
Registers new namespace in admin links hierarchy.
path - Path prefix for links within this namespace. For example
r'^users/'.
namespace - Non-prefixed (eg. without
misago:admin part) namespace name.
parent - Optional. Name of parent namespace (eg.
users).
Registers urlpatterns under defined namespace. Expects first argument to be name of namespace that defined links belong to (eg.
users:accounts). Every next argument is expected to be valid Django link created with
url function from
django.conf.urls module.
misago:admin prefix of namespaces is implicit. Do not prefix namespaces passed as arguments to those functions with it.
Your urls have to be discoverable by your users. Easiest way is to do this is to display primary link to your admin action in admin site navigation.
misago.admin.hierarchy.AdminHierarchyBuilder class available as
site argument passed to
register_navigation_nodes method of your
MisagoAdminExtension class. It has plenty of functions, but it's public api consists of one method:
add_node(name=None, icon=None, parent=None, after=None, before=None, namespace=None, link="index")
This method accepts following named arguments:
parent - name of parent namespace under which this action link is displayed. Should exclude the
misago:admin part.
after - link before which one this one should be displayed. Should exclude the
misago:admin, but has to include
link part, eg.
users:index.
before - link after which one this one should be displayed. Should exclude the
misago:admin, but has to include
link part, eg.
users:index.
namespace - this link namespace.
link - link name, defaults to
index.
name - page title.
icon - link icon (see available icons list).
Only last three arguments are required.
after and
before arguments are exclusive. If you specify both, this will result in an error.
Misago Admin supports two levels of hierarchy. Each level should corelate to new namespace nested under
misago:admin. Depending on complexity of your app's admin, it can define links that are one level deep, or two levels deep.
Below code will register new top-level link that will appear after "Themes" and link to
misago:admin:payments:index:
site.add_node(name=_("Payments"),icon="fa fa-money",after="themes:index",namespace="payments",)
Below code will register new link under the "Settings" link that will link to
misago:admin:settings:profile-fields:index and appear under
misago:admin:settings:attachment-types:index:
site.add_node(name=_("Profile fields"),parent="settings",after="attachment-types:index",namespace="profile-fields",)
Other way to make your views reachable is to include links to them on items lists. To do this, you may use
add_item_action classmethod of
ListView class that is documented above. | https://misago.gitbook.io/docs/writingnewadminactions | CC-MAIN-2021-21 | refinedweb | 2,020 | 57.06 |
Advanced settings for peeled images¶
Peeloff origin¶
First, it is possible to change the origin relative to which the
peeling-off, and therefore the image extent, are defined. This is set to
(0., 0., 0.) by default, but you can change it using:
image.set_peeloff_origin((x, y, z))
where
x,
y, and
z are floating-point values giving the cartesian
coordinates of the peeloff origin. This can be used for example when doing
radiative transfer calculations on a simulation, in order to create images
centered on different sources.
Inside observers¶
It is also possible to specify that the images should be calculated from the point of view of an observer that is inside the coordinate grid, rather than at infinity. For example, if you are calculating a model of the Milky-Way Galaxy, you could place yourself at the position of the Sun using:
from hyperion.util.constants import kpc image.set_inside_observer((8.5 * kpc, 0., 0.))
Note that in this case, the peeloff origin cannot be set, and the image limits, rather than being in cm, should be given in degrees on the sky. Note also that, like sky coordinates, the x range of the image limits should be inverted. For example:
image.set_image_limits(65., -65., -1., 1.)
will produce an image going from l=65 to l=-65, and from b=-1 to b=1. Note that only images (not SEDs) can be computed for inside observers.
Image depth¶
Finally, it is possible to restrict the depth along the line of sight from which photons should be included, with the default being:
image.set_depth(-np.inf, np.inf)
The minimum and maximum depth are measured in cm. For standard images, the depth is taken relative to the plane passing through the origin of the peeling-off. For images calculated for an observer inside the grid, the default is:
image.set_depth(0., np.inf)
where the depth is measured relative and away from the observer.
Note that in this mode, the optical depth used to calculate the peeling off is the total optical depth to the observer, not just the optical depth in the slab considered. The slab is only used to determine which emission or scattering events should be included in the image. | http://docs.hyperion-rt.org/en/stable/advanced/peeloff.html | CC-MAIN-2021-17 | refinedweb | 375 | 62.98 |
After setting up my solution, my next step was to figure out how to display a list in iOS with the ability to select an item. In WPF, my first thought would be to utilize the ListBox control, bind its ItemsSource to the underlying data, and define an ItemTemplate for how I want each item to look. Not so simple in iOS. There’s a nice list control called the UITableView that provides support for a number of neat things (indexed list, splitting the items into sections/grouping them, selection, etc…). However, to accomplish the most important part, hooking it up to data, you have to define a data source object that you assign to the .Source property of the UITableView. There are a number of ways to accomplish this, but I went with creating a source that inherits from UITableViewSource. Xamarin has a nice guide that I used for guidance (Working with Tables and Cells).
Coming from WPF and MVVM I wanted to make this source reusable rather than following all the examples that were hardcoded to a specific object type. So I decided to create an ObjectTableSource, and since I’m still in the early stages of development I decided to make use of the built-in styles for the cell appearance rather than making a custom cell. With this in mind I needed to make sure that the objects provided to my table source had specific properties for me to utilize so I created an interface called ISupportTableSourceand used that as the .
public interface ISupportTableSource { string Text { get; } string DetailText { get; } string ImageUri { get; } }
public class ObjectTableSource : UITableViewSource
When inheriting from UITableViewSource you must override RowsInSection and GetCell. RowsInSection is exactly what it sounds like, you return how many items are in that section. My current version only supports 1 section so it returns the total number of items. GetCell returns the prepared UITableViewCell. The UITableView supports virtualization of its cell controls so in order to get the cell you need to call the DequeueReusableCellmethod on the table view. In versions earlier than iOS 6 this will return null if the cell hasn’t been created yet. In iOS 6 and later you can choose to register a cell type with the UITableView and that will make it so a cell is always returned. However, going this path means that you can’t specify which of the 4 build in styles to use (since it is specified in the constructor only), so I refrained from registering the cell type and handle null. When preparing the cell I also loaded any images on a background thread so the UI is still responsive, but I’ll cover that in another post.
public override int RowsInSection( UITableView tableview, int section ) { return _items.Count; } public override UITableViewCell GetCell( UITableView tableView, NSIndexPath indexPath ) { // if there are no cells to reuse, create a new one UITableViewCell cell = tableView.DequeueReusableCell( CellId ) ?? new UITableViewCell( _desiredCellStyle, CellId ); ISupportTableSource item = _items[indexPath.Row]; if ( !String.IsNullOrEmpty( item.Text ) ) { cell.TextLabel.Text = item.Text; } if ( !String.IsNullOrEmpty( item.DetailText ) ) { cell.DetailTextLabel.Text = item.DetailText; } if ( !String.IsNullOrEmpty( item.ImageUri ) ) { cell.ImageView.ShowLoadingAnimation(); LoadImageAsync( cell, item.ImageUri ); } return cell; }
The remaining thing for making this usable was to provide a way to notify when an item has been selected. There isn’t any event on the UITableViewSource like I expected, instead I needed to override the RowSelectedmethod and fire my own event providing the object found at the selected row.
public override void RowSelected( UITableView tableView, NSIndexPath indexPath ) { ISupportTableSource selectedItem = _items[indexPath.Row]; // normal iOS behaviour is to remove the blue highlight tableView.DeselectRow( indexPath, true ); OnItemSelected( selectedItem ); } | http://blogs.interknowlogy.com/2013/10/07/working-with-uitableview-xamarin-ios/ | CC-MAIN-2021-10 | refinedweb | 609 | 51.18 |
Is there a blog that highlights the stupidest Wikipedia events? I subscribe to the RSS feed of the Lamest edit wars article, but I find that insufficient.
Repository of All Human Knowledge - In Anime.
Our work here will not be complete until every Wikipedia page contains an "In Anime" sub-section.
Tags: doomed, firstperson, furries, wiki, www
Current Music: The Juan MacLean -- Human Disaster ♬
16 Responses:
I submit this (Something Positive nominated for deletion)
Well, the "In Amine" thing is just one aspect of Wikipedia's often-bloated "In Popular Culture" sections.
Not that documenting popular culture is a bad thing for an encyclopedia, but often what should be a few of the most prominent examples turns instead into an indiscriminate mess.
How could you not enjoy reading about how a thousand obscure webcomics used something as a pop culture reference?
Looking back at the Previouslies, a number of the wikigroans seem to have been fixed, either by the real-world article being improved, or the fandom article collapsing under its own weight into an illegible list of issues/episodes/quotes/etc. I wonder if this was a conscious response to this perception of Wikipedia on the part of committed editors fixing the arguably worthwhile pages, or just a natural result of fan infighting destroying the fictional reference pages.
I'd generally hope that they'd be improved by people improving the real-world article. It's not like Wikipedia is running out of space and so needs to not carry articles about fannish things.
They clutter up articles, categories, and search results, though. This is all the worse when the only reason Wikipedia has some semblance of organisation is because of significant elbow grease.
Besides, if the wannabe-Japanese want to obsess over bad animation, that's what wikia are for. The trekkies, Star Wars fans, and---hell---even furries seem to manage it.
Clearing out inane cross-references is good editing. It pollutes the more serious topics and the factoids shouldn't even survive on the subjects they reference (as illustrated by the "in popular culture" xkcd).
But a lot of wikigroans still exhibit a conflict of interest: editors must delete / merge / trim down minutiae in order to bee good editors... b-b-b-b-but Optimus Prime is so awesome, he deserves 3 separate pages!
The hard problem is getting the volume of contributions on material subjects eclipse the huge catalogs of pure entertainment subjects. I think that attacking the later by beefing up notability requirements lead to huge nerdrage and also some of my favorite wikigroans where editors show even more astonishing bias and capricious reasoning.
I think there's a good lesson as to what practices give legitimacy to the site as well as keep the community from fragmenting. Everything2 vs Urbandictionary - you can find horrendous loads of absolute shit on UD, but it is younger and more widely referenced than E2. Allow more subjects, but contain and consolidate them. Even to the point of segregating articles based on category and importance.
If neither E2 nor UD existed, the world would not be less for it.
Whereas for all its flaws, Wikipedia is clearly something the world would be less without. Of course that's part of what makes its flaws so interesting...
I've never not found what I wanted at the top of my search because of an anime entry or some other "frivolous" thing. YMMV, of course.
Just like how all the irrelevant garbage on the internet as a whole clutter up search results there! Oh, wait, we solved that problem.
>For the practical concerns of today, Mediawiki's search is no PageRank.
First of all, this is more of an argument to vastly improve the search engine, not to constrain the type of content an article can hold.
Secondly, your example has nothing to do with fancruft! Even if all the in popular culture sections were removed, search results still wouldn't be relevant.
>Better search algorithms do nothing to keep categories useful.
Categories were never useful. Yahoo proved this a decade ago.
>Better searching does nothing to keep articles uncluttered from extraneous sections, either. If nothing else, think of the page load times.
Oh please. Static HTML renders at light speed, compared to javascript-riddled new media crap that everyone's so fond of today.
>Mediawiki performs operations involving the entire document space (obvious example: backlinks, given that all links are embedded)
What are backlinks? In event, this is just bad design. If your design doesn't scale, this does not mean "give up", it means "fix your broken shit".
Look, I've done studies on this (there's a book chapter yet to show up there because this just reminded me to add it, which is my one positive takeaway for getting dragged into another idiotic online argument). Editors expend considerable effort on maintaining categories. They get used. (Hell, science aside, I use them.)
I'm not getting into the rest, because I'm pretty sure this is not the place. If you have an insatiable desire to flame me, my own journal is over there.→
If nothing else, think of the page load times.
Paleoconservatism, at one point one of the longest articles on wikipedia, loads for me in about two seconds.
Pokèmon, miniscule by comparison, loads for me in about two seconds.
I'd say the biggest problems the entries cause are namespace collision, and the potential for confusion between the real-world events and cultural impact of fictional work and in-universe events/characters/places/etc.
I think Wikia is actually really good for solving this problem... Wikipedia can maybe do a quick mention of a fictional character or episode with the link out to the more detailed and universe-specific entry on the relevant fiction-targeted wiki. And putting it under another domain name "demotes" the content a bit relative to Wikipedia's role as encyclopedia, as far as people looking things up is concerned. | https://www.jwz.org/blog/2009/12/repository-of-all-human-knowledge-in-anime/ | CC-MAIN-2016-50 | refinedweb | 1,000 | 63.19 |
Update: The syntax is likely to change soon. Apparently my update does not sit well with the Clojure folks and Hy has a strong precedent with Clojure syntax. So just consider this content “conceptual” until that is hammered out.
My patch to extend Hy’s function argument definitions has landed. It was merged in to Hy over the weekend so you’ll need to check out the master branch to follow along with this post. You can grab it here.
In lisp-like languages a function defines the arguments it receives using a regular-looking list like so:
(defun foo (a b c &rest xs) ..)
This list is typically referred to in the CLHS as a lambda list. And that funny little symbol in there that begins with the ampersand? That’s called a lambda list keyword. They’re basically like markup for argument definitions.
I really pushed hard to add this feature to Hy because I’m a firm believer that explicit is better than implicit and lambda list keywords are very explicit.
In Hy we currently support three keywords: &rest, &key, and
&kwargs. They map their arguments one-to-one with
*args,
foo=bar, and
**kwargs respectively.
So now you can write a Hy function that looks like this:
(defn foo [x &key {bar "baz" answer 42} &kwargs kw] ...) (defn bar [x &rest xs] ...)
Which will be equivalent to:
def foo(x, bar="baz", answer=42, **kw): ... def bar(x, *xs): ...
Keep in mind that that the normal Python restrictions on argument
definitions still apply. If you’re on 2.x then you cannot follow
&rest with
&key! It will work on Python 3.x but be aware that
we’re not actually generating the AST for
kwonlyargs which means
that if you call
foo with positional arguments that the positional
parameters will be filled first before the
varargs. I think that
&optional should map to
kwonlyargs which will fix this for Python
3 users. Of course, ideas and suggestions are welcome on this.
It is worth noting now that we haven’t figured out how to call functions using starargs or even with keywords yet. However you can use (the slightly ugly) KWAPPLY function:
(kwapply (foo "hello, world") {bar "foolius polonius"})
Which isn’t great but it’s a start!
If any of this is interesting to you then join us on
irc.freenode.org on
#hy and chat with us! Grab a checkout of Hy
and contribute! | https://agentultra.com/blog/defining-function-arguments-in-hy/ | CC-MAIN-2017-30 | refinedweb | 411 | 75.3 |
Can anyone please guide me where I should get my dad's yr 2000 city's clutch/pressure plate changed.
Also since it is worn out and needs replacing, are there any high performance clutch/pressure plate brands available in the market? model/make plus where I'd find em and where to get the work done from info would be highly appreciated.
Thanks and Regards,
Sir Excedy clutch/pressure plates are available in KHI.
y do u need performance clutch?ur dad going to turbo the ride to 8psi ?
if not, go to shahid auto's or honda center and ask for genuine clutch plate, pressure plate and clutch bearing
Ni hash bhai, I find that sometimes these last longer than the genuine ones for a moderate increase in cost. Thanks for the info.
LOL hash...delta...BTW shahid auto's in in G-10..:),It had cost me Rs.5500 in 2009,do lemme know how much it is now.
Sir Excedy clutch/pressure plates are available in KHI. air jordan retroNike air max bas prix<?xml:namespace prefix = o<o:p></o:p> | https://www.pakwheels.com/forums/t/city-2000-clutch-pressure-plate-work/140468 | CC-MAIN-2018-05 | refinedweb | 186 | 85.08 |
Created on 2010-09-16 13:25 by ncoghlan, last changed 2010-11-30 15:53 by ncoghlan. This issue is now closed.
As per python-dev discussion in June, many Py3k APIs currently gratuitously prevent the use of bytes and bytearray objects as arguments due to their use of string literals internally.
Examples:
urllib.parse.urlparse
urllib.parse.urlunparse
urllib.parse.urljoin
urllib.parse.urlsplit
(and plenty of other examples in urllib.parse)
While a strict reading of the relevant RFCs suggests that strings are the more appropriate type for these APIs, as a practical matter, protocol developers want to be able to operate on ASCII supersets as raw bytes.
The proposal is to modify the implementation of these functions such that string literals are used only with string arguments, and bytes literals otherwise. If a function accepts multiple values and a mixture of strings and bytes is passed in then the operation will still fail (as it should).
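A minimal illustration of the literal-selection idea (a toy helper written for this note, not code from the patch): pick str or bytes literals based on the argument's type, so each input type gets matching literals and mixed str/bytes inputs fail naturally with a TypeError:

```python
def split_query(url):
    """Toy example: split a URL-ish value into (path, query),
    preserving the input type (str in -> str out, bytes in -> bytes out)."""
    if isinstance(url, str):
        delim = '?'
    else:
        delim = b'?'
    # partition works identically for str and bytes, as long as the
    # separator literal matches the input type
    path, _, query = url.partition(delim)
    return path, query

assert split_query('/index?a=1') == ('/index', 'a=1')
assert split_query(b'/index?a=1') == (b'/index', b'a=1')
```

The real functions need the same switch for every literal they touch, which is where the boilerplate discussed below comes from.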
From the python-dev thread:
==============
So the domain of any polymorphic text manipulation functions we define would be:
- Unicode strings
- byte sequences where the encoding is either:
- a single byte ASCII superset (e.g. iso-8859-*, cp1252, koi8*, mac*)
- an ASCII compatible multibyte encoding (e.g. UTF-8, EUC-JP)
Passing in byte sequences that are encoded using an ASCII incompatible
multibyte encoding (e.g. CP932, UTF-7, UTF-16, UTF-32, shift-JIS,
big5, iso-2022-*, EUC-CN/KR/TW) or a single byte encoding that is not
an ASCII superset (e.g. EBCDIC) will have undefined results.
==================
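To illustrate why the domain is restricted this way (a demonstration written for this note, not from the thread): byte-level operations such as split only line up with the characters they target when the encoding is ASCII-compatible:

```python
url = "a/b"

# ASCII-compatible encodings: '/' is the single byte 0x2f, so a
# byte-level split matches the character-level one
assert url.encode('utf-8').split(b'/') == [b'a', b'b']
assert url.encode('latin-1').split(b'/') == [b'a', b'b']

# ASCII-incompatible: UTF-16 interleaves NUL bytes (plus a BOM), so the
# same byte-level split produces garbage, e.g. [b'\xff\xfea\x00', b'\x00b\x00']
chunks = url.encode('utf-16').split(b'/')
assert chunks != [b'a', b'b']
```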
The design approach (at least for urllib.parse) is to add separate *b APIs that operate on bytes rather than implicitly allowing bytes in the ordinary versions of the function.
Allowing programmers to manipulate correctly encoded (and hence ASCII compatible) bytes to avoid decode/encode overhead when manipulating URLs is fine (and the whole point of adding the new functions). Allowing them to *not know* whether they have encoded data or text suitable for display to the user isn't necessary (and is easy to add later if we decide we want it, while taking it away is far more difficult).
More detail at
Attached patch is a very rough first cut at this. I've gone with the basic approach of simply assigning the literals to local variables in each function that uses them. My rationale for that is:
1. Every function has to have some kind of boilerplate to switch based on the type of the argument
2. Some functions need to switch other values (such as the list of scheme_chars or the unparse function), so a separate object with literal attributes won't be enough
3. Given 1 and 2, the overhead of a separate namespace for the literal references isn't justified
I've also gone with a philosophy that only str objects are treated as strings and everything else is implicitly assumed to be bytes. This is in keeping with the general interpreter behaviour where we *really* don't support duck-typing when it comes to strings.
The test updates aren't comprehensive yet, but they were enough to uncover quite a few things I had missed.
Quoting is still a bit ugly, so I may still add a bytes->bytes/str->str variant of those APIs.
A possible duck-typing approach here would be to replace the "isinstance(x, str)" tests with "hasattr(x, 'encode')" checks instead.
Thoughts?
> A possible duck-typing approach here would be to replace the
> "instance(x, str)" tests with "hasattr(x, 'encode')" checks instead.
Looks more ugly than useful to me. People wanting to emulate str had better subclass it anyway...
Agreed - I think there's a non-zero chance of triggering the str-path by mistake if we try to duck-type it (I just added a similar comment to #9969 regarding a possible convenience API for tokenisation)
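To make that risk concrete (an illustrative case invented for this note, not from the patch): any object that happens to have an encode method would take the str path under the hasattr check, while the explicit isinstance check keeps it on the bytes path:

```python
class Token:
    """A hypothetical protocol token that happens to define encode()."""
    def __init__(self, raw):
        self.raw = raw

    def encode(self, encoding='ascii'):
        # returns the raw bytes regardless of encoding; the point is
        # only that the method exists
        return self.raw

t = Token(b'GET')
# The duck-typed check wrongly classifies it as text:
assert hasattr(t, 'encode')
# The explicit type check does not:
assert not isinstance(t, str)
```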
Added to Reitveld:.
> The primary reason for supporting ASCII compatible bytes directly is
> specifically to avoid the encoding and decoding overhead associated
> with the translation to unicode text.
I think it's quite misguided. latin1 encoding and decoding is blindingly
fast (on the order of 1GB/s. here). Unless you have multi-megabyte URLs,
you won't notice any overhead.
> I think it's quite misguided. latin1 encoding and decoding is blindingly
> fast (on the order of 1GB/s. here). Unless you have multi-megabyte URLs,
> you won't notice any overhead.
Ah, I didn't know that (although it makes sense now I think about it).
I'll start exploring ideas along those lines then. Having to name all
the literals as I do in the patch is really quite ugly.
A general sketch of such a strategy would be to stick the following
near the start of affected functions:
encode_result = not isinstance(url, str) # or whatever the main
parameter is called
if encode_result:
url = url.decode('latin-1')
# decode any other arguments that need it
# Select the bytes versions of any relevant globals
else:
# Select the str versions of any relevant globals
Then, at the end, do an encoding step. However, the encoding step may
get a little messy when it comes to the structured data types. For
that, I'll probably take a leaf out of the email6 book and create a
parallel bytes API, with appropriate encode/decode methods to
transform one into the other.
I don't understand why you would like to implicitly convert bytes to str (which is one of the worse design choice of Python2). If you don't want to care about encodings, use bytes is fine. Decode bytes using an arbitrary encoding is the fastest way to mojibake.
So You have two choices: create new functions with bytes as input and output (like os.getcwd() and os.getcwdb()), or the output type will depend on the input type (solution choosen by os.path). Example of ther later:
>>> os.path.expanduser('~')
'/home/haypo'
>>> os.path.expanduser(b'~')
b'/home/haypo'
From a function *user* perspective, the latter API (bytes->bytes, str->str) is exactly what I'm doing.
Antoine's point is that there are two ways to achieve that:
Option 1 (what my patch currently does):
- provide bytes and str variants of all constants
- choose which set to use at the start of each function
- be careful never to index, only slice (even for single characters)
- a few other traps that I don't remember off the top of my head
Option 2 (the alternative Antoine suggested and I'm considering):
- "decode" the ASCII compatible bytes to str objects by treating them as nominally latin-1
- use the same str-based constants as are used to handle actual str inputs
- be able to index to your heart's content inside the algorithm
- *ensure* that any bytes-as-pseudo-str objects are "encoded" back to actual bytes before they are returned
From outside the function, a user shouldn't be able to tell which approach we're using internally.
The nice thing about option 2 is to make sure you're doing it correctly, you only need to check three kinds of location:
- the initial parameter handling in each function
- any return statements, raise statements that allow a value to leave the function
- any yield expressions (both input and output)
The effects of option 1 are scattered all over your algorithms, so it's hard to be sure you've caught everything.
The downside of option 2 is if you make a mistake and let your bytes-as-pseudo-str objects escape from the confines of your function, you're going to see some very strange behaviour.
> Option 2 (the alternative Antoine suggested and I'm considering):
> - "decode" ... to str ...
> - ... objects are "encoded" back to actual bytes before
> they are returned
In this case, you have to be very careful to not mix str and bytes decoded to str using a pseudo-encoding. Dummy example: urljoin('unicode', b'bytes') should raise an error.
I don't care of the internals if you write tests to ensure that it is not possible to mix str and bytes with the public API.
Yeah, the general implementation concept I'm thinking of going with for option 2 will use a few helper functions:
url, coerced_to_str = _coerce_to_str(url)
if coerced_to_str:
param = _force_to_str(param) # as appropriate
...
return _undo_coercion(result, coerced_to_str)
The first helper function standardises the typecheck, the second one complains if it is given something that is already a string.
The last one just standardises the check to see if the coercion needs to be undone, and actually undoing the coercion.
As per RDM's email to python-dev, a better way to create the pseudo_str values would be by decoding as ascii with a surrogate escape error handler rather than by decoding as latin-1.
> As per RDM's email to python-dev, a better way to create the
> pseudo_str values would be by decoding as ascii with a surrogate
> escape error handler rather than by decoding as latin-1.
If you were worried about performance, then surrogateescape is certainly
much slower than latin1.
Yeah, I'll have to time it to see how much difference latin-1 vs surrogateescape makes when the MSB is set in any bytes.
>.
On Tue, Oct 5, 2010 at 5:32 PM, STINNER Victor <report@bugs.python.org> wrote:
>
> STINNER Victor <victor.stinner@haypocalc.com> added the comment:
>
>>.
I'm fairly resigned to the fact that I'm going to need some kind of
micro-benchmark to compare the different approaches. For example, the
bytes based approach has a lot of extra assignments to local variables
that the str based approach doesn't need.
The first step is to actually have a str-based patch to compare to the
existing bytes based patch. If the code ends up significantly clearer
(as I expect it will), we can probably sacrifice a certain amount of
speed for that benefit.
I wonder if Option2 (ascii+surrogateescape vs latin1) is only about
performance. How about escapes that might occur if the Option2 is
adopted. That might take higher priority than performance.
Do we know 'how tight' that approach is?
I've been pondering the idea of adopting a more conservative approach here, since there are actually two issues:
1. Properly quoted URLs are transferred as pure 7-bit ASCII (due to percent-encoding of everything else). However, most of the manipulation functions in urllib.parse can't handle bytes at all, even data that is 7-bit clean.
2. In the real world, just like email, URLs will often contain unescaped (or incorrectly escaped) characters. So assuming the input is actually pure ASCII isn't necessarily a valid assumption.
I'm wondering, since encoding (aside from quoting) isn't urllib.parse's problem, maybe what I should be looking at doing is just handling bytes input via an implicit ascii conversion in strict mode (and then conversion back when the processing is complete).
Then bytes manipulation of properly quoted URLs will "just work", while improperly quoted URLs will fail noisily. This isn't like email or http where the protocol contains encoding information that the library should be trying to interpret - we're just being given raw bytes without any context information.
If any application wants to be more permissive than that, it can do its own conversion to a string and then use the text-based processing. I'll add "encode" methods to the result objects to make it easy to convert their contents from str to bytes and vice-versa.
I'll factor out the implicit encoding/decoding such that if we decide to change the model later (ASCII-strict, ASCII-escape, latin-1) it shouldn't be too difficult.
Attached a second version of the patch. Notable features:
- uses a coercion-to-str-and-back strategy (using ascii-strict)
- a significantly more comprehensive pass through the urlparse test suite. I'm happy that the test suite mods are complete with this pass.
The actual coercion-to-str technique I used standardises the type consistency check for the attributes and also returns a callable that handles the necessary coercion of any results. The parsed/split result objects gain encode/decode methods to allow that all to be handled polymorphically (although I think the decode methods may actually be redundant in practice).
There's a deliberate loophole in the type consistency checking to allow the empty string to be type-neutral. Without that, the scheme='' default argument to urlsplit and urlparse becomes painful (as do the urljoin shortcuts for base or url being the empty string).
Implementing this was night and day when compared to the initial attempt that tried to actually manipulate bytes input as bytes. With that patch, it took me multiple runs of the test suite to get everything working. This time, the only things I had to fix were typos and bugs in the additional test suite enhancements. The actual code logic for the type coercions worked first time.
Unless I hear some reasonable objections within the next week or so, I'm going to document and commit the ascii-strict coercion approach for beta 1.
The difference in code clarity is such that I'm not even going to try to benchmark the two approaches against each other.
Just a note for myself when I next update the patch: the 2-tuple returned by defrag needs to be turned into a real result type of its own, and the decode/encode methods on result objects should be tested explicitly.
Related issue in msg120647.
urlunparse(url or params = bytes object) produces a result
with the repr of the bytes object if params is set.
urllib.parse.urlunparse(['http', 'host', '/dir', b'params', '', ''])
--> ";b'params'"
That's confusing since urllib/parse.py goes to a lot of trouble to
support both bytes and str. Simplest fix is to only accept str:
Index: Lib/urllib/parse.py
@@ -219,5 +219,5 @@ def urlunparse(components):
scheme, netloc, url, params, query, fragment = components
if params:
- url = "%s;%s" % (url, params)
+ url = ';'.join((url, params))
return urlunsplit((scheme, netloc, url, query, fragment))
Some people at comp.lang.python tell me code shouldn't anyway do str()
just in case it is needed like urllib does, not that I can make much
sense of that discussion. (Subject: harmful str(bytes)).
BTW, the str vs bytes code doesn't have to be quite as painful as in
urllib.parse. Here is a patch which just rearranges and factors out
some code.
New patch which addresses my last two comments (i.e. some basic explicit tests of the encode/decode methods on result objects, and urldefrag returns a named tuple like urlsplit and urlparse already did).
A natural consequence of this patch is that mixed arguments (as in the message above) will be rejected with TypeError.
Once I figure out what the docs changes are going to look like, I'll wrap this all up and commit it.
Committed in r86889
The docs changes should soon be live at:
If anyone would like to suggest changes to the wording of the docs for post beta1, or finds additional corner cases that the new bytes handling can't cope with, feel free to create a new issue. | http://bugs.python.org/issue9873 | CC-MAIN-2014-41 | refinedweb | 2,550 | 60.24 |
User:Jebba
- Name: Jeff Moe
- Nick: jebba, jebbajeb, jebba900, etc...
I have stopped development on Meego/Maemo, so these pages are getting quickly dated. Ciao!
- Usually I dot my laptop with READMEs in various ~/devel/ subdirs, but in this case I decided to write up some notes here. These are mostly for my own reference, but perhaps they will be of use to you.
- Feel free to take any of these pages and copy them over to the main wiki namespace (here or in any other wiki, for that matter).
[edit] Package Building HOWTO
The Package Building HOWTO now has its own page. :)
[edit] Kernel
The new kernel page
[edit] Freemoe git
git clone git://gitorious.org/freemoe/freemoe.git
[edit] SDK
[edit] Repositories
- User:Jebba/Etch - The Debian Etch rebuild.
[edit] Flashing
Separate page about flashing, for your perusal.
Note, the flash page has the info about "debricking" easily.
[edit] Wifi Hotspot
How to set up your N900 as a wifi hotspot to share it's connection with other computers. :)
[edit] Mer
[edit] Fedora
Fedora 12 on Nokia N900 (!!)
[edit] VNC
Over at User:Jebba/VNC.
[edit] Backups
[edit] Video
More space for talking about video at the new page.
[edit] Mirrors
I have shut down my mirrors.
A mini-HOWTO set up mirroring maemo repository content
[edit] DBUS
Now at User:Jebba/DBUS.
[edit] Gripes
[edit] Freemoe
I have a server co-located at NetDepot with a SDK installed. If you would like an account on there send me an email at moe@blagblagblag.org.
[edit] Bugs
Bugs now has its own page.
[edit] My Packages
There is now a separate Packages by Jebba page.
[edit] VoIP
See: User:Jebba/VoIP.
[edit] ofono
[edit] Cryptsetup
User:Jebba/Cryptsetup - HOWTO use crypto filesystem on N900.
[edit] Setup
I have move setup to its own page. A bit antiquated already!
[edit] sbdmock
My sbdmock page.
[edit] Tweaklets
[edit] Random
Punted over to the new Random page.
- This page was last modified on 1 May 2012, at 14:50.
- This page has been accessed 97,192 times. | http://wiki.maemo.org/User:Jebba | CC-MAIN-2017-17 | refinedweb | 346 | 68.26 |
Welcome to Masonite 2.3! In this guide we will walk you through how to upgrade your Masonite application from version 2.2 to version 2.3.
In this guide we will only be discussing the breaking changes and won't talk about how to refactor your application to use the awesome new features that 2.3 provides. For that information you can check the Whats New in 2.3 documentation to checkout the new features and how to refactor.
We'll walk through both Masonite upgrades and breaking packages as well
Craft is now a part of Masonite core so you can uninstall the masonite-cli tool. You now no longer need to use that as a package.
This is a little weird but we'll get craft back when we install Masonite 2.3
$ pip uninstall masonite-cli
Next, we can upgrade your Masonite version. Depending on your dependancy manager it will look something like this:
Change it from this:
masonite>=2.2,<2.3
to
masonite>=2.3,<2.4
Go ahead and install masonite now:
pip install "masonite>=2.3,<2.4"
Masonite changed the way the response is generated internally so you will need to modify how the response is retrieved internally. To do this you can go to your
bootstrap/start.py file and scroll down to the bottom.
Change it from this:
return iter([bytes(container.make('Response'), 'utf-8')])
to this:
return iter([container.make('Response')])
This will allow Masonite to better handle responses. Instead of converting everything to a string like the first snippet we can now return bytes. This is useful for returning images and documents.
Previously Masonite used several packages and required them by default to make setting everything up easier. This slows down package development because now any breaking upgrades for a package like Masonite Validation requires waiting for the the next major upgrade to make new breaking features and improve the package.
Now Masonite no longer requires these packages by default and requires you as the developer to handle the versioning of them. This allows for more rapid development of some of Masonite packages.
Masonite packages also now use SEMVER versioning. This is in the format of
MAJOR.MINOR.PATCH. Here are the required versions you will need for Masonite 2.3:
masonite-validation>=3.0.0
masonite-scheduler>=3.0.0
These are the only packages that came with Masonite so you will need to now manage the dependencies on your own. It's much better this way.
Masonite now uses a concept called guards so you will need a quick crash course on guards. Guards are simply logic related to logging in, registering, and retrieving users. For example we may have a
web guard which handles users from a web perspective. So registering, logging in and getting a user from a database and browser cookies.
We may also have another guard like
api which handles users via a JWT token or logs in users against the API itself.
Guards are not very hard to understand and are actually unnoticable unless you need them.
In order for the guards to work properly you need to change your
config/auth.py file to use the newer configuration settings.
You'll need to change your settings from this:
AUTH={'driver': env('AUTH_DRIVER', 'cookie'),'model': User,}
to this:
AUTH = {'defaults': {'guard': 'web'},'guards': {'web': {'driver': 'cookie','model': User,'drivers': { # 'cookie', 'jwt''jwt': {'reauthentication': True,'lifetime': '5 minutes'}}},}}
To manage the guards (and register new guards) there is the new
AuthenticationProvider that needs to be added to your providers list.
from masonite.providers import AuthenticationProviderPROVIDERS = [# Framework Providers# ...AuthenticationProvider,# Third Party Providers#...]
Masonite no longer supports SASS and LESS compiling. Masonite now uses webpack and NPM to compile assets. You will need to now reference the Compiling Assets documentation.
You will need to remove the
SassProvider completely from the providers list in
config/providers.py. As well as remove the
SassProvider import from on top of the file.
You can also completely remove the configuration settings in your
config/storage.py file:
SASSFILES = {'importFrom': ['storage/static'],'includePaths': ['storage/static/sass'],'compileTo': 'storage/compiled'}
Be sure to reference the Compiling Assets documentation to know how to use the new NPM features.
The container can no longer hold modules. Modules now have to be imported in the class you require them. For example you can't bind a module like this:
from config import authapp.bind('AuthConfig', auth)
and then make it somewhere else:
class ClassA:def __init__(self, container: Container):self.auth = container.make('AuthConfig')
This will throw a
StrictContainerError error. Now you have to import it so will have to do something like this using the example above:
from config import authclass ClassA:def __init__(self, container: Container):self.auth = auth
Now that we can no longer bind modules to the container we need to make some changes to the
wsgi.py file because we did that here.
Around line 16 you will see this:
container.bind('Application', application)
Just completely remove that line. Its no longer needed.
Also around line 19 you will see this line:
container.bind('ProvidersConfig', providers)
You can completely remove that as well.
Lastly, around line 31 you can change this line:
for provider in container.make('ProvidersConfig').PROVIDERS:
to this:
for provider in providers.PROVIDERS:
It's unlikely this effects you and query string parsing didn't change much but if you relied on query strings like this:
/?filter[name]=Joe&filter[user]=bob&[email protected]
Or html elements like this:
<input name="options[name]" value="Joe"><input name="options[user]" value="bob">
then query strings will now parse to:
You'll have to update any code that uses this. If you are not using this then don't worry you can ignore it.
Not many breaking changes were done to the scheduler but there are alot of new features. Head over to the Whats New in Masonite 2.3 section to read more.
We did change the namespace from
scheduler to
masonite.scheduler. So you will need to refactor your imports if you are using the scheduler.
You should be all good now! Try running your tests or running
craft serve and browsing your site and see if there are any issues you can find. If you ran into a problem during upgrading that is not found in this guide then be sure to reach out so we can get the guide upgraded. | https://docs.masoniteproject.com/upgrade-guide/masonite-2.2-to-2.3 | CC-MAIN-2020-34 | refinedweb | 1,079 | 66.64 |
This topic describes how to use Database Autonomy Service (DAS) to view and terminate the active sessions of ApsaraDB for MongoDB instances.
Procedure
- Log on to the DAS console.
- Connect an ApsaraDB for MongoDB instance to DAS and make sure that the instance is in the Accessed state.
- In the left-side navigation pane, click Instance Monitoring. Find the instance and choose Performance > Instance Sessions in the Actions column.
Features
- You can view the active sessions and all connections.
- You can terminate one or more selected sessions.
- The DAS console provides an overview of the session statistics, and shows the session statistics based on clients and namespaces. This provides a quick method for you to find the session details. For example, you can identify the queries where no indexes are used and the sessions that last for more than three seconds. You can also find the client that initiates the largest number of active sessions. | https://www.alibabacloud.com/help/doc-detail/102640.htm | CC-MAIN-2021-04 | refinedweb | 155 | 65.73 |
Opened 11 years ago
Closed 10 years ago
#1795 closed enhancement (fixed)
[patch] Addition of page_range in paginator.py and generic.list_detail.py
Description
When using generic views, the desired effect would be:
{% if is_paginated %}
<div class="tablecontrols">
{% for page_number in page_range %}
{% ifequal page_number page %}
{{ page_number }}
{% else %}
<a href="/myapp/clients/{{ page_number }}/" title="Go to page {{ page_number }}">{{ page_number }}</a>
{% endifequal %}
{% endfor %}
</div>
{% endif %}
to dump out a page list like:
Since some searching turned up no way to do this within the view and I imagine this to be a common problem, I patched "django/core/paginator.py" and "django/views/generic/list_detail.py" to incorporate a new context variable, "page_range". Paginator simply generates a 1-based range up to the total pages. Attached are the patch files.
Attachments (3)
Change History (13)
Changed 11 years ago by
Changed 11 years ago by
List_detail.py Patch
comment:1 Changed 11 years ago by
Yesterday I was having the same problem when I tried implementing that. In the end I looked how the django admin interface does it and copied bits of that code into my templatetag.
The "problem" with generating pages like this is that if you have hundreds of pages your page list will be too long and since I haven't found templatetag for {% ifbigger %} or {% ifsmaller %} you can't actually limit page_range to a usable range.
I would recommend extending it to actually accept two extra optional parameters that would also give range inside which there should be page range. Otherwise I think this is great idea and would quite simplify this.
comment:2 Changed 11 years ago by
comment:3 Changed 11 years ago by
comment:4 Changed 11 years ago by
comment:5 Changed 11 years ago by
0.93 has come and gone.
comment:6 Changed 10 years ago by
Milestone Version 1.0 deleted
comment:7 Changed 10 years ago by
I've marked this as Ready-for-checkin as I think it's fairly straight-forward and adds some useful functionality, but it'll need docs.
comment:8 Changed 10 years ago by
Needs documentation; otherwise +1 from me.
Changed 10 years ago by
Uniffied diff (code, test, docs)
comment:9 Changed 10 years ago by
Pagination is currently only explianed in generic_views so I added documentation there.
I also added a small test. Patch should be ready as per older comments that's the only missing thing.
Paginator.py Patch | https://code.djangoproject.com/ticket/1795 | CC-MAIN-2017-13 | refinedweb | 409 | 61.97 |
April 2014
Volume 29 Number 4
Modern Apps: What's New in Windows 8.1 for Windows Store Developers
Rachel Appel | April 2014
Windows 8.1 is a substantial update to the Windows OS, with many new enhancements and features to help you innovate and build the best creations possible. In this article, I’ll look at the new enhancements in Windows 8.1 for developers who build Windows Store apps.
More Ways to Do Windows and Tiles
Before Windows 8.1, your app could display in one of three modes: full view (landscape or portrait), filled view or snapped view. Users of Surfaces and other devices requested more control over window management, such as being able to view more than two apps concurrently. Therefore, to reflect the varied uses of the customer base, the Windows team responded by adding more ways to manage and organize windows and screen real estate. This means users can position windows side by side equally or in any proportions they want, and an app can also create multiple views that users can size and position independently.
Previously, in Windows 8 apps built with JavaScript, you’d use CSS media queries to control how the page lays itself out depending on its view state (full, filled or snapped). In Windows 8.1, this has changed so you only need CSS media queries that target the dimensions and orientations of the screen, as shown here:
@media screen and (min-width: 500px) and (max-width: 1023px) {
  /* CSS styles to change layout based on window width */
}
@media (min-width: 1024px) {
  /* CSS styles to change layout based on window width */
}
@media screen and (orientation: portrait) {
  /* CSS styles to change layout based on orientation */
}
This means you don’t need to adjust or query for specific app view states, as you did before. You only need to use media queries and set a minimum width and orientation, which you can do in the media query itself. The mandatory minimum height of any Windows app is 768 pixels.
Tiles usually bring the user to the app in the first place. Located in the Start menu, live tiles are an excellent modern feature of Windows and Windows Phone. No other platform has quite the same capability to show all your data in a real-time dashboard the way Windows does. That said, as a developer, you can extend your apps to use four different tile sizes: small, medium, wide and large, as shown in Figure 1.
Figure 1 New Live Tiles in Windows (Not to Scale)
The package manifest in Visual Studio contains a Visual Assets tab where you can configure the tile sizes, along with other visual assets such as the splash screen.
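Once the tile sizes are declared, an app keeps them live by sending XML notification payloads. The sketch below builds such a payload for the medium tile; the helper function and its strings are hypothetical, the template name comes from the Windows 8.1 tile template catalog, and in a real app the resulting XML would be loaded into an XmlDocument and handed to Windows.UI.Notifications.TileUpdateManager (not available outside the app host):

```javascript
// Hypothetical helper that builds the XML payload for a medium (150x150)
// tile notification. "TileSquare150x150Text02" is one of the Windows 8.1
// tile templates: a heading string (id 1) plus one line of body text (id 2).
function buildMediumTileXml(heading, body) {
  return [
    '<tile>',
    '  <visual version="2">',
    '    <binding template="TileSquare150x150Text02">',
    '      <text id="1">' + heading + '</text>',
    '      <text id="2">' + body + '</text>',
    '    </binding>',
    '  </visual>',
    '</tile>'
  ].join('\n');
}

console.log(buildMediumTileXml('Breaking news', 'Details of the story'));
```

In an app you would repeat the same pattern with a wide or large template name so Windows can pick the right binding for whichever tile size the user has pinned.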
New Visual Studio 2013 Project Templates Help You Build Modern Apps
As you might expect, with each Visual Studio release come new project templates. The new project templates for Windows Store apps built with JavaScript (using the Windows Library for JavaScript, or WinJS) include a Hub template, while the new XAML project templates include Hub, Portable Class Library and Coded UI Test.
The new Hub project template in both WinJS and XAML encapsulates a popular design approach I refer to as “modern.” Its default layout contains five different sections carefully crafted so you can offer varied visual arrangements of data to your users. The Hub layout makes it easier for users to scan and pinpoint what’s important to them. Designing a modern UI means you present data in ways different from previous non-modern, traditional techniques, with a focus on the user and usability. The Hub project does just that.
Inside the Hub project’s pages folder live three folders named hub, item and section. Each of these comes with its own corresponding .html, .js and .css files. In the XAML projects, there are equivalent pages named HubPage.xaml, SectionPage.xaml and ItemPage.xaml in the root folder. Figure 2 shows what the Hub project, featuring the Hub control, looks like at run time.
Figure 2 The Hub Project at Run Time
As you can see, the Hub project and control show a panorama view of nicely arranged content. It’s a sleek and modern design.
Updated and New HTML and XAML Controls for a Modern UI
New controls and control improvements in all project types make them simpler to use. These new and updated controls make it easier than ever to create and publish a modern app. In both HTML and XAML, there are performance and data-binding control enhancements. For a primer on Windows Store controls, see my MSDN Magazine article, “Mastering Controls and Settings in Windows Store Apps Built with JavaScript,” at msdn.microsoft.com/magazine/dn296546.
In the Hub project template comes the Hub control, new for both WinJS and XAML. The default template’s Hub control structures the UI layout with five sections to scroll through horizontally, all in the app’s starting page. The hero section is the crown jewel of the app, often used for presenting the featured news story, recipe, sports score, weather data or whatever it may be. It’s also the first thing a user will see after the splash screen. Provided as a starting point for the developer, the next four sections simply contain data items of varied sizes. Users can navigate to the listing of section 3’s group membership or to individual items in section 3. Of course, the Hub control is flexible and can accommodate any number of sections with any content. It’s designed to easily handle heterogeneous content of different kinds and from different sources, as opposed to strictly homogeneous content of similar data from the same source.
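In WinJS markup, a Hub is declared as a container of nested HubSection controls, one per section. Here is a minimal sketch; the section headers and placeholder content are illustrative, not the template's actual sections:

```html
<div data-win-control="WinJS.UI.Hub">
  <div data-win-control="WinJS.UI.HubSection"
       data-win-options="{ header: 'Hero', isHeaderStatic: true }">
    <!-- Featured "hero" content goes here -->
  </div>
  <div data-win-control="WinJS.UI.HubSection"
       data-win-options="{ header: 'Section 2' }">
    <!-- Additional content -->
  </div>
</div>
```

Setting isHeaderStatic to true renders a plain header; leaving it false makes the header interactive so users can navigate to the section's full listing.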
The Grid template relies only on the ListView control. Now the new Hub control contains an embedded ListView control so navigation works as you’d expect, to either the group listing or an individual item, depending on which item the user taps or clicks.
This ListView has many modern enhancements, including support for drag-and-drop operations. Alongside drag and drop is a ListView enhancement for reordering items. Simply set the itemsReorderable property of the ListView to true and no other code is required. The ListView includes several other enhancements, including improved cell spanning, better accessibility and better memory management.
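As a sketch, the declarative form of a reorderable ListView looks like the following; the Data.items data source is a hypothetical binding your app would expose:

```html
<!-- Data.items is a hypothetical WinJS.Binding.List exposed by the app -->
<div data-win-control="WinJS.UI.ListView"
     data-win-options="{ itemDataSource: Data.items.dataSource,
                         itemsReorderable: true }">
</div>
```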
While the ListView control has many new and shiny features, there’s another control worth mentioning: the Repeater. Several UI controls across the Microsoft .NET Framework use repeating controls. For example, there’s an ASP.NET Repeater control. Grid controls and the like exist throughout the .NET platform, customized to the varied ways you can build a UI with the .NET Framework. As you probably suspect, you can use the Repeater control to generate a list of items with styling from a data set. In WinJS, this means the Repeater will properly render just about any embedded HTML or WinJS controls. Figure 3 shows an example of the Repeater control. Note how it works much like a ListView, except that groups are no longer required. As you can see in the sample data in Figure 3, the JavaScript creates a simple array.
Figure 3 HTML and JavaScript That Builds a Simple Repeater Control
<!-- HTML -->
<div id="listTemplate" data-win-control="WinJS.Binding.Template">
  <li data-win-bind="textContent: title"></li>
</div>
<ul data-win-control="WinJS.UI.Repeater"
    data-win-options="{ data: RepeaterExample.basicList, template: select('#listTemplate') }">
</ul>
// JavaScript
(function () {
  "use strict";
  var basicList2 = new WinJS.Binding.List([
    { title: "Item 1" },
    { title: "Item 2" },
    { title: "Item 3" },
    { title: "Item 4" }
  ]);
  WinJS.Namespace.define("RepeaterExample", {
    basicList: basicList2
  });
})();
The NavBar is another control that improves the UX by providing menu options in a way that’s conducive to user interaction. Unlike the JavaScript menus on popular Web sites from days of yore, modern menu items are large and optimized for a variety of input devices. We’ve all seen someone not so skilled with a mouse struggle with those tiny, cascading Web site menus of the past. This means, as part of modern app UI design principles, the NavBar works well with touch input, a must-have feature for tablets. The user invokes the navigation toolbar by swiping from the top or bottom edges, using the Windows key+Z shortcut or by a right-click. If you’ve used the AppBar control, the NavBar control works almost identically.
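Declaratively, a NavBar hosts its commands inside a NavBarContainer. A minimal sketch follows; the label, icon and page location values are hypothetical:

```html
<div data-win-control="WinJS.UI.NavBar">
  <div data-win-control="WinJS.UI.NavBarContainer">
    <div data-win-control="WinJS.UI.NavBarCommand"
         data-win-options="{ label: 'Home', icon: 'home',
                             location: '/pages/home/home.html' }">
    </div>
    <div data-win-control="WinJS.UI.NavBarCommand"
         data-win-options="{ label: 'Settings', icon: 'settings',
                             location: '/pages/settings/settings.html' }">
    </div>
  </div>
</div>
```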
Those who want to integrate a modern Web site with client apps can use the new WebView control. It can present data from the Internet much easier than in previous WinJS versions that used an iframe. The WebView control is an HTML element that looks like this:
<x-ms-webview id="webview" src="http://example.com"></x-ms-webview>
This is different from the standard way to create WinJS controls by using a <div> element and setting the data-win-options attribute. The WebView control serves as a container that’s hosting the external content. Along with that are security and sandboxing implications, so using an HTML element works better than a typical control in this case. WebView isn’t a control that you can just implement with other HTML elements, and it must be supported directly in the app host. Note that work is being done to propose a <webview> element as a standard for consideration by the World Wide Web Consortium (W3C).
Until this Windows release, XAML hasn’t had parity with HTML as far as controls were concerned. Now, though, XAML has caught up by adding the following new controls:
- AppBar: The menu bar across the bottom of the screen.
- CommandBar: An individual menu bar item.
- DatePicker: A set of three dropdowns that capture the date from the user.
- Flyout: A modeless, light-dismiss dialog box or a settings control.
- Hub: This lets you display a panorama of various-sized items in groups.
- Hyperlink: A hyperlink that navigates to a URI.
- MenuFlyout: A predefined Flyout specifically styled so it displays a list of menu items.
- SettingsFlyout: A Flyout that appears from the right side of the screen upon the user’s swipe or interaction. It’s used for managing app settings.
- TimePicker: A control that lets the user select hours, minutes, seconds and other segments of time. It often complements the DatePicker.
Now there’s less need for XAML developers to create their own visual elements, as many are now part of the UI framework.
Windows 8.1 Has More Choice in Search
Windows 8 introduced the concept of a charm—a shortcut to a common task. Users have their habits, and search as a way to launch apps or find information is a common one. Users frequently search for information, as search engine results will attest. Search is such an important part of computing that it’s part of the Windows OS and now there’s a search control to complement the charms. When you have data local to your app that users should be able to search, use the SearchBox control, and when they need to do a wider-scoped or Internet search, go with the SearchPane (the Search Windows charm introduced in Windows 8). After you configure the search contract in package.appmanifest in the declarations tab, you can provide search services to your users. You may have either the SearchBox or the SearchPane in your app, but not both.
Adding a SearchBox control is as easy as applying the data-win-control attribute to WinJS.UI.SearchBox, as you might expect:
<div id="searchBoxId" data- </div>
XAML keeps the SearchBox class definition in the Windows.UI.Xaml.Controls namespace, and the declarative syntax looks like this:
<SearchBox x:
Microsoft recommends adding instant search to your apps. Instant search is when users simply type to activate and start a search query. Go to the Windows 8 Start screen and just start typing. You’ll notice the SearchPane immediately initiates a search query across the device as well as on the Internet. You can, of course, emulate this behavior in your apps like the preceding code samples do by setting the HTML data-win-option attribute to focusOnKeyboardInput or the XAML FocusOnKeyboardInput value to True.
Use Contact and Calendar APIs to Stay Connected and Up-to-Date
With Windows 8.1, you get convenient APIs to interact with a user’s contacts and calendar if the user allows. The Contacts API enables a source app to query the data store by e-mail address or phone number and provide relevant information to the user. The Calendar API allows you to add, replace, remove, or otherwise work with appointments and show the user’s default appointment provider app (for example, the built-in calendar app or Outlook) on screen next to your app at run time. This means your app can seamlessly integrate with the built-in apps.
In the Contact API, Windows 8.1 uses the Windows.ApplicationModel.Contacts.Contact class to represent a contact, and the older ContactInformation class used in Windows 8 has been deprecated. Luckily, the API documentation clearly labels each member of deprecated namespaces with the following message: “<member name> may be altered or unavailable for releases after Windows 8.1,” so it’s easy to avoid using these. Figure 4 shows just how easy it is to capture an e-mail address and phone number and shows a user’s contact card from that data. With a bit more code, you could save or show the contact in another part of the app.
Figure 4 Showing a Contact Card
<label for="emailAddressInput">Email Address</label> <input id="emailAddressInput" type="text"/> <label for="phoneNumberInput">Phone Number</label> <input id="phoneNumberInput" type="text" /> <button id="addToContactsButton" onclick= "addContactWithDetails()">Add to Contacts</button> function showContactWithDetails() { var ContactsAPI = Windows.ApplicationModel.Contacts; var contact = new ContactsAPI.Contact(); var email = new ContactsAPI.ContactEmail(); email.address = document.querySelector("#emailAddressInput"); contact.emails.append(email); var phone = new ContactsAPI.ContactPhone(); phone.number = document.querySelector("#phoneNumberInput"); contact.phones.append(phone); ContactsAPI.ContactManager.showContactCard( contact, {x:120, y:120, width:250, height:250}, Windows.UI.Popups.Placement.default); }
As you can see, the Contacts API is simple to use, yet allows a deep level of integration with Windows. The Calendar API is quite similar. In code, you make instances of appointment objects and assign values to properties that represent meeting details—such as the date and time of the meeting—and then save them. Then you have contacts and calendaring capabilities in your app.
New Networking and Security APIs
No system update would be complete without networking and security improvements. These networking enhancements will let you do more via code than ever before, yet remain secure. New in the Windows Runtime (WinRT) in the Windows.Web.Http namespace are objects and methods that connect to HTTP and REST services with more power and flexibility than is available with previous APIs such as WinJS.xhr and System.Net.HttpClient. The following code shows how to connect to a RESTful service:
var uri = new Uri(""); var httpClient = new HttpClient(); httpClient.GetStringAsync(uri).done(function () { // Process JSON }, error); function error(reason) { WinJS.log && WinJS.log("Oops!"); }
Just as with any other library, the Windows.Web.Http namespace has members that perform their duties asynchronously, and with JavaScript, that means using the “done” function that runs upon return of a promise. However, if you want up-to-date, real-time apps, you can use Windows.Web.Http for standby apps that run in the background. Also note that you have all kinds of other capabilities with Windows.Web.Http, such as the ability to control the cache, control cookies, make other kinds of requests and insert filters in the pipeline to do all kinds of interesting things that I don’t have room to explore here.
The good news is if you access REST services that require user credentials, you (as a user) can now manage them as multiple accounts in the Settings charm. Along with these security and account management features comes the option to use fingerprint authentication in your modern apps using the Windows Fingerprint (biometric) authentication APIs.
Modern Apps Are All About Diverse Devices
You don’t get more modern than 3D printing. Windows 8.1 has it, and you can develop with it! That’s not to mention the catalog of hardware- and sensor-capable APIs that are now available, including the following:
- Human Interface Devices (HID): A protocol that fosters communication and programmability between hardware and software.
- Point of Service (PoS): A vendor-neutral API for Windows Store apps that can access devices such as barcode scanners or magnetic-stripe readers.
- USB: Enables communication with standard USB devices.
- Bluetooth: Enables communication with standard Bluetooth devices.
- 3D printing: These are C++ extensions of the 2D printing support that serve as the basis for 3D printer support. You can access Windows printing to send formatted-for-3D content to the printer through an app.
- Scanning: Enables support for scanners.
The preceding APIs all enable hardware peripheral integration. Since Windows 8, though, apps in both HTML and XAML have been able to take advantage of hardware integration for working with the webcam, accelerometer, pen, touch and other peripherals.
Windows 8.1 includes a set of Speech Synthesis, or text-to-speech, APIs. Using these APIs, you can transform textual data into a vocal stream—and this entails less code than you might expect. For example, the following code sample shows that once a new instance of the SpeechSynthesizer exists, you can call its synthesizeTextToStreamAsync method. The synthesizeTextToStreamAsync method accepts textual data that it then transforms into a voice stream, and then it sends that stream to a player:
var audio = new Audio(); var synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer(); var input = document.querySelector("#input"); synth.synthesizeTextToStreamAsync(input).then(function (markersStream) { var blob = MSApp.createBlobFromRandomAccessStream( markersStream.ContentType, markersStream); audio.src = URL.createObjectURL(blob, { oneTimeOnly: true }); audio.play(); });
In addition to working with simple textual data, you can use the W3C standard Speech Synthesis Markup Language (SSML) for sentence construction and lexical clarification. Using this XML-based language lets you perform input and output synthesis in a more clearly defined manner, which makes a difference to the user.
Wrapping Up with New App Store Packaging Features
You can configure resources such as tile images and localized strings in the package manifest, which has changed slightly to reflect new image sizes and other configuration options. One such option is to create bundles. Bundles primarily let you add and manage locale-specific information so you can deploy your app to various geographic areas.
When you deploy your app to the store, notice there are a few changes, including an enhanced UI at the developer portal. Users can find your app easier than ever before now that Bing integrates neatly into the OS. With Bing integration, users can discover your app (or any file) via the Windows Store or via a Web site or search. In addition, apps that users install now will automatically update unless users turn the automatic update feature off. You don’t need to worry about frequent app updates on the user’s behalf.
I don’t have enough room to list all the new and enhanced features in the Windows Runtime and Visual Studio here. I do highly suggest you review other new features such as DirectX, which sports several updates that you can read about at bit.ly/1nOp0Ds. In addition, Charles Petzold authors an MSDN Magazine column focused on DirectX, so you can expect to see more details about the new features there (bit.ly/1c37bLI). Finally, all the information you need about what to expect in Windows 8.1 is in the Windows 8.1 Feature Guide at bit.ly/1cBHgxu.: Kraig Brockschmidt (Microsoft) | https://docs.microsoft.com/en-us/archive/msdn-magazine/2014/april/modern-apps-what%E2%80%99s-new-in-windows-8-1-for-windows-store-developers | CC-MAIN-2020-10 | refinedweb | 3,258 | 54.32 |
20815 [details]
Stacktrace screenshot
Repro
=====
1. Create a default C# Xamarin Forms project in Visual Studio 2017.
2. Replace the main page with the following XAML code (replace MyProject with your root namespace):
<ContentPage xmlns=""
xmlns:x=""
x:
<StackLayout>
<ListView ItemsSource="{Binding Items}">
<ListView.ItemTemplate>
<DataTemplate>
<ViewCell>
<Label Text="{Binding Text}" />
</ViewCell>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>
</StackLayout>
</ContentPage>
3. Compile and open the Forms Previewer.
Expected result
===============
An empty list is shown.
Actual result
=============
The forms previewer shows a System.InvalidOperationException with the message "You MUST call Xamarin.Forms.Init() prior to using it.". The stack trace can be found in the screen shot (sorry about that, but I did not find a way to copy&paste the data).
This should be fixed if you upgrade to Xamarin.Forms 2.3.4 pre5 or newer, which is currently on Nuget. If you have issues after updating to that then please reopen the bug and we'll investigate further!
Thanks, upgrading to the latest preview version fixed the issue.
Preview now fails when a SearchBar is included with "java.lang.UnsupportedOperationException: Unsupported Service: class android.view.inputmethod.InputMethodManager".
Should I open a new bug for this or are you already aware of the issue? (I found this Android issue raised by one of your people, so I suspect the latter: <>.)
Yes, that is the bug we filed upstream. Hopefully it gets fixed/released soon!
Also, we did add a workaround in Xamarin.Forms itself for this particular issue, but it does not seem to have been included in 2.3.4. I'll see if it can be included in a future 2.3.4 preview, or simply the next release.
Great, thanks. | https://bugzilla.xamarin.com/53/53926/bug.html | CC-MAIN-2021-39 | refinedweb | 282 | 61.02 |
Java Reference
In-Depth Information
ioEx.printStackTrace();
}
}while (true);
}
}
Before looking at the client code, it is appropriate to give consideration to how
the image might be displayed when it has been received. The simplest way of doing
this is to create a GUI (using a class that extends JFrame ) and defi ne method
paint to specify the placement of the image upon the application. As was seen in the
'juggler' animation bean in Sect. 10.2 (though that application used a JPanel , rather
than a JFrame ), this will entail invoking the ImageIcon method paintIcon . The four
arguments required by this method were stated in Sect. 10.2 and are repeated here:
a reference to the component upon which the image will be displayed (usually
this , for the application container);
a reference to the Graphics object used to render this image (provided by the
argument to paint );
the x-coordinate of the upper-left corner of the image's display position;
the y-coordinate of the upper-left corner of the image's display position.
Remember that we cannot call paint directly, so we must call repaint instead
(and allow this latter method to call paint automatically). This call will be made at
the end of the constructor for the client. Steps 3-5 from the original fi ve steps are
commented in bold type in the client program.
Now for the code…
import java.io.*;
import java.net.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class ImageClient extends JFrame
{
private InetAddress host;
private fi nal);
Search WWH ::
Custom Search | http://what-when-how.com/Tutorial/topic-547t9a4dbp/An-Introduction-to-Network-Programming-with-Java-341.html | CC-MAIN-2019-09 | refinedweb | 264 | 50.57 |
and our sister brand
Embroidery Library
make up an employee-owned company of about 30 creative people in Minneapolis,
Minnesota. We sell digital designs for machine embroidery (in 11
file formats)
and hand embroidery (as PDFs). All designs for sale on Urban Threads are digital downloads;.
How do
I download a whole order at once?
Thanks for asking! You can download a whole order with one click. On each
individual order page in your
order history,
click on the "download order" button to download the entire order as a zipped
file. You will need to
unzip this file before you can transfer the individual design files to your
embroidery machine.
If you choose, you can include several other useful things in this order
download -- a PDF of the thread list (handy for printing), a TXT file of the
thread list (handy if you want to
import thread information into Embird), and a JPG image of the design (handy
for keeping in the same folder as your embroidery design files, to see more
easily what each design looks like). To make these choices, go into your
profile, and
under "Include in order download", click the radio buttons to select the files
you'd like included in your order download., machine embroidery.??
Depends on what you're looking for in a machine! We don't have a specific
make or model to recommend, but
here are some
things to consider. | https://urbanthreads.com/faq.aspx | CC-MAIN-2018-17 | refinedweb | 238 | 60.14 |
Threads are the core element of a multi-tasking programming environment. By definition, a thread is an execution context in a process; hence, every process has at least one thread. Multi-threading implies the existence of multiple, concurrent (on multi-processor systems), and often synchronised execution contexts in a process.
Threads have their own identity (thread ID), and can function independently. They share the address space within the process, and reap the benefits of avoiding any IPC (Inter-Process Communication) channel (shared memory, pipes and so on) to communicate. Threads of a process can directly communicate with each other — for example, independent threads can access/update a global variable. This model eliminates the potential IPC overhead that the kernel would have had to incur. As threads are in the same address space, a thread context switch is inexpensive and fast.
A thread can be scheduled independently; hence, multi-threaded applications are well-suited to exploit parallelism in a multi-processor environment. Also, the creation and destruction of threads is quick. Unlike
fork(), there is no new copy of the parent process, but it uses the same address space and shares resources, including file descriptors and signal handlers.
A multi-threaded application uses resources optimally, and is highly efficient. In such an application, threads are loaded with different categories of work, in such a manner that the system is optimally used. One thread may be reading a file from the disk, and another writing it to a socket. Both work in tandem, yet are independent. This improves system utilisation, and hence, throughput.
A few concerns
The most prominent concern with threads is synchronisation, especially if there is a shared resource, marked as a critical section. This is a piece of code that accesses a shared resource, and must not be concurrently accessed by more than one thread. Since each thread can execute independently, access to the shared resource is not moderated naturally but using synchronisation primitives including mutexes (mutual exclusion), semaphores, read/write locks and so on.
These primitives allow programmers to control access to a shared resource. In addition, similar to processes, threads too suffer states of deadlock, or starvation, if not designed carefully. Debugging and analysing a threaded application can also be a little cumbersome.
How does Linux implement threads?
Linux supports the development and execution of multi-threaded applications. User-level threads in Linux follow the open POSIX (Portable Operating System Interface for uniX) standard, designated as IEEE 1003. The user-level library (on Ubuntu,
glibc.so) has an implementation of the POSIX API for threads.
Threads exist in two separate execution spaces in Linux — in user space and the kernel. User-space threads are created with the
pthread library API (POSIX compliant). These user-space threads are mapped to kernel threads. In Linux, kernel threads are regarded as “light-weight processes”. An LWP is the unit of a basic execution context. Unlike other UNIX variants, including HP-UX and SunOS, there is no special treatment for threads. A process or a thread in Linux is treated as a “task”, and shares the same structure representation (list of
struct
task_structs).
For a set of user threads created in a user process, there is a set of corresponding LWPs in the kernel. The following example illustrates this point:
#include <stdio.h> #include <syscall.h> #include <pthread.h> int main() { pthread_t tid = pthread_self(); int sid = syscall(SYS_gettid); printf("LWP id is %dn", sid); printf("POSIX thread id is %dn", tid); return 0; }
Running the
ps command too, lists processes and their LWP/ threads information:
kanaujia@ubuntu:~/Desktop$ ps -fL UID PID PPID LWP C NLWP STIME TTY TIME CMD kanaujia 17281 5191 17281 0 1 Jun11 pts/2 00:00:02 bash kanaujia 22838 17281 22838 0 1 08:47 pts/2 00:00:00 ps -fL kanaujia 17647 14111 17647 0 2 00:06 pts/0 00:00:00 vi clone.s
What is a Light-Weight Process?
An LWP is a process created to facilitate a user-space thread. Each user-thread has a 1×1 mapping to an LWP. The creation of LWPs is different from an ordinary process; for a user process “P”, its set of LWPs share the same group ID. Grouping them allows the kernel to enable resource sharing among them (resources include the address space, physical memory pages (VM), signal handlers and files). This further enables the kernel to avoid context switches among these processes. Extensive resource sharing is the reason these processes are called light-weight processes.
How does Linux create LWPs?
Linux handles LWPs via the non-standard
clone() system call. It is similar to
fork(), but more generic. Actually,
fork() itself is a manifestation of
clone(), which allows programmers to choose the resources to share between processes. The
clone() call creates a process, but the child process shares its execution context with the parent, including the memory, file descriptors and signal handlers. The
pthread library too uses
clone() to implement threads. Refer to
./nptl/sysdeps/pthread/createthread.c in the glibc version 2.11.2 sources.
Create your own LWP
I will demonstrate a sample use of the
clone() call. Have a look at the code in
demo.c below:
#include <malloc.h> #include <sys/types.h> #include <sys/wait.h> #include <signal.h> #include <sched.h> #include <stdio.h> #include <fcntl.h> // 64kB stack #define STACK 1024*64 // The child thread will execute this function int threadFunction( void* argument ) { printf( "child thread entering\n" ); close((int*)argument); printf( "child thread exiting\n" ); return 0; } int main() { void* stack; pid_t pid; int fd; fd = open("/dev/null", O_RDWR); if (fd < 0) { perror("/dev/null"); exit(1); } // Allocate the stack stack = malloc(STACK); if (stack == 0) { perror("malloc: could not allocate stack"); exit(1); } printf("Creating child thread\n"); // Call the clone system call to create the child thread pid = clone(&threadFunction, (char*) stack + STACK, SIGCHLD | CLONE_FS | CLONE_FILES |\ CLONE_SIGHAND | CLONE_VM, (void*)fd); if (pid == -1) { perror("clone"); exit(2); } // Wait for the child thread to exit pid = waitpid(pid, 0, 0); if (pid == -1) { perror("waitpid"); exit(3); } // Attempt to write to file should fail, since our thread has // closed the file. if (write(fd, "c", 1) < 0) { printf("Parent:\t child closed our file descriptor\n"); } // Free the stack free(stack); return 0; }
The program in
demo.c allows the creation of threads, and is fundamentally similar to what the
pthread library does. However, the direct use of
clone() is discouraged, because if not used properly, it may crash the developed application. The syntax for calling
clone() in a Linux program is as follows:
#include <sched.h> int clone (int (*fn) (void *), void *child_stack, int flags, void *arg);
The first argument is the thread function; it will be executed once a thread starts. When
clone() successfully completes,
fn will be executed simultaneously with the calling process.
The next argument is a pointer to a stack memory for the child process. A step backward from
fork(),
clone() demands that the programmer allocates and sets the stack for the child process, because the parent and child share memory pages — and that includes the stack too. The child may choose to call a different function than the parent, hence needs a separate stack. In our program, we allocate this memory chunk in the heap, with the
malloc() routine. Stack size has been set as 64KB. Since the stack on the x86 architecture grows downwards, we need to simulate it by using the allocated memory from the far end. Hence, we pass the following address to
clone():
(char*) stack + STACK
The next field,
flags, is the most critical. It allows you to choose the resources you want to share with the newly created process. We have chosen
SIGCHLD | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | CLONE_VM, which is explained below:
SIGCHLD: The thread sends a
SIGCHLDsignal to the parent process after completion. It allows the parent to
wait()for all its threads to complete.
CLONE_FS: Shares the parent’s filesystem information with its thread. This includes the root of the filesystem, the current working directory, and the umask.
CLONE_FILES: The calling and caller process share the same file descriptor table. Any change in the table is reflected in the parent process and all its threads.
CLONE_SIGHAND: Parent and threads share the same signal handler table. Again, if the parent or any thread modifies a signal action, it is reflected to both the parties.
CLONE_VM: The parent and threads run in the same memory space. Any memory writes/mapping performed by any of them is visible to other process.
The last parameter is the argument to the thread function (
threadFunction), and is a file descriptor in our case.
Please refer to the sample code implementation of LWP, in
demo.c we presented earlier.
The thread closes the file (
/dev/null) opened by the parent. As the parent and this thread share the file descriptor table, the file close operation will reflect in the parent context also, and a subsequent file
write() operation in the parent will fail. The parent waits till thread execution completes (till it receives a
SIGCHLD). Then, it frees the memory and returns.
Compile and run the code as usual; and it should be similar to what is shown below:
$gcc demo.c $./a.out Creating child thread child thread entering child thread exiting Parent: child closed our file descriptor $
Linux provides support for an efficient, simple, and scalable infrastructure for threads. It encourages programmers to experiment and develop thread libraries using
clone() as the core component.
Please share your suggestions/feedback in the comments sections below.
References and suggested reading
- Wikipedia article on
clone()
- clone() man page
- Using the clone() System Call by Joey Bernard
- Implementing a Thread Library on Linux
- IEEE Standards Interpretations for IEEE Std 1003.1c-1995
- Sources of pthread implementation in
glibc.so
- The Fibers of Threads by Benjamin Chelf | http://www.opensourceforu.com/2011/08/light-weight-processes-dissecting-linux-threads/ | CC-MAIN-2014-52 | refinedweb | 1,649 | 64.41 |
Compile Each Concept# COMMENTS
We've all been there:
Student: Teacher, I need help
Teacher (comes over)
Student (shows screen listing three bazillion errors)
The student has just written pages of code and finally decided to try to run it only to end up with pages of errors.
Error messages can at times be hard to read for beginners but to see and truth be told, they frequently don't even read them but over the years I've developed a practice that I've found helpful as a software developer and if students adopt the same practice it can save them a lot of time and effort.
The idea is very simple.
Compile and test one concept at a time.
It might seem silly, but if I'm writing a a program, my first compile might be code that looks like this:
#include <iostream> using std::cout; using std::endl; int main() { return 0; }
or
public class HelloWorld { public static void main(String[] args){ } }
This might seem silly but it really doesn't take any effort. I have a
key sequence to do this under Emacs and if I'm using an interactive
language like Python or Clojure I just have to hit
c-c c-c
This might seem silly but I do it out of muscle memory and it immediately tells me I don't have any syntax errors and my build system works.
Going further, I compile and test every time I code up what I call a concept. What's a concept? Let's look at some code. A student might write something like this to find prime number up to n:
def prime_list(n): for i in range(2,n): i_isPrime = True for j in range(i-1,1,-1): if i%j ==0 : i_isPrime = False break if i_isPrime: print(i)
There's a lot going on there. To me, a concept is
- A loop
- A complex calculation
- a conditional
All of these can have other concepts within.
In the above code, I'd probable write it as follows, adding in tests and print statements throughout the process:
Step 1:
def prime_list(n): for i in range(2,n): print(i)
Step 2
def prime_list(n): for i in range(2,n): for j in range(i-1,1,-1): print(i,j)
Step 3
def prime_list(n): for i in range(2,n): for j in range(i-1,1,-1): if i%j ==0 : print(i,"is not prime") break if i_isPrime: print(i)
Step 4
def prime_list(n): for i in range(2,n): i_isPrime = True for j in range(i-1,1,-1): if i%j ==0 : i_isPrime = False break print(i,i_isPrime)
Step 5
def prime_list(n): for i in range(2,n): i_isPrime = True for j in range(i-1,1,-1): if i%j ==0 : i_isPrime = False break if i_isPrime: print(i)
It might not play out exactly this way but if not it would be something similar. The idea is that if you test every time you add one concept or construct there are fewer places where you can introduce an error.
If you enter 100 lines before you test there are 100 places where things can go wrong. If you type 10, there are only 10. On top of that, if you've added 100 lines, conecptually you've probably added a lot and the error can be anywhere. While it's not always the case, most of the time, if you just added an if, the problem will be in the *if* or as a result of the if. Same with a loop or any other construct.
Once you get in the habit, it's easy and doesn't really take any time. A couple of keystrokes to compile and a couple more to run.
All too often students try to write everything at once and it's so rare that it works. If we can get them to develop incrementally they'll be able to write much more complex systems and write them with much less frustration.Tweet | https://cestlaz.github.io/post/compile-each-concept/ | CC-MAIN-2020-10 | refinedweb | 685 | 68.84 |
Once you have your Blend Shapes set up in Autodesk® Maya®:
Export your selection to fbx ensuring the animation box is checked and blend Shapes under deformed models is checked.
Import your FBX file into Unity (from the main Unity menu: Assets > Import New Asset and then choose your file).
Drag the Asset into the hierarchy window. If you select your object in the hierarchy and look in the inspector, you will see your Blend Shapes are listed under the SkinnedMeshRenderer component. Here you can adjust the influence of the blend shape to the default shape, 0 means the blend shape has no influence and 100 means the blend shape has full influence.
It is also possible to use the Animation window in Unity to create a blend animation, here are the steps:
Open the Animation window under Window > Animation > Animation.
On the left of the window click ‘Add Curve’ and add a Blend Shape which will be under Skinned Mesh Renderer.
From here you can manipulate the keyframes and Blend Weights to create the required animation.
Once you are finished editing your animation you can click play in the editor window or the animation window to preview your animation.
It’s also possible to set the blend weights through code using functions like GetBlendShapeWeight and SetBlendShapeWeight.
You can also check how many blend shapes a Mesh has on it by accessing the blendShapeCount variable along with other useful functions.
Here is an example of code which blends a default shape into two other Blend Shapes over time when attached to a gameobject that has 3 or more blend shapes:
//Using C# using UnityEngine; using System.Collections; public class BlendShapeExample : MonoBehaviour { int blendShapeCount; SkinnedMeshRenderer skinnedMeshRenderer; Mesh skinnedMesh; float blendOne = 0f; float blendTwo = 0f; float blendSpeed = 1f; bool blendOneFinished = false; void Awake () { skinnedMeshRenderer = GetComponent<SkinnedMeshRenderer> (); skinnedMesh = GetComponent<SkinnedMeshRenderer> ().sharedMesh; } void Start () { blendShapeCount = skinnedMesh.blendShapeCount; } void Update () { if (blendShapeCount > 2) { if (blendOne < 100f) { skinnedMeshRenderer.SetBlendShapeWeight (0, blendOne); blendOne += blendSpeed; } else { blendOneFinished = true; } if (blendOneFinished == true && blendTwo < 100f) { skinnedMeshRenderer.SetBlendShapeWeight (1, blendTwo); blendTwo += blendSpeed; } } } } | https://docs.unity3d.com/cn/2018.3/Manual/BlendShapes.html | CC-MAIN-2021-21 | refinedweb | 341 | 60.04 |
Customizing Shells
In this section, we’ll learn how to create and modify a shell’s commands and attributes.
This article addresses two flows:
- Modifying an existing shell
- Creating a new shell with modifications to the standard
At the end of this article, you can find an end-to-end example of how to extend an existing shell with attributes and commands. To see the example, click here.
Modifying an existing shell is done using the
shellfoundry extend command. This command downloads the source code of the shell you wish to modify to your local machine and updates the shell’s Author. Note that extending official shells (shells that were released by Quali) will remove their official tag. Keep in mind that modifying a shell that is being used in CloudShell may affect any inventory resources that are based on a previous version of the shell. In the second flow, since we’re creating a new shell from the appropriate shell standard, we will use the
shellfoundry new command and modify the shell’s settings.
The common use cases for customizing a shell are:
- Adding new commands
- Modifying existing commands
- Adding new attributes
- Modifying existing attributes
- Publishing attributes in a service shell
- Associating categories to a service shell
Please watch this video if you’re not sure whether to create a new shell or customize an existing shell:
Customizing a shell’s commands
When customizing an official shell you can add new commands, and also modify or hide existing ones.
- To add a new command: Add the command in the shell’s driver.py file, and expose the command in the shell’s drivermetadata.xml file.
The command’s logic should be placed either in the driver itself or in a separate python package.
Modifications to a command can include adding some logic either before or after the command or changing the command logic itself. In order to do that, copy the command code from the original Quali python package to the shell driver or to the custom python package you created (the command logic resides either in the vendor package or vendor-os package - for example in “cloudshell-cisco” or “cloudshell-cisco-ios”).
When modifying an existing command, you can add optional input parameters. Just make sure that the implementation is backwards compatible. Note that adding mandatory inputs or removing one of the original inputs is not supported. In these cases, it is recommended to create an additional command with a different name, instead of modifying an existing one.
For example, in this customized Cisco NXOS shell, we modified the commands that configure VLANs on multiple ports and port channels.
It is also possible to hide or remove a command. Hiding a command is done by placing it in an “administrative” category in the drivermetadata.xml. Note that removing a command might affect how the shell is used since CloudShell and/or some orchestration scripts might depend on the existing driver’s commands.
When adding or modifying a command, you can leverage Quali’s shell framework to ease the development process. For details, see Quali’s Shell Framework.
See some common command extension examples in Common Driver Recipes.
Please check out the following instructional videos on how to develop basic driver commands:
Customizing a shell’s attributes
Modification applies to attributes that are defined in the shell’s standard. To find the attributes defined in the shell’s standard, see the documentation page of your shell’s standard. For such attributes, you can modify the description, default values, possible values and rules.
Note: You cannot modify attributes type, name, and any attributes that are associated with the shell’s family as this will affect other shells that use this family. The family attributes are listed in the shell’s standard.
Deployment-specific vs. shell-specific attributes
CloudShell provides two ways to customize attributes, which differ depending on the attribute’s usage:
- Customizing an existing shell: Use this option when the attributes are related to a specific device but are not relevant to other shells. This is done by manually editing the shell’s shell-definition.yaml file.
- Associating custom attributes with a shell that is installed in CloudShell: Use this option when the additional attributes are deployment-related and relevant to multiple resources of different shells. For example, the Execution Server Selector attribute. Starting with CloudShell version 8.3, this option applies both to the root model of the shell and to the shell’s sub-resource models, such as blades and ports.
The second option of associating custom attributes with an already installed shell is done either via CloudShell Portal or by calling the SetCustomShellAttribute API method. For additional information on this method, see Deploying to Production.
Important: Deleting a 2nd Gen shell’s default attributes (those that come with the standard) is not supported. It is also not possible to customize a 2nd Gen shell’s data model (families and models) and its structure, which is as defined in the shell standard the original shell is based on.
Adding or modifying attributes in a shell’s root model
This section explains how to add attributes to the shell’s root model and to specific models within the shell. To learn how to expose attributes that are required for the discovery of the resource (in the Inventory dashboard’s Resource discovery dialog box), see Auto-discovery for Inventory Shells.
To add/modify a shell’s attributes:
1) Open command-line.
2) To customize a shell that resides on your local machine, make sure command-line is pointing to a different path from the original shell’s root folder.
3) Run the appropriate command in command-line:
To modify a shell:
shellfoundry extend <URL/path-to-shell>) Locate
node-types:.
7) Under the root level model, add the following lines:
properties: my_property: type: string default: fast description: Some attribute description constraints: - valid_values: [fast, slow] tags: [configuration, setting, search_filter, abstract_filter, include_in_insight, readonly_to_users, display_in_diagram, connection_attribute, read_only]
8) Edit their settings, as appropriate. For additional information on these settings, see the CloudShell online help.
9) Remove any unneeded lines.
10) Save the file.
11) In command-line, navigate to the shell’s root folder.
12) Package the shell.
shellfoundry pack
13) Import the shell into CloudShell.
Important: If a previous version of the shell already exists in CloudShell, upgrade the existing shell with the new one in CloudShell Portal’s Shells management page. This capability is available for 2nd Gen shells.
Customizing a service shell
Customizing a service shell’s commands is the same as for resource shells, while customizing attributes largely follows the same procedure. The only difference is in how you publish a service’s attribute and associate a service shell to service categories.
Publishing a service shell’s attributes
Publishing an attribute displays that attribute in the service’s settings dialog box when a CloudShell user adds or edits a service in a blueprint or sandbox diagram.
To publish a service shell’s attribute:
1) Add or modify an existing attribute as explained in the Customizing a Shell’s attributes section above.
2) If you want the service’s attribute to be exposed in the blueprint and sandbox, replace the tags line with the following:
tags: [user_input]
3) Save the shell-definition.yaml file, package and import the shell into CloudShell.
Associating categories to a service shell
This procedure explains how to add service categories to a 2nd Gen service Shell. Service categories appear in the services catalog and are used to expose services in specific domains in CloudShell. This is done by associating a service category, which is linked to specific domains, to a service shell.
To associate service categories to a service shell:
1) Open command-line.
2) To customize a shell that resides on your local machine, make sure command-line is pointing to a different path from the original shell template’s root folder.
3) Run the appropriate command in command-line:
To modify a shell:
shellfoundry extend <URL/path-to-shell-template>) Under
node-types:, locate
properties:, and add the following lines underneath:
Service Categories: type: list default: [My Category 1, My Category 2]
Note: The
properties: line needs to be added only once, so do not add it if it already exists uncommented in the shell-definition.yaml.
7) Specify the categories in the
default: line (comma-separated list).
The shell’s categories will be added to the Global domain, even if CloudShell already includes categories with the same name in other domains.
8) Package and import the shell into CloudShell.
9) To make the service available in other domains, in CloudShell Portal’s Categories management page, add those domains to the service’s categories.
Example: Extending a shell with attributes and commands
To help us understand the shell customization process, let’s add attributes and commands to a shell. To simulate this process, we’ve created a modified version of the Cisco IOS Router Shell, which creates a mock resource structure of 16 ports. Please feel free to use it.
Start by extending the shell. From the CiscoIOSRouter2GWithAutoload shell’s GitHub Releases page, copy the source code link address of the latest release. Then, run
shellfoundry extend.
For example:
shellfoundry extend
The shell project is created in the directory from which you ran the command.
When you extend a shell, it’s recommended to change the shell’s version and author. This is done in the shell project’s shell-definition.yaml file.
For example:
metadata: template_name: Cisco IOS Router Shell 2G template_author: steven template_version: 1.0.1
To see how it looks in CloudShell Portal, navigate to the shell’s root folder in command-line and install the shell.
For example:
cd "c:\My Shells\CiscoIOSRouter2GWithAutoload" shellfoundry install
In the Shells page, we can see the shell’s updated author and version.
You can also change the image. To do so, add the image file to the shell project’s root folder and in the
artifacts section of the shell-definition.yaml, set the file name.
artifacts: icon: file: shell-icon.png type: tosca.artifacts.File driver: file: CustomDataModelDriver.zip type: tosca.artifacts.File
And set it in the
metadata section.
metadata: template_name: CustomDataModel template_author: steven template_version: 0.1.1 template_icon: shell-icon.png
To see the image, install the updated shell.
shellfoundry install
And in the Inventory dashboard, create a resource based on the shell (if you’re using our modified shell, you don’t need to specify the credentials of a real Cisco IOS router) and add the resource to a blueprint.
The image should be displayed on the resource.
Next, create an attribute on the root model of the resource. Attributes are created in the
properties section of the shell-definition.yaml. We’ll add a string attribute called “my attribute” with a default value and some rules.
node_types: vendor.Cisco IOS Router 2G: derived_from: cloudshell.nodes.Router properties: my attribute: type: string default: value 1 description: This is my new attribute. constraints: - valid_values: [value 1, value 2, value 3] tags: [setting, configuration]
The attribute is added to resources created from this shell. To see the attribute on our resource, install the shell on CloudShell, return to the blueprint and open the resource’s Resource Attributes pane.
Let’s say you want to create an attribute on the shell’s port. Starting with CloudShell 8.3, this capability is supported. Sub-model attributes are added the same way as root model attributes. The only difference is that for sub-model attributes, you need to include the sub-model before the property name (in our case, the sub-model is “Generic Port”). If the sub-model consists of several words, remove any spaces between them.
For example, adding an attribute called “my port speed” to the Generic Port sub-model:
node_types: vendor.Cisco IOS Router 2G: derived_from: cloudshell.nodes.Router properties: GenericPort.my port speed: type: string default: 5 GHz description: constraints: - valid_values: [5 GHz, 10 GHz, 15 GHz] tags: [setting, configuration] my attribute: type: string default: value 1 description: This is my new attribute. constraints: - valid_values: [value 1, value 2, value 3] tags: [setting, configuration]
To see the new attribute, in the Inventory dashboard, find the resource’s ports (you can use the search field), click a port’s “more info” button and in the window that pops up, scroll down until you see the attribute.
You can also add attributes that are required for the resource’s discovery. While non-discovery attributes only need to be added to the
properties section, new discovery attributes are added both to the
properties section of the shell-definition.yaml, and to the
capabilities section’s
properties. We’ll add an attribute called “my discovery attribute”.
capabilities: concurrent_execution: type: cloudshell.capabilities.SupportConcurrentCommands auto_discovery_capability: type: cloudshell.capabilities.AutoDiscovery properties: my discovery attribute: type: boolean default: true
Let’s make sure the attribute was added to the shell. In the Inventory dashboard, select Discover from the resource’s more options menu. The attribute should be listed on the resource.
Note that if we’re adding a discovery attribute that is already included in the shell’s standard, we only need to define it in the
capabilities section.
Now let’s add a simple command that prints “hello world” to the Output console. In the driver.py file, add the command.
def hello_world(self): return "hello world" pass
In the drivermetadata.xml file, add a category for the command and a display name. You need to do this if you want to expose the command in CloudShell Portal.
<Category Name="My Commands"> <Command Description="" DisplayName="Hello World" Name="hello_world" /> </Category>
In a CloudShell sandbox, hover over the resource and click Commmands. The command is displayed in the resource’s commands pane.
And running the command prints the message to the Output window.
So far in this example, we discussed how to create attributes that are specific to the shell. However, CloudShell also includes global attributes that are not isolated to a specific shell and can be used among different CloudShell elements. You can add these global attributes to shells that are already installed on CloudShell using the
SetCustomShellAttribute API method which connects to CloudShell, searches for the shell by name, and adds the attribute to it.
For example, this script adds the Execution Server Selector attribute (with a default value) to our shell:
import cloudshell.api.cloudshell_api as api username = 'admin' password = 'admin' server = '192.168.85.9' domain = 'Global' session = api.CloudShellAPISession( username=username, password=password, domain=domain, host=server ) session.SetCustomShellAttribute( modelName='Cisco IOS Router 2G', attributeName='Execution Server Selector', defaultValue='NY Test' )
After shell installation, the attribute is added to the shell’s resources.
| https://devguide.quali.com/shells/8.1.0/customizing-shells.html | CC-MAIN-2018-34 | refinedweb | 2,453 | 54.93 |
The Singleton design pattern is an area of C++ philosophy where I happen to be in complete disagreement with the rest of the C++ world. Singleton classes, as described in Gamma Et Al. are unnecessary; they add excess verbiage to code, and they are a symptom of bad design. I have described my reasons for saying this in my writeup called Don't Use Singleton Classes.
My challenge to you, the reader: describe for me a class that must
be implemented using the form of the Singleton Pattern in Gamma Et Al.. Alternatively, describe one that is simpler, or works more efficiently,
using that pattern. Put writeups to answer the challenge in this node and I will do my best to answer them. Let me know you did with a /msg.
You're going to have to work really hard to convince me that any singular object should have its behavior and singularity merged into a singleton class.
Suggested further reading: Arthur J. Riel, Object-Oriented Design Heuristics, 1996, Addison-Wesley, ISBN 0-201-63385-X.
I agree with Core's comments (Although I wish they'd been in the other node) for every OO language except C++. In C++ you can declare any global variable you want, exactly the way you do in C. You make things like singletons to enforce their working correctly. In C, you're working without a net.
I must challenge several points put forward by Rose Thorn:
The number of classes in your program does not determine maintenance burden all by itself. Remember that small classes are easier to maintain. A singleton class that has two members for each function (one for the actual behavior and one to forward to the singular instance) is twice as complex as the server class or the client class separately. Regardless of whether we separate the client and server or merge them, we have the same amount of code to maintain!
The client class is the real thing; it's the class the outside world uses. Suppose that not all of your class's behavior is singular. Where do you put the non-singular behavior? In the client class of course.
Separation between the domain model and the implementation model is a Good Thing, not a Bad Thing. If by "re-engineering" you mean supplying a different implmentation, all you have to do is implement the server class interface to perform the behavior the client class expects. This is easier when you have separate classes. If you want to change the interface, you need to change both members under any model.
Polymorphism is not the be-all and end-all of Object-Oriented Programming. Many C++ gurus will tell you that encapsulation is far more important. Encapsulation is about presenting an interface to the world, a contract between the class designer and the class user. If the contract can be satisfied without polymorphism, polymorphic member functions need not apply. Remember that Bjarne Stroustrup's original version of C with Classes, which made OOP something more than an academic toy, did not allow for polymorphism. Virtual functions were added later.
I'd like to see a real model of a polymorphic class where every object must have its own behavior. For our theoretical Person class, we must be talking about an abstract base class Person from which a Joe class, a Jim class, and a MarySue class inherit. We want to pass Person objects into some other function somewhere, otherwise the Person class is meaningless. Joe has a magnetic personality, and along comes his disciple Bob, who acts exactly like Joe. All of a sudden, our assumption that there can be only one Joe is gone.
(As an aside, It could be argued that a person's behavior is determined by our internal state, which comes from our initial genetic material [constructor arguments] plus outside environmental influences [mutator functions, input and output]. The internal processes that turn state into behavior (chemistry) remain the same.)
For a different example, suppose that Ford and GM have quite different accounting practices. In an accounting application, we can model this with a FordAccountingSystem class and a GeneralMotorsAccountingSystem class. But, suppose Ford hired GM's CFO away form them. The new CFO replaces Ford's accounting system with an exact duplicate of GM's. We might be able to model Ford's new accounting system with a copy of General Motors's accounting system, except that we're that with the same AccountingSystem class as before. But if it's a singleton, we're stuck with separating the interface from the implementation again. The problem lies in having a GeneralMotorsAccountingSystem class rather than a SpiffyNewAccountingSystem class which can be applied to both Ford and GM.
They're nothing more than a hack to get global variables in object oriented languages, and everyone knows it, but noone is willing to admit it.
As far as a time when a singleton is necessary, try having a program where you need an instance of an abstract class Person: you need every person to have their own behaviour, so they need their own class. If any two instances of person have the same class, then you have something seriously wrong with your program: a matter-transporter has probably malfunctioned. Of course, you needn't use a class when there is only one of a kind, as you won't need to allocate versions dynamically. But then you don't get polymorphic behaviour. While in this scenario you could use a variant of the visitor pattern to do the kind of things you might do with mixins, you lose the ability to very strongly enforce your uniqueness constraints.
If you don't want polymorphism, don't use an OO implementation. Polymorphism is what give OO it's encapsulation, because method calls are referentially transparent. The use of classes as namespaces is something that is pretty much idiosyncratic to C++ derived languages. Certainly Objective C and Common Lisp don't do it. Namespace division is done by a separate mechanism. The use of private members is not the same as information hiding, and inheritance and polymorphism are not the same thing.
You CAN implement singleton classes nicely: every public method delegates to a private method which is invoked on the single "real" member of the class. Java example follows:
class Foo
{
private static Foo theInstance;
public Foo()
{
if(theInstance==null)
{
theInstance = this;
}
}
public void frob(int i)
{
theInstance.priv_frob(i);
}
private void priv_frob(int i)
{
//do whatever
}
}
Log in or register to write something here or to contact authors.
Need help? accounthelp@everything2.com | https://everything2.com/title/Singleton+Classes+are+necessary%252C+you+idiot+cheese%2521 | CC-MAIN-2018-09 | refinedweb | 1,104 | 62.58 |
view raw
I recently came across code which was similar to the following:
#include <stdio.h>
class Example {
private:
enum {
BufSize = 4096,
MsgSize = 200 * 1024,
HeaderFieldLen = 16
};
public:
int getBufSize() {
return BufSize;
}
};
int main() {
Example ex;
printf("%d\n", ex.getBufSize());
return 0;
}
struct
const
There are several ways of naming numerical constants in order to avoid magic numbers. Using enumerators are one of them.
The advantage of this method over regular
const variables is that enumerators are not variables. Therefore, they are not stored as variables during run-time, they are simply used by the compiler, at compile-time.
[from the comments]
So this usage would be in some ways similar to using preprocessor macros to define constants?
The downside of macros is (mainly) type safety. Macros have no type, so the compiler cannot check for you whether the types match where you use them. Also, while macros are used in C, they are very rarely used in C++ because we have better tools at our disposal.
In C++11, a better way to name these constants is to use
constexpr members.
constexpr int BufSize = 4096; constexpr int MsgSize = 200 * 1024; constexpr int HeaderFieldLen = 16;
The above code replaces the following.
enum { BufSize = 4096, MsgSize = 200 * 1024, HeaderFieldLen = 16 }; | https://codedump.io/share/DqGOTomgGvDK/1/using-enum-to-store-numerical-constants | CC-MAIN-2017-22 | refinedweb | 210 | 64.1 |
PATCH method. 36 * <p> 37 * The HTTP PATCH method is defined in <a 38 *RF5789</a>: 39 * </p> 40 * <blockquote> The PATCH 41 * method requests that a set of changes described in the request entity be 42 * applied to the resource identified by the Request- URI. Differs from the PUT 43 * method in the way the server processes the enclosed entity to modify the 44 * resource identified by the Request-URI. In a PUT request, the enclosed entity 45 * origin server, and the client is requesting that the stored version be 46 * replaced. With PATCH, however, the enclosed entity contains a set of 47 * instructions describing how a resource currently residing on the origin 48 * server should be modified to produce a new version. 49 * </blockquote> 50 * 51 * @since 4.2 52 */ 53 @NotThreadSafe 54 public class HttpPatch extends HttpEntityEnclosingRequestBase { 55 56 public final static String METHOD_NAME = "PATCH"; 57 58 public HttpPatch() { 59 super(); 60 } 61 62 public HttpPatch(final URI uri) { 63 super(); 64 setURI(uri); 65 } 66 67 public HttpPatch(final String uri) { 68 super(); 69 setURI(URI.create(uri)); 70 } 71 72 @Override 73 public String getMethod() { 74 return METHOD_NAME; 75 } 76 77 } | http://hc.apache.org/httpcomponents-client-ga/httpclient/xref/org/apache/http/client/methods/HttpPatch.html | CC-MAIN-2017-04 | refinedweb | 196 | 52.83 |
bobtemplates.plone 3.0.0a3
Templates for Plone projects.
bobtemplates.plone
bobtemplates.plone provides mr.bob templates to generate packages for Plone projects.
Note: bobtemplates.plone supports plonecli, which is the recommended way of creating Plone packages.
Features
Packages created with bobtemplates.plone use the current best practices for creating an add-on.
Provided templates
- addon
- theme_package
- buildout
Provided subtemplates
These templates are meant to be used inside a package which was created by the addon template.
- theme
- content_type
- vocabulary
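Subtemplates are invoked the same way as the main templates, but from inside an existing package. A hypothetical session (the package name is an example; mr.bob and bobtemplates.plone must already be installed):

```shell
# Scaffold a new add-on package first ...
mrbob bobtemplates.plone:addon -O collective.todo

# ... then run a subtemplate from inside that package,
# for example to add a Dexterity content type:
cd collective.todo
mrbob bobtemplates.plone:content_type
```

The subtemplate asks its own questions interactively and writes its files into the existing package instead of creating a new one.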
Compatibility
Add-ons created with bobtemplates.plone are tested to work in Plone 4.3.x and Plone 5. They should also work with older versions but that was not tested. It should work on Linux, Mac and Windows.
Documentation
Full documentation for end users and developers can be found in the “docs” folder.
For easy usage see: plonecli
It is also available online at
Installation
Use in a buildout
[buildout]
parts += mrbob

[mrbob]
recipe = zc.recipe.egg
eggs =
    mr.bob
    bobtemplates.plone
This creates a mrbob-executable in your bin-directory. Call it from the src-directory of your Plone project like this.
../bin/mrbob bobtemplates.plone:addon -O collective.foo
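mr.bob asks the template's questions interactively; for repeatable runs it can also read pre-set answers from a config file via its --config option. A sketch (the variable names are illustrative — the actual keys depend on the questions the chosen template asks):

```ini
; answers.ini -- hypothetical answers file for mr.bob
[variables]
author.name = Jane Doe
author.email = jane@example.com
```

and then:

../bin/mrbob --config answers.ini bobtemplates.plone:addon -O collective.foo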
Installation in a virtualenv

You can also install bobtemplates.plone in a virtualenv:

pip install bobtemplates.plone

The mrbob executable will then be available in the virtualenv's bin-directory:

mrbob bobtemplates.plone:addon -O collective.foo
See the documentation of mr.bob for further information.
Contribute
- Issue Tracker:
- Source Code:
- Documentation:
Support
If you are having issues, please let us know. We have a Gitter channel here: plone/bobtemplates.plone
Contributors
This package was originally based on bobtemplates.niteoweb and bobtemplates.ecreall
- Maik Derstappen [MrTango]
- Philip Bauer [pbauer]
- Cédric Messiant [cedricmessiant]
- Vincent Fretin [vincentfretin]
- Thomas Desvenain [thomasdesvenain]
- Domen Kožar [iElectric]
- Nejc Zupan [zupo]
- Patrick Gerken [do3cc]
- Timo Stollenwerk [timo]
- Johannes Raggam [thet]
- Sven Strack [svx]
- Héctor Velarde [hvelarde]
- Aurore Mariscal [AuroreMariscal]
- Víctor Fernández de Alba [sneridagh]
- Alexander Loechel [loechel]
Changelog
3.0.0a3 (2017-10-30)
- Fix #222 default travis setup is broken. [jensens, pbauer]
- Add template registration for mr.bob/plonecli for all provided templates [MrTango]
- Fix content_type and theme sub templates [MrTango]
- Fix in themes.rst: changed plone_addon to addon. [pigeonflight]
3.0.0a2 (2017-10-01)
- Cleanup Package - remove unnecessary files from past versions [loechel]
- Add vocabulary subtemplate [MrTango]
3.0.0a1 (2017-09-26)
- Refactoring to support subtemplates. [MrTango]
- Add theme and content_type subtemplates [MrTango]
- Add missing plone namespace, to avoid conflicts with other bobtemplate packages [MrTango]
- Removed bootstrap-buildout.py, Update barceloneta less files for theme_package [agitator]
- Fixed i18n attributes for View/Edit actions in dexterity type xml. [maurits]
- Testing of generated skeletons integrated with tox and pytest. [loechel]
2.0.0 (2017-08-28)
- Set the zope_i18n_compile_mo_files environment variable. [maurits]
- Fixed i18n attributes for View/Edit actions in dexterity type xml. [maurits]
- Update barceloneta less files to 1.7.3 for plone_theme_package [agitator]
- Removed bootstrap-buildout.py and added DEVELOP.rst [agitator]
- Separate theme template from addon template, we now have plone_addon and plone_theme_package. [MrTango]
- Update pins in the generated buildout.cfg [ale-rt]
- Change default values for code analysis’ return-status-codes directive: it is now False on development and True on CI. [hvelarde]
- Pin flake8 to latest (3.3.0) to allow use of latest pycodestyle (2.3.1) [fulv]
- Improve wording. [svx]
- Add requirements.txt and update README.txt to use it [MrTango]
- Make cleanup hook windows friendly. [gforcada]
- Move LICENSE.rst out of docs folder into top level. [gforcada]
- Get rid of the last two code violations on generated package [sneridagh]
- Comment the toolbar rule by default in backend.xml and add a comment on how to add it properly if backend.xml is used. Declaring the toolbar rule twice causes the toolbar JS stop working properly [sneridagh]
1.0.5 (2016-10-16)
- Use the same line width as the package name for the title underline (##). [AuroreMariscal]
- Get rid of travis.cfg configuration as its use is no longer considered best practice. [hvelarde]
- Update bootstrap-buildout.py to latest version. [hvelarde]
- Fix imports to follow conventions. [hvelarde]
- Avoid usage of double quotes on strings. [hvelarde]
- Avoid usage of invokeFactory. [hvelarde]
- Remove dependency on unittest2 as package is not intended to be compatible with Python 2.6. [hvelarde]
- Use selenium v2.53.6. [hvelarde]
- Use plone:static instead of browser:resourceDirectory to allow ttw-overrides. [pbauer]
- Bump flake8 version to 3.x. [gforcada]
- Update theme template, include complete working Barceloneta resources + grunt setup [MrTango]
1.0.4 (2016-07-23)
- Upgrade some pinns. [pbauer]
- Upgrade to 5.0.5 and test against that. [pbauer]
- Add i18n:attributes for action nodes in FTI profile. [thet]
- Pin versions of coverage/createcoverage [staeff]
- Default to Plone 5.0.4. [jensens]
- Validate type name input (fixes #81). [pbauer]
- Git ignore .installed.cfg and mr.developer.cfg by default. [jensens]
- isort style checks are enabled, but no config was set. i Added config according to [jensens]
- Ordered sections of generated FTI xml into semantical block and added comments for each block. [jensens]
- Bump setuptools version to 21.0.0 in buildout.cfg.bob [staeff]
- Configure buildout to install all recommended codeanalysis plugins [staeff]
1.0.3 (2016-04-13)
- Fix Plone default version (Plone 4.3.9). [timo]
1.0.2 (2016-04-13)
- Create uninstall profile also for Plone 4.3.x, since it already depends on Products.CMFQuickInstallerTool >= 3.0.9. [thet]
- Update Plone versions to 4.3.9 and 5.0.4. [thet]
- Update robot test framework versions including Selenium to work with recent firefox releases. [thet]
- Replaced import steps by post_handlers. Needs GenericSetup 1.8.2 or higher. This is included by default in Plone 4.3.8 and 5.0.3 but should be fine to use on older Plone versions. [maurits]
- Removed .* from the .gitignore file. This would ignore the .gitkeep files, which would mean some directories are not added when you do git add after generating a new project. [maurits]
- Note about disabled z3c.autoinclude in test layer setup. [thet]
- Remove the xmlns:five namespace, as it is not used at all. [thet]
- Fix build failure on Plone 4.x due to plone.app.contenttypes being pulled in and having a plone.app.locales >= 4.3.9 dependency in its depending packages. [thet]
- Declare the xml encoding for all GenericSetup profile files. Otherwise the parser has to autodetect it. Also add an xml version and encoding declaration to theme.xml. [thet]
- Add “(uninstall)” to the uninstall profile title. Otherwise it cannot be distinguished from the install profile in portal_setup. [thet]
- Simplify concatenation of .rst files for setup.py. [thet]
- Update .gitignores in repository to exclude lib64, pip-selfcheck.json and all .* except necessary. Update .gitignore.bob in templates with these changes too. Add .gitattributes in repository for union-merge CHANGES.rst files. [thet]
- Update docs and README [svx]
1.0.1 (2015-12-12)
- Register locales directory before loading dependencies to avoid issues when overriding translations. [hvelarde]
1.0 (2015-10-02)
- Upgrade to Plone 4.3.7 and 5.0. [timo]
- Avoid pyflakes warnings for long package names. [maurits]
1.0b1 (2015-09-17)
- Always start with 1.0a1. No more 0.x releases please. [timo]
- Use Plone minor version for setup.py classifier. So 4.3 instead of 4.3.6. [maurits]
- Enabled robot part in generated package. [maurits]
- Add dependency on plone.testing 5.0.0. Despite the major version number, this change does not contain breaking changes. [do3cc]
- Fix #84 Make travis cache the egg directory of the generated package. [jensens]
- Update tests to use Plone 5.0b3. [jensens]
- Remove unittest2 dependency. [gforcada]
0.11 (2015-07-24)
- Fix update.sh [pbauer]
- Add i18ndude to buildout [pbauer]
- Fix package-creation on Windows. Fixes #72. [pbauer]
- Add packagename to licence. [pbauer]
- Add uninstall-profile for Plone 5. [pbauer]
- Fix indentation to follow the conventions of plone.api. [pbauer]
- Move badges from pypin to shields.io. [timo]
- Fix coverage on travis template. [gil-cano]
- Enable code analysis on travis and fail if the code does not pass. [gforcada]
0.10 (2015-06-15)
- Add check-readme script that detects Restructured Text issues. [timo]
- Use only version up to minor version in setup.py of package #56. [tomgross]
- Use class method to load ZCML in tests. [tomgross]
- Upgrade default Plone version to 4.3.6. [timo]
- Add zest.releaser to package buildout. [timo]
- Update README according to Plone docs best practice. [do3cc, timo]
- Add flake8-extensions to code-analysis. [timo]
- Upgrade Selenium to 2.46.0. [timo, pbauer]
- Don’t create a type-schema unless it is needed. [pbauer]
0.9 (2015-03-24)
- Add Theme package type with simple bootstrap-based theme. [timo]
- Add Dexterity package type. [timo]
- Remove example view. [timo]
- Remove question for keywords. [timo]
- Remove question for locales. [timo]
- Remove questions for version and license. [timo]
- Remove questions for profile, setuphandler, and testing. [timo]
- Unify buildout configuration in buildout.cfg [timo]
- Fix bootstrap command in travis.yml. [timo]
0.8 (2015-02-06)
- Add includeDependencies. This fixes #23. [timo]
0.7 (2015-02-05)
- Use latest buildout-bootstrap.py. [timo]
- Fix failing nosetests. [timo]
- Add test that creates an add_on and runs all its tests and code analysis. [timo]
- Run tests on travis. [timo]
- Run code analysis on travis. Build fails on PEP8 violations. [timo]
- Add code analysis. [timo]
- Remove z2.InstallProducts. Not needed any longer. [timo]
- Use testing best practices and follow common naming conventions. [timo]
- Remove testing profile. Global testing state is considered an anti-pattern. [timo]
- Add example robot test. [timo]
- Add travis and pypip.in badges. [timo]
- Run code analysis on the generated addon as well within the tests to make sure we always ship 100% PEP8 compliant code. [timo]
- Add REMOTE_LIBRARY_BUNDLE_FIXTURE to acceptance test fixture. [timo]
0.6 (2015-01-17)
- Use PLONE_APP_CONTENTTYPES_FIXTURE for tests on when using Plone 5. [pbauer]
0.5 (2015-01-17)
- Remove useless base-classes for tests. Use ‘layer = xxx’ instead. [pbauer]
- Fix some minor code-analysis issues. [pbauer]
- Added .editorconfig file. [ale-rt]
0.4 (2014-12-08)
- Remove grok. [pbauer]
- Fix missed removals when testing was deselected. [pbauer]
- Only use jbot when there is a profile and a browser layer. [pbauer]
- Get username and email from git. [do3cc]
0.3 (2014-12-07)
- Pin robotframework to 2.8.4 to fix package tests. [pbauer]
- Add browserlayer to demoview to allow multiple addons. [pbauer]
- Fix creation of nested packages (wrong __init__.py). [pbauer]
0.2 (2014-12-07)
- Fix documentation [pbauer]
0.1 (2014-12-07)
- Get namespace, name and type from target-dir. [pbauer]
- Remove obsolete plone_addon_nested. Auto-nest package in after-render hook. [pbauer]
- Add many new features. Most of them are optional. [pbauer]
- Initial import based on bobtemplates.ecreall by cedricmessiant, vincentfretin and thomasdesvenain. [pbauer]
- Author: Plone Foundation
- Keywords: web plone zope skeleton project
- License: GPL version 2
# Dagaz: A new Beginning
***The wind goeth toward the south, and turneth about unto the north; it whirleth about continually, and the wind returneth again according to his circuits. All the rivers run into the sea, yet the sea is not full; unto the place from whence the rivers come, thither they return again.***
*[The book of Ecclesiastes](https://www.biblegateway.com/passage/?search=Ecclesiastes+1)*
In 1998, an application appeared that was completely unique for its time: it reduced the development of an abstract board game (or puzzle) to writing a short description in a text language vaguely reminiscent of [Lisp](https://en.wikipedia.org/wiki/Lisp_(programming_language)). The project was called [Zillions of Games](http://www.zillions-of-games.com/), and it created a furor among board game fans. To date, over 2,000 games have been created with this technology.
It quickly became clear that ZoG has many shortcomings. I have already [written](https://habr.com/ru/post/221779/) about this on Habr and will not repeat myself here. Suffice it to say that the developers did not take into account the features of a huge number of existing games, and some important options were hard-coded, so changing them became extremely problematic. Greg Schmidt tried to rectify the situation in 2007 by releasing the [Axiom Development Kit](http://www.zillions-of-games.com/cgi-bin/zilligames/submissions.cgi?do=show;id=1452), but its tight integration with ZoG does not allow all the problems to be solved.
The [Ludi](http://cambolbro.com/cv/publications/ciaig-browne-maire-19.pdf) project pointed to new frontiers, using a universal game "engine" and [genetic algorithms](https://en.wikipedia.org/wiki/Genetic_algorithm) to automate the development of new board games. Unfortunately, this approach deliberately simplified both the game mechanics and the level of the AI employed. Discussing the goals of that project is beyond the scope of this article, but some of its technical solutions undoubtedly served as a starting point for my own development.
My goal is to develop a more versatile and user-friendly "engine" for creating abstract board games. For almost a year I studied the capabilities of ZoG and Axiom and learned a great deal about their limitations. I believe I can solve their problems with a more universal, cross-platform solution, and I will report here on the progress of the work.
Openness and modularity
-----------------------
Perhaps the main drawback of ZoG is its closed nature. The product was built "once and for all" for a single platform: Windows. Were the source open, one could try porting it to Linux, Android, iOS… Another problem is its monolithic design.
ZoG does contain the beginnings of modularity, allowing games to be connected as DLLs, including custom AI implementations. Axiom goes a little further, allowing applications to run in autoplay mode without the ZoG kernel at all. Notwithstanding the serious limitation of this solution (only two-player games are supported), the example shows how helpful modularity can be! The opportunity to run a game between two bots (with different AI settings) and collect statistics over a large number of games cannot be overestimated. But how much better it would be if the product were fully modular!
* Move generation module
* Move execution module
* Control module
* AI module
* Visualization module
All work of interpreting the game description is performed by the move generation module: this is the "heart" of the project. Moving every task not related to this function into other modules makes it as simple as possible. You can improve this module without touching AI issues or user interaction; you can completely change the game description format, or add support for descriptions in the ZoG, Axiom, or Ludi formats. Modularity is the basis of the solution's flexibility!
The move execution module is the custodian of the game state; information about the current state is handed to all other modules on demand. For reasons I will give below, move execution must pass through the generation module, whose task is to translate the move into commands understood by the execution module. The move generation module is also responsible for the initial configuration of the game space, based on the game description.
The control module is, in fact, the application itself. It asks the move generation module for the list of possible moves and changes the game state by passing the selected move to the move execution module. One or more AI bots can be attached to the control module: as many as needed (and possibly different ones)! This division of tasks allows different kinds of control modules: autoplay for collecting game statistics, a game server (controlling several state stores and conducting a large number of game sessions), or a standalone application for playing offline.
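To make the division of responsibilities concrete, here is a minimal sketch in Python of how such modules could interact. All class and method names here are my own inventions, not part of any existing project: the controller knows no rules, the generator knows no AI, and the executor only snapshots state.

```python
# A sketch (hypothetical names) of the proposed module split: the controller
# asks the generator for moves, a bot picks one, the executor updates state.
import random

class MoveGenerator:
    """Move generation module: knows the rules, produces legal moves."""
    def generate(self, state):
        # Toy rules for illustration: "pass" plus heap-reduction moves.
        return ["pass"] + [f"take-{n}" for n in range(1, state["heap"] % 3 + 1)]

class MoveExecutor:
    """Move execution module: the custodian of the game state."""
    def apply(self, state, move):
        new_state = dict(state)              # states are immutable snapshots
        if move.startswith("take-"):
            new_state["heap"] -= int(move.split("-")[1])
        new_state["player"] = 1 - state["player"]
        return new_state

class RandomBot:
    """A pluggable AI module; any number of these can be attached."""
    def choose(self, state, moves):
        return random.choice(moves)

def play(state, bots, generator, executor, turns=5):
    """Control module: drives the game without knowing any rules itself."""
    for _ in range(turns):
        moves = generator.generate(state)
        move = bots[state["player"]].choose(state, moves)
        state = executor.apply(state, move)
    return state

final = play({"heap": 10, "player": 0}, [RandomBot(), RandomBot()],
             MoveGenerator(), MoveExecutor())
print(0 <= final["heap"] <= 10)
```

Because the bots are constructed by the caller, nothing stops a controller from attaching two different AI implementations and playing them against each other, which is exactly the statistics-gathering scenario described above.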
The ability to plug in different AI implementations will improve the quality of play. It is clear that modules for chess and for Go should use different approaches; games with incomplete information, and games using randomness, also require individual treatment. A universal AI implementation will play all games equally badly! Pluggable AI also makes it possible to compare the "strength" of algorithms, including by playing them against each other. And since the AI architecture is separated from game state storage, a single instance of a game bot can serve an unlimited number of game sessions simultaneously.
Visualization of the game process may also vary. The first things that come to mind are 2D and 3D implementations, and the target platform matters as well. Less obvious is that visualization may be an essential part of the game itself! In [Surakarta](https://glukkazan.github.io/elimination/surakarta.htm), for example, captures would be completely non-obvious without proper animation of the moves.
[](https://glukkazan.github.io/elimination/surakarta.htm)
In general, modularity seems a good idea for such a project, and open source will let everyone who wishes participate in it. At present I am not pursuing commercial goals, but I think that, should I want to, I will find a way to earn money without closing the source code.
The game space
--------------
Before starting the show, you need to set the stage. The board is not just a place where pieces are arranged: in addition, one can define directions of piece movement (in essence, the connections between board positions), game zones (e.g., promotion zones), forbidden cells, and so on. Here is how the chess board definition looks in the ZoG implementation:
**Defining the board in ZoG**
```
(define Board-Definitions
(image "images\Chess\SHaag\Chess8x8.bmp" "images\Chess\Chess8x8.bmp")
(grid
(start-rectangle 5 5 53 53)
(dimensions
("a/b/c/d/e/f/g/h" (49 0)) ; files
("8/7/6/5/4/3/2/1" (0 49)) ; ranks
)
(directions (n 0 -1) (e 1 0) (s 0 1) (w -1 0)
(ne 1 -1) (nw -1 -1) (se 1 1) (sw -1 1)
)
)
(symmetry Black (n s)(s n) (nw sw)(sw nw) (ne se)(se ne))
(zone
(name promotion-zone)
(players White)
(positions a8 b8 c8 d8 e8 f8 g8 h8)
)
(zone
(name promotion-zone)
(players Black)
(positions a1 b1 c1 d1 e1 f1 g1 h1)
)
(zone
(name third-rank)
(players White)
(positions a3 b3 c3 d3 e3 f3 g3 h3)
)
(zone
(name third-rank)
(players Black)
(positions a6 b6 c6 d6 e6 f6 g6 h6)
)
)
```
You may notice that besides the game settings proper, there are settings associated with visualization. I am firmly convinced that these settings do not belong here: different implementations of the visualization module may require entirely different settings. Moreover, a game simulation can work without any visualization module at all (like autoplay in Axiom). Indeed, since Axiom relies on ZoG for its visualization, its own board definition contains nothing superfluous:
**Defining the board in Axiom**
```
{board
8 8 {grid}
board}
{directions
-1 0 {direction} n
1 0 {direction} s
0 1 {direction} e
0 -1 {direction} w
-1 -1 {direction} nw
1 -1 {direction} sw
-1 1 {direction} ne
1 1 {direction} se
directions}
{symmetries
Black {symmetry} n s
Black {symmetry} nw sw
Black {symmetry} ne se
symmetries}
```
Unfortunately, Axiom also has no way to define game zones (the positions belonging to a zone must be handled manually in code). Nor is this Axiom's only simplification: a board definition cannot contain more than one grid, and that grid must be two-dimensional. The board thus defined is a one-dimensional array, but for the programmer's convenience a synonym is defined for each of its cells, as follows:

Compared with ZoG's more flexible grid definition scheme, these restrictions are quite uncomfortable (especially since the imposed naming scheme is then used precisely for visualization). Fortunately, it is possible to define a board of arbitrary shape: both ZoG and Axiom allow each board position to be defined individually, along with links between arbitrary pairs of positions. Using this approach, a board of any topology can be defined. Its only drawback is the extreme verbosity of the description.
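Such an element-wise definition reduces to a small data structure: named positions with named directed links between them, plus sets of positions for zones. A sketch in Python (all names here are mine, not taken from any of the projects discussed):

```python
# Hypothetical sketch: a board as named positions plus named directional links.
# Any topology (hex grids, Surakarta loops, 3D stacks) fits this model.
class Board:
    def __init__(self):
        self.links = {}        # (position, direction) -> position
        self.zones = {}        # (zone name, player) -> set of positions

    def link(self, src, direction, dst):
        self.links[(src, direction)] = dst

    def nav(self, pos, direction):
        """Follow a direction from a position; None if it leads off the board."""
        return self.links.get((pos, direction))

board = Board()
# An ordinary 3x3 fragment, wired explicitly like ZoG's element-wise definition:
for col in "abc":
    for row in "123":
        pos = col + row
        if row < "3":
            board.link(pos, "n", col + chr(ord(row) + 1))
        if col < "c":
            board.link(pos, "e", chr(ord(col) + 1) + row)
# A zone, analogous to ZoG's (zone ...) blocks:
board.zones[("promotion-zone", "White")] = {"a3", "b3", "c3"}

print(board.nav("a1", "n"))   # a2
print(board.nav("c3", "e"))   # None: edge of the board
```

A regular grid then becomes mere sugar over this graph, while irregular boards are wired link by link, which is exactly the trade-off between brevity and generality described above.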
In addition to the placement of pieces on the board and in reserve, the system should be able to store attributes, both for individual pieces and for board cells. A good example of the need for piece attributes is the rule of "[castling](https://en.wikipedia.org/wiki/Castling)" in [chess](https://en.wikipedia.org/wiki/Chess): a compound move, consisting of the simultaneous movement of the king and a rook, that is allowed only if neither of these pieces has moved before. An attribute can store a boolean tag recording whether a piece has ever moved. Cell attributes can also find interesting applications.
It should be noted that attributes are not just variables but part of the game state. An attribute value may be changed by the execution of a move (including by the AI module) and must be visible to all subsequent moves, but not to moves performed in another branch of the game. Currently, ZoG supports storing boolean attributes of pieces. Axiom does not support attributes at all, but you can add variable and array declarations to the board definition; such variables can be used, for example, as counters of captured pieces:
```
{board
5 18 {grid}
{variable} WhitePieces
{variable} BlackPieces
board}
```
Yet another limitation of ZoG and Axiom is the rule that each board position can hold at most one piece. If a piece completes its move on a position occupied by another piece, the piece previously occupying that position is automatically considered captured. This rule fits the "chess" principle of capture well and simplifies the description of such games, but it complicates the implementation of games such as "[bashni checkers](http://www.iggamecenter.com/info/en/bashni.html)" and "[tavreli](http://www.iggamecenter.com/info/en/tavreli.html)".

In these games, pieces can be arranged into "columns", and such a column can move as a whole, like a single piece. After some reflection, I decided it was better not to abandon the automatic "chess" capture, but rather to improve the mechanisms for moving groups of pieces. Indeed, to implement "columns" one can always add another dimension to the board (this is especially easy since the visualization module is separated from the move generation module and the AI, so any logic whatsoever can be used to render a three-dimensional board in two dimensions). An additional argument in favor of this decision is that stacked movement is not the only kind of group movement: in "[Pentago](https://s3-eu-west-1.amazonaws.com/mosigra.product.other/522/067/pentago.pdf)", for example, fragments of the board rotate together with the pieces standing on them.

Summarizing, I can say that, for my game framework, I decided to take all the best that has been thought up in ZoG, Axiom, and Ludi, and add whatever, in my opinion, they lack.
Move generation
---------------
Move generation is akin to [non-deterministic programming](https://en.wikipedia.org/wiki/Nondeterministic_programming). The task of the move generator is to provide, on request, a list of all moves possible from the current position. Which move from this list the player or AI selects is not its concern. Let's see how move generation is done in ZoG, taking as an example the macro that generates moves for a long-range piece (a queen or a bishop). This is how it is used in the definition of such a piece:
```
(piece
(name Bishop)
(image White "images\Chess\SHaag\wbishop.bmp" "images\Chess\wbishop.bmp"
Black "images\Chess\SHaag\bbishop.bmp" "images\Chess\bbishop.bmp")
(moves
(slide ne)
(slide nw)
(slide se)
(slide sw)
)
)
```
The macro takes, as a parameter, the direction of movement on the board. If we do not consider the possibility of dropping new pieces onto the board, move generation looks simple: for each piece on the board, all moves possible under the rules are calculated. Then the magic begins…
Each of these definitions can add several possible moves to the list! A move is added to the list by the add command (which at the same time fixes the positions of the moving pieces on the board). I have already [written](https://habr.com/ru/post/221779/) about why this architectural decision is extremely poor: the command that forms the move should be separated from the commands that manipulate pieces (as was done in Axiom). Let's see how the macro works:
```
(define slide
( $1
(while empty?
add
$1
)
(verify not-friend?)
add
))
```
First, a displacement by one cell is performed in the given direction; then, in a loop, the reached cell is checked to be empty, a move is generated, and the traversal continues by one more cell in the same direction. If we stopped here, the piece would "slide" through empty cells, but how do we capture enemy pieces?
Very simple! Having checked with the verify command that the cell is not occupied by a friendly piece, we generate one more add command, completing the move. If an enemy piece stood on this cell, it is captured automatically (since one cell of the board cannot hold more than one piece at a time). If the piece was friendly, calculation of the move is cut off by the verify command (violating the condition specified in this command immediately terminates calculation of the current move).
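The same non-deterministic rule can be expressed as an ordinary generator function. A sketch in Python, where the board model and all helper names (`nav`, the `pieces` map) are my own assumptions standing in for the real engine:

```python
# A sketch of the "slide" rule as a generator of candidate moves.
# Each yielded pair (start, stop) is one move added to the list.
def slide(board, pieces, start, direction, owner):
    """Yield moves for a rider: through empty cells, capture on an enemy."""
    pos = board.nav(start, direction)             # step once in the direction
    while pos is not None and pos not in pieces:  # empty cells: keep sliding
        yield (start, pos)
        pos = board.nav(pos, direction)
    if pos is not None and pieces[pos] != owner:  # stop on an enemy: a capture
        yield (start, pos)

# A tiny 1-D board: a1 - b1 - c1 - d1
class Line:
    order = ["a1", "b1", "c1", "d1"]
    def nav(self, pos, direction):
        i = self.order.index(pos) + (1 if direction == "e" else -1)
        return self.order[i] if 0 <= i < len(self.order) else None

pieces = {"a1": "White", "d1": "Black"}           # position -> owning side
print(list(slide(Line(), pieces, "a1", "e", "White")))
# [('a1', 'b1'), ('a1', 'c1'), ('a1', 'd1')]: two quiet moves and a capture
```

The `while` loop mirrors ZoG's `(while empty? add $1)`, and the final check mirrors `(verify not-friend?) add`; the difference is that yielding a move and repositioning a piece are no longer the same operation.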
In both ZoG and Axiom you can move only your own pieces (or rather, moving the opponent's pieces is possible, but only within the calculation of a move of one of your own pieces). I find this an extremely inconvenient restriction, because there are many games in which you move the opponent's pieces directly (in "[Stavropol checkers](https://glukkazan.github.io/checkers/stavropol-checkers.htm)", for example). It would be more consistent to calculate moves for all pieces regardless of their affiliation. Then, in the macro defining an ordinary move, only one check would need to be added to restrict movement to one's own pieces:
```
(define slide
( (verify friend?)
$1
(while empty?
add
$1
)
(verify not-friend?)
add
))
```
The ability to perform a move consisting of several "partial" moves is important. In checkers implementations, this ability is used for "chain" captures:
```
(define checker-jump
($1 (verify enemy?)
capture
$1
(verify empty?)
(if (not-in-zone? promotion-zone)
(add-partial jumptype)
else
(add-partial King jumptype)
)
)
)
```
A partial move is generated with the add-partial command (this command, like add, has a variant that "promotes" the piece). As a rule, a "mode" is set for the subsequent moves, which the continuation must match: in checkers, a capture can be continued only by further captures, not by a quiet (non-capturing) move.
**Note**
In ZoG, the implementation of partial moves is poor: trying to execute the add-partial command in a loop causes an error. As a result, the capture performed by a checkers king can only be implemented in the following very awkward manner:
```
(define king-jump-1
($1 (while empty?
$1
)
(verify enemy?)
capture
$1
(verify empty?)
(add-partial jumptype)
)
)
(define king-jump-2
($1 (while empty?
$1
)
(verify enemy?)
capture
$1
(verify empty?)
$1
(verify empty?)
(add-partial jumptype)
)
)
```
And so on, up to king-jump-7! Recall that in most checkers variants with a "flying" king, the king, after each capture, may stop on any cell of the continuous chain of empty cells behind the captured piece. There is, incidentally, one variant of the game in which the "chain" capture rule is formulated differently. That is what I like about checkers: everyone can find a variant to their liking.
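The rule that those seven macros duplicate can be written once as a recursion. A sketch in Python (the board model, bounds, and names are my own, not ZoG code); for brevity it shows a short-jumping checker rather than a flying king, but extending the landing step into a loop gives the king rule. Note that the `captured` set plays exactly the role of flags that prevent re-capturing a piece within one move:

```python
# A sketch of chain captures written once, recursively, instead of being
# duplicated per chain length. Cells are (x, y) pairs; a "partial" move jumps
# over an adjacent enemy piece onto the empty cell behind it.
def jumps(pos, pieces, owner, captured=frozenset()):
    """Yield complete capture chains as lists of visited cells."""
    extended = False
    for dx, dy in ((1, 1), (-1, 1), (1, -1), (-1, -1)):
        over = (pos[0] + dx, pos[1] + dy)
        land = (pos[0] + 2 * dx, pos[1] + 2 * dy)
        if (pieces.get(over) not in (None, owner) and over not in captured
                and land not in pieces and max(map(abs, land)) <= 4):
            extended = True               # bound <= 4 keeps us on a toy board
            for tail in jumps(land, pieces, owner, captured | {over}):
                yield [pos] + tail
    if not extended and captured:         # chain cannot continue: emit it
        yield [pos]

pieces = {(0, 0): "W", (1, 1): "B", (1, 3): "B"}
for chain in jumps((0, 0), pieces, "W"):
    print(chain)                          # [(0, 0), (2, 2), (0, 4)]
```

Each recursion level corresponds to one add-partial; the engine only needs to be able to call the rule again from the landing cell, which is precisely what ZoG's loop restriction forbids.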
Such a system of describing the rules is very flexible, but sometimes more complex logic is required. For example, if during a "partial" move a piece must not pass through a previously visited cell, it is natural to use flags associated with board positions. Having visited a cell, we set its flag so as not to enter it again:
```
(verify (not-position-flag? my-flag))
(set-position-flag my-flag true)
```
In addition to "positional" flags, ZoG offers global flags. These capabilities should not be confused with piece attributes: unlike attributes, flags are not part of the game state. Unfortunately, both piece attributes and flags in ZoG can only be boolean (in Axiom, attributes are not supported at all). This limitation makes any kind of counting difficult. For example, in [this](http://zillions-of-games.com/cgi-bin/zilligames/submissions.cgi?do=show;id=2233) little puzzle I had to use a pair of boolean flags to "count" the pieces caught in a "fork" (I did not need the exact number, only whether there was more than one).
Another thing to fix is the lack of a clear "life cycle" in the execution of a move. All flags are automatically reset before the move calculation starts, but it would be cleaner to have an explicit initialization phase. In my opinion, move calculation should pass through the following phases:
1. Initialization of variables and checking preconditions for the composite move
2. Initialization of variables and checking preconditions for the partial move
3. Generation of the partial move
4. Checking postconditions of the partial move
5. Generating, completing, and checking postconditions of the composite move
6. Checking for the termination conditions of the game
The group of steps from the second to the fourth may repeat many times within a full composite move. The idea of pre- and post-conditions, which I call invariants, I took from the Ludi project; I will say more about the use of invariants later.
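These phases can be made explicit as hooks around a worklist of partial moves. A sketch in Python with invented hook names and a deliberately trivial toy rule (none of this is existing ZoG or Axiom API):

```python
# A sketch of the proposed move life cycle. Each phase is a callback on a
# rule object; phases 2-4 repeat while the rule keeps producing partial moves.
def calc_moves(state, rule):
    moves = []
    if not rule.pre_composite(state):              # phase 1
        return moves
    stack = [(state, [])]                          # (state, partials so far)
    while stack:
        st, partials = stack.pop()
        if not rule.pre_partial(st):               # phase 2
            continue
        for partial, nxt in rule.generate(st):     # phase 3
            if not rule.post_partial(nxt):         # phase 4
                continue
            done = partials + [partial]
            if rule.post_composite(nxt):           # phase 5
                moves.append(done)
            if rule.may_continue(nxt):             # chain more partial moves?
                stack.append((nxt, done))
    return moves                                   # phase 6 is checked elsewhere

class CountdownRule:
    """Toy rule: each partial move decrements a counter, stopping at 1."""
    def pre_composite(self, s): return s > 0
    def pre_partial(self, s): return True
    def generate(self, s): return [(f"dec->{s - 1}", s - 1)]
    def post_partial(self, s): return s >= 0
    def post_composite(self, s): return True
    def may_continue(self, s): return s > 1

print(calc_moves(3, CountdownRule()))
# [['dec->2'], ['dec->2', 'dec->1']]
```

Each invariant is just one of these callbacks; a checkers "must continue capturing" mode, for instance, would live in `may_continue` and `post_composite`.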
On the importance of notation
-----------------------------
Generating all possible moves from a position is only half the story. Managing the game state requires a compact representation of the generated moves. ZoG uses ZSG notation for this purpose. Here is a possible opening of a chess game recorded in this form:
```
1. Pawn e2 - e4
1. Pawn e7 - e5
2. Knight g1 - f3
2. Knight b8 - c6
3. Bishop f1 - c4
3. Knight g8 - f6
4. King e1 - g1 Rook h1 - f1 @ f1 0 0 @ g1 0 0
4. Pawn d7 - d5
5. Pawn e4 x d5
5. Knight f6 x d5
```
This record is close to ordinary [chess notation](https://en.wikipedia.org/wiki/Chess_notation) and generally readable. Only White's fourth move may cause confusion: this is how [castling](https://en.wikipedia.org/wiki/Castling) looks in ZSG. The part of the record before the '@' characters is clear enough (the simultaneous movement of the king and rook), but what follows? This is how ZSG records the resetting of piece attributes, required to rule out a second castling.
**Note**
ZoG uses its ZSG notation, in particular, to show the course of the game in a form understandable to the player. To the right of the board, a "Moves List" subwindow can be opened and used to navigate through the recorded game. The list is not very convenient, since a branching tree view of alternative lines is not supported. The part of a recorded move associated with changes of piece attributes is not displayed to the user.
A move recorded in ZSG notation must contain enough information to change the game state correctly. If the information about attribute changes were lost, replaying the game from such a record could go wrong (for example, a player would again have the opportunity to castle). Unfortunately, DLL extensions (such as Axiom) cannot transmit this extended information.
When working with DLL extensions, ZoG is therefore forced into quite cunning manipulations when positioning to a selected move (for example, when rolling back a move). Starting from the beginning of the game, all possible moves are generated from each successive position; in that list, the move whose ZSG representation matches the record is found; and it is this generated move, not the record itself, that is applied to the game state, since a move may carry side effects not reflected in its ZSG representation.
The situation is aggravated by the fact that the only way to reach the game state at some past move is to apply all moves from the beginning of the game, in order, to the initial board state. In really [complex cases](https://habr.com/ru/post/234587/), such navigation is far from fast. Another disadvantage of ZSG notation can be illustrated by the record of the following move in the game of [Go](https://en.wikipedia.org/wiki/Go_(game)):
```
1. White Stone G19 x A19 x B19 x C19 x D19 x E19 x F19
```
Here a white stone placed at G19 captures a group of black stones. Since every piece involved in the placement must be mentioned in the ZSG representation, the record of a single turn can become very long (in Go, one placement can capture up to 360 stones). I wrote [earlier](https://habr.com/ru/post/235483/) about where this leads: the buffer ZoG allocates for recording a move may not suffice. Moreover, if for some reason the order in which stones are removed changes (this does happen as a game description evolves), an attempt to apply a move recorded with the old capture order will fail.
Fortunately, there is a simple way to deal with all these problems at once. Let's look at how piece moves are defined in ZRF:
```
(piece
(name Pawn)
(image White "images\Chess\SHaag\wpawn.bmp" "images\Chess\wpawn.bmp"
Black "images\Chess\SHaag\bpawn.bmp" "images\Chess\bpawn.bmp")
(moves
(Pawn-capture nw)
(Pawn-capture ne)
(Pawn-move)
(En-Passant e)
(En-Passant w)
)
)
```
The names of moves defined through ZoG macros are not accessible to the move generator. But what prevents us from abandoning macros and giving the move descriptions names of their own? Here is how the record of a chess game would look:
```
1. e2 - e4 Pawn-move
1. e7 - e5 Pawn-move
2. g1 - f3 leap2 n nw
2. b8 - c6 leap2 n ne
3. f1 - c4 slide nw
3. g8 - f6 leap2 n nw
4. e1 - g1 O-O
4. d7 - d5 Pawn-move
5. e4 x d5 Pawn-capture nw
5. f6 x d5 leap2 w nw
```
**Note**
Astute readers may notice that in the moves for Black I used directions that do not correspond to the actual directions on the chessboard. This is because "symmetries" are defined for Black:
```
(symmetry Black (n s)(s n) (nw sw)(sw nw) (ne se)(se ne))
```
Roughly speaking, then, what for white is “north”, for black is “south”, and vice versa.
The benefit of such a record may not be obvious, but it has one important advantage: all moves are described uniformly, and the descriptions contain nothing superfluous (the names of the move descriptions could, of course, be made more descriptive). In the description of castling we got rid of both the attribute changes and the rook's movement (the record no longer depends on the implementation details of the move). The usefulness of such a record is even clearer in the case of Go:
```
1. G19 drop-to-empty White Stone
```
And that's it! If the opponent's stones are captured in accordance with the rules of the game, there is no need to list them all in the move description. It is sufficient to give the start and end positions of the displacement (possibly with a capture sign), the name of the move being executed, and the string of parameters passed to it. Of course, decoding such a description into an executable move requires access to the move generation module, but, as we saw, ZoG already works this way!
Another feature that must be supported is "partial" moves. Here is an example from "[Russian checkers](https://glukkazan.github.io/checkers/russian-checkers.htm)":
```
1. Checker g3 - f4
1. Checker f6 - g5
2. Checker e3 - d4
2. partial 2 Checker g5 - e3 = XChecker on f4
2. Checker e3 - c5 = XChecker on d4 x d4 x f4
```
Here Black, on the second move, captures two pieces, on d4 and f4. The preliminary "transformation" of these pieces into XChecker is a feature of this implementation; it prevents already-captured pieces from being captured again within the same move. The phrase "partial 2" marks the beginning of a "composite" move consisting of two "partial" moves. This form of description is inconvenient, because the length of the sequence of partial moves may not yet be known when its first move is generated. Here is how the same description looks in the new format:
```
1. g3 - f4 checker-shift nw
1. f6 - g5 checker-shift ne
2. e3 - d4 checker-shift nw
2. + g5 - e3 checker-jump nw
2. + e3 - c5 checker-jump sw
2. +
```
Implementation details related to the "transformation" of pieces no longer appear. Captures are not spelled out either, since in checkers a capture is a "side effect" of the piece's movement rather than following the "chess principle". A partial move is encoded with a '+' at the beginning of the line; a lone '+' marks the completion of a composite move (in fact, it is an ordinary partial move whose content is an empty string).
Thus, using named rules for the implementation of moves, one has managed to create a universal notation, fully satisfying our requirements. Of course, it has nothing to do with either the standard chess or with any other notation, but it just so happens that the conventional notation for chess, checkers and other games, too, have nothing to do with each other. The visualization module can always convert the move record into a more familiar form accepted for a particular game. Conversion can also be into some universal form, like [SGF (Smart Game Format)](https://en.wikipedia.org/wiki/Smart_Game_Format).
The life cycle of the game
--------------------------
In addition to information about the placement of pieces on the board, the sequence of turns is a component part of the game state, a variable of the game process. In the simplest (and most common) case, one bit suffices to store this information, but ZoG provides a few more capabilities for implementing more complex cases. Here is how a description of a sequence of moves could look for the game [Splut!](http://www.iggamecenter.com/info/en/splut.html):
```
(players South West North East)
(turn-order
South
West West
repeat
North North North
East East East
South South South
West West West
)
```
In this game, each player makes three moves at a time, but if the first player were given the opportunity to make three moves from the initial position, he would be able to destroy one of the opponents' pieces, which would give him a significant advantage. For this reason, the first player makes only one move (enough to prepare an attack on an opponent, but not to carry it out), the second makes two moves (also not enough for an attack), after which every player always makes three moves.

The label repeat indicates the beginning of a cyclically repeating sequence of moves. If it is absent, the entire description is repeated cyclically. ZoG does not allow the label repeat to be used more than once. Another important capability is specifying the type of each move in the turn order. Here's how a description of the sequence of turns might look for a game in which each player performs two moves (the first — moving a piece, the second — capturing an opponent's piece):
```
(players White Black)
(turn-order
(White normal-move)
(White capture-move)
(Black normal-move)
(Black capture-move)
)
```
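Such cyclic descriptions are easy to model. The sketch below (plain Python, with names of my own choosing) plays the part of the list before `repeat` once and then loops the rest, reproducing the Splut! sequence:

```python
import itertools

def turn_order(entries, repeat_from=0):
    """Yield (player, mode) pairs forever: everything before index
    `repeat_from` is played once, the rest loops endlessly, which is
    how I read ZoG's `repeat` label."""
    for entry in entries[:repeat_from]:
        yield entry
    while True:
        for entry in entries[repeat_from:]:
            yield entry

# Splut!: one move, then two, then three moves per player forever.
splut = ([("South", None)] + [("West", None)] * 2 +
         [("North", None)] * 3 + [("East", None)] * 3 +
         [("South", None)] * 3 + [("West", None)] * 3)
seq = list(itertools.islice(turn_order(splut, repeat_from=3), 16))
```

The same generator covers the two-move-per-turn example: each entry simply carries its mode, such as `("White", "normal-move")`.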
There is one more capability, associated with describing moves of another player's pieces, but it is very inconvenient to use. The problem is that such a description allows no alternative: if the description states that the move must be made by an enemy piece, the player must perform such a move! In ZoG it is impossible to describe a choice between moving one's own piece or another player's. If such a capability is needed in a game (as in “[Stavropol Checkers](https://glukkazan.github.io/checkers/stavropol-checkers.htm)”), it is necessary to make all the pieces neutral (creating for this purpose a player who does not participate in the game) and give all players the ability to move a neutral piece. I have said above that it is much easier to allow, by default, all players to move any pieces (their own as well as their opponents'), adding the necessary checks in the move-generation algorithms.
As you can see, the range of options ZoG provides for describing the sequence of turns is extremely limited. Axiom also fails to add new features here, because it (usually) runs on top of ZoG. Ludi, in this respect, is even poorer. In order to unify the rules of games as much as possible (a requirement for using generic algorithms), all descriptive capabilities in that project were deliberately simplified, which shut out whole layers of games.

“[Bao Swahili](https://en.wikipedia.org/wiki/Bao_(game))” is a good example of a game with a complex life cycle. In this game, there are two phases with significantly different rules of move execution. At the beginning of the game, part of the stones is “in the hand” of each player. While there are still stones “in hand”, they are put into the wells, one stone at a time. When the stones “in hand” run out, the second phase of the game begins, with redistribution of the placed stones. One cannot say that this game cannot be described in ZRF (the description language of ZoG), but because of the limitations of ZoG, such an implementation would be extremely confusing (which certainly does not help the quality of the AI). Let's see how the description of such a game would look in an “ideal world”:
```
(players South North)
(turn-order
(turn-order
(South p-i-move)
(North p-i-move)
)
(label phase-ii)
(turn-order
(South p-ii-move)
(North p-ii-move)
)
)
```
Here, each turn-order list determines its own repeating sequence of moves (distinguished by the mode of move execution). The keyword label defines a label to which a transition can be made during the generation of the last move. You may notice that we proceed here from the implicit assumption that such a transition always occurs after the move of the second player (otherwise the sequence of moves would be violated). How can the transition to the next phase be made at an arbitrary moment?
```
(players South North)
(turn-order
(turn-order
(South p-i-move)
(North p-i-move)
)
(turn-order
(labels - phase-ii)
(South p-ii-move)
(labels phase-ii -)
(North p-ii-move)
)
)
```
Here, the labels are carried into the loop body, and each labels list contains two names. The names appear in the same order as the players in the players list. Which name is used for the transition is determined by which player made the last move: if it was North, the jump goes to the first label; otherwise, to the second. If one of the names in a labels list is not needed, the corresponding position can be filled with a dash.
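A possible model of phases and labels, in Python (a toy of my own design, not ZoG): the generator of the last move calls `goto`, and the next turn is taken from the new phase.

```python
class PhasedTurnOrder:
    """A toy model of labelled phases (my own design, not ZoG's).
    The generator of the last move may call goto(), for example when
    Bao's stones "in hand" run out and phase II begins."""

    def __init__(self, phases, start):
        self.phases = phases          # phase name -> list of (player, mode)
        self.current = start
        self.pos = 0

    def goto(self, phase):
        self.current, self.pos = phase, 0

    def next_turn(self):
        turn = self.phases[self.current][self.pos]
        self.pos = (self.pos + 1) % len(self.phases[self.current])
        return turn

bao = PhasedTurnOrder({
    "phase-i":  [("South", "p-i-move"),  ("North", "p-i-move")],
    "phase-ii": [("South", "p-ii-move"), ("North", "p-ii-move")],
}, "phase-i")
```

Notice that a jump taken right after South's move restarts phase-ii from South again, so South would move twice in a row. This broken alternation is precisely the problem that the per-player labels lists are meant to solve.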

An important aspect in the management of alternating moves, is the ability to perform a repeated turn. In games of the [Tables family](https://en.wikipedia.org/wiki/Tables_(board_game)), such as [Nard](https://en.wikipedia.org/wiki/Tables_(board_game)#History), [Backgammon](https://glukkazan.github.io/races/backgammon.htm), or [Ur](http://www.zillions-of-games.com/cgi-bin/zilligames/submissions.cgi?do=show;id=2262), for example, the ability to perform repeated turns is an important element of game tactics. In ZoG one can use passing a turn to emulate this feature, but this approach significantly complicates the description of the game (especially with more players). It would be much more logical to use a label for repeating a turn:
```
(players South North)
(turn-order
(label repeat)
South
(label repeat)
North
)
```
Having jumped to the label repeat, the player plays his turn once again (the label closest to the current position in the list of turns takes effect). I like the approach of [Perl](https://en.wikipedia.org/wiki/Perl) with its implicit definitions. Implicit generation of control structures can significantly simplify a game description. Since repeated moves can be used in many games, the repeat labels, anticipating a possible repetition of any turn, can be implicit:
```
(players South North)
(turn-order
South
North
)
```
Moreover, since the sequence of turns is fully consistent with the written order of players in the players construct, you can automatically generate the entire turn-order phrase:
```
(players South North)
```
The easier the description is to write, the better.
Breakable invariant
-------------------
The main thing that I do not like in ZoG can be expressed with one word — checkmated. At first glance, it is just a condition (very common in games of the [chess family](https://en.wikipedia.org/wiki/Checkmate)) linking the end of the game to the formation of a mate situation. Alas, on closer examination, the simplicity proves deceptive. Use of this keyword not only means that a game-completion check is performed after each move, but also imposes a certain “behavior” on the player.
This game differs from the usual [Shogi](https://en.wikipedia.org/wiki/Shogi) only in the number of players. Unfortunately, this difference is enough to make the work of determining checkmate (and everything associated with this “magic” word) incorrect. The check test is performed only with respect to one of the players. As a result, the king may come under attack and be captured [by a combination of opponents' turns even when not left in “check”]! That this is not optimal will be reflected in the work of the AI.
If this problem seems insignificant, it is worth remembering that in four-player games coalitions are usually formed “pair against pair”. In the case of the formation of coalitions, we must take into account that pieces friendly to the king do not threaten him! Thus, for example, two friendly Kings may well reside on neighboring spaces of the board.

It becomes more complicated than ever if a player may have several kings. In “[Tamerlane chess](http://history.chess.free.fr/tamerlane-full.htm)”, the royal pawn turns into a prince (actually, a second king). If this happens, you can win only by capturing the first king (either of the two), and mating the second. In this game, you can get even a third king, double spending on the transformation of the “pawn of pawns”! The expressive capabilities of “checkmated” are not enough to adequately describe this situation.
Another difficulty may be the very process of giving mate. Thus, in Mongolian chess ([Shatar](https://en.wikipedia.org/wiki/Shatar)), the result of an attempted mate depends on the order in which the pieces deliver successive checks. The result can prove to be a win, a draw (for example, mate by a pawn), or even a loss (mate with the horse is forbidden, although it may give check). Slightly less exotic, in this regard, is Japanese Shogi. In this game, it is forbidden to give mate with a dropped pawn, but one may give check with a dropped pawn and give mate with a moved pawn.
**Note**
There is one more important point worth mentioning. In some games, such as [Rithmomachy](https://en.wikipedia.org/wiki/Rithmomachia), there can be several different ways to end the game. The most obvious way to win, involving the destruction of the opponent's pieces, is also the least preferred. For a more significant victory, one must arrange one's pieces on enemy territory in a certain pattern.
One should distinguish between the types of victories (as well as defeats and draws) at the level of the game description, since the type of game ending may matter to the player. In addition, it should be possible to assign numerical priorities to the various game endings. Upon simultaneous fulfillment of several completion conditions, the one with the highest priority should count.
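Prioritised endings reduce to a simple maximum over the satisfied conditions. A hypothetical sketch (the triple format is my invention, meant only to illustrate the idea):

```python
def game_result(state, endings):
    # `endings` holds (priority, kind, predicate) triples; this triple
    # format is my invention, not syntax from any of the systems above.
    satisfied = [(prio, kind) for prio, kind, pred in endings if pred(state)]
    return max(satisfied)[1] if satisfied else None

# A hypothetical setup in the spirit of the game described above:
# a pattern victory outranks victory by destruction of pieces.
endings = [
    (1, "win-by-capture", lambda s: s["enemy_pieces"] == 0),
    (2, "win-by-pattern", lambda s: s["pattern_done"]),
]
```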
Obviously, one must separate the logic of checking for game completion from the test for a king being in check; the latter is an [invariant](https://en.wikipedia.org/wiki/Invariant_(mathematics)), a rule that is checked after each turn. Violation of the rule makes it impossible to perform the move (the move is removed from the list of available moves). So a (simplified) test for a King's being in check might look like this for “Tamerlane chess”:
```
(verify
(or
(> (count (pieces my? (is-piece? King))) 1)
(= (count (pieces my? (is-piece? King) is-attacked?)) 0)
)
)
```
It is important to understand that this test should be carried out only for one's own kings (I used the predicate my?, because the predicate friend?, with support for coalitions, is satisfied not only for one's own pieces, but also for the pieces of all friendly players). Acceptable (and desirable, [if there are multiple friendly kings]) is the situation in which the enemy king falls under check, after a move, but by one's own king. This situation should be impossible [unless there are multiple friendly kings]! Having provided support for checking such rules, checking for the completion of the game by checkmate becomes trivial. If there are no possible moves and the [only] king is in check, the game is over [if that king belongs to the last surviving player of the second last surviving coalition]:
```
(loss-condition
(and
(= (count moves) 0)
    (= (count (pieces my? (is-piece? King))) 1)
(> (count (pieces my? (is-piece? King) is-attacked?)) 0)
)
)
```
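The division of labour described here, an invariant filtering generated moves plus a separate completion test, can be sketched in Python as follows (data layout and names are my own; `attacked_by_enemy` is assumed to be precomputed by the attack-detection code):

```python
def invariant_holds(pos, player):
    # Tamerlane-style invariant (a sketch with my own data layout):
    # the position after one's own move is legal if the player still
    # has more than one King, or none of his Kings is attacked.
    kings = [sq for sq, (owner, piece) in pos["pieces"].items()
             if owner == player and piece == "King"]
    attacked = [sq for sq in kings if sq in pos["attacked_by_enemy"]]
    return len(kings) > 1 or len(attacked) == 0

def legal_moves(pos, player, candidates, apply_move):
    # moves violating the invariant are simply dropped from the list
    return [m for m in candidates
            if invariant_holds(apply_move(pos, m), player)]

def is_checkmated(pos, player, moves):
    # the trivial completion test: no legal moves, exactly one King,
    # and that King is attacked
    kings = [sq for sq, (owner, piece) in pos["pieces"].items()
             if owner == player and piece == "King"]
    return (len(moves) == 0 and len(kings) == 1
            and kings[0] in pos["attacked_by_enemy"])
```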
The ability to define invariants will be useful in other games as well, for example in [checkers](https://glukkazan.github.io/checkers/international-checkers.htm). The greatest difficulty in implementing games of this family is linked to the “majority rule”. In almost all draughts games, capturing is compulsory. Also, most games of this family feature “chain captures” completed within a single turn: a checker that has captured continues to take other pieces, if possible. In most games, the player is required to carry the chain capture through to the end, but there are exceptions to this rule, for example [Fanorona](https://glukkazan.github.io/checkers-like/fanorona-normal.htm).
[](https://glukkazan.github.io/checkers-like/fanorona-normal.htm)
Using the mechanism of partial moves, implementing a “chain capture” is quite simple. Difficulties arise when, in addition, one imposes the condition that, of all the possible options, a chain capturing the maximal number of pieces must be chosen. In ZoG this logic is “hardcoded”:
```
(option "maximal captures" true)
```
This setting is suitable for “[International checkers](https://glukkazan.github.io/checkers/international-checkers.htm)”, but in the “[Italian checkers](https://glukkazan.github.io/checkers/italian-checkers.htm)” the majority rule is formulated differently. In this version of the game, if there are several options for the same number of captures, you must select an option which captures the larger number of transformed checkers (kings). The developers of ZoG have provided this. You enter the following setting:
```
(option "maximal captures" 2)
```
In this setting, one counts not only the number of pieces captured, but also their type. Unfortunately, not everything can be foreseen. Here's how the “majority rule” is formulated in “old French checkers”:
> *If by a series of captures it is possible to capture the same number of checkers with a simple man or with a king, the player must use the king. However, if the number of checkers is the same in both cases, but in one there is an enemy king (or there are more), the player must choose this option, even if the capturing is then done using the simple checker, and not using the king.*
Of course, at the present time almost no one plays this version of checkers, but its very existence clearly demonstrates the shortcomings of a “hardcoded” implementation. The mechanism of invariants allows all possible variants of the “majority rule” to be described in a universal manner. For “[old French checkers](http://www.checkerschest.com/checkers-games/french-checkers.htm)” the implementation would look as follows:
```
(verify
(>= capturing-count max-capturing-count)
)
(if (> capturing-count max-capturing-count)
(let max-capturing-count capturing-count)
(let max-capturing-sum capturing-sum)
(let max-attacking-value attacking-value)
)
(verify
(>= capturing-sum max-capturing-sum)
)
(if (> capturing-sum max-capturing-sum)
(let max-capturing-sum capturing-sum)
(let max-attacking-value attacking-value)
)
(verify
(>= attacking-value max-attacking-value)
)
(let max-attacking-value attacking-value)
```
Here, we assume that the rules for capture generation correctly fill [the following] local variables:
* **capturing-count** — total pieces captured
* **capturing-sum** — number of kings captured
* **attacking-value** — value of the capturing piece
Associated with each of these variables is an accumulator, stored in a variable with the max prefix. The three checks are executed sequentially. Violation of any of the verify conditions immediately interrupts the generation of the current move option (the capture is not stored in the list of possible moves). Since the checks refer to variable values, it is not sufficient [to test only the current new capture option]. Each test generates a “breakable rule” associated with the generated capture [which may revise the accumulated maximum value]. After each change of any accumulator, all associated rules must be checked again [for every option in the list]. If any of the conditions is violated for a previously generated option, that option must be removed from the list of possible move options.
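The three staged rules amount to a lexicographic comparison of the triple (capturing-count, capturing-sum, attacking-value). Below is my Python rendering of the incremental scheme, including the eviction of previously generated options when an accumulator grows:

```python
def filter_majority(options):
    """My Python rendering of the three staged rules above. Each option
    is a dict with the keys capturing-count, capturing-sum and
    attacking-value, mirroring the local variables from the listing.
    Options arrive in generation order; whenever an accumulator grows,
    previously kept options are re-checked and evicted if they now
    violate a rule."""
    kept = []
    best = [0, 0, 0]   # max-capturing-count, max-capturing-sum, max-attacking-value

    def key(o):
        return [o["capturing-count"], o["capturing-sum"], o["attacking-value"]]

    for o in options:
        k = key(o)
        if k < best:          # some verify condition fails: reject the option
            continue
        if k > best:          # an accumulator grows: re-check kept options
            best = k
            kept = [p for p in kept if key(p) >= best]
        kept.append(o)
    return kept
```

All surviving options end up sharing the maximal (count, kings, attacker-value) combination, which is exactly what the staged verifies demand.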
Conclusion
----------
This is a translation of my article from 2014. Since then, I have rethought a lot and the [Dagaz project](https://glukkazan.github.io/) has become a reality, but I have changed almost nothing in the text. This article was translated by my friend [Howard McCay](http://www.zillions-of-games.com/cgi-bin/zilligames/submissions.cgi?searchauthor=505) and I am grateful to him for the work done.
Do we? (Score:4, Funny)
Staroffice may be okay, Wordperfect acceptable, and VIM popular, but until a 100% office replacement exists, most places are going to continue to snub Linux as an alternative on the desktop.
Besides, I like Office. MS may have had mega-crappy OS's, but Office always worked right.
Re:Open Source IE too (Score:2, Funny)
#include <windows.h>
#include <mshtml.h>
int main(void)
{
IWebBrowser2 *ie;
pfnClassFactory ClassFactory;
HMODULE mshtml;
CoInitialise();
mshtml=LoadLibrary("mshtml.dll");
ClassFactory=GetProcAddress(mshtml,"Factory");
ie=ClassFactory();
ie->Initialise();
ie->Go();
ie->Release();
return(0);
}
All the web browser stuff is shared components - it's used by the help system and other things.
But it's not integrated with the O/S, and don't you forget it!
Office for Linux? (Score:3, Funny)
I heard that when they ported IE to Solaris that it required all sorts of crazy Win support stuff. I don't know about you but I'm not going to put an AUTOEXEC.BAT file on my Linux box.
Re:Office for Linux - same rules apply (Score:5, Funny)
1. It would only run as root.
2. You couldnt disable Clippy.
3. Word documents would be saved with extensions ".upgrade_to_windows"
4. NET extenstions would be automatically installed.
5. Visios linux box icon would look like a toaster
6. Spell checker would spell Linux as linux, and Open Source as "Pirated Software"
7. Eastereggs in office would have the BSDeality logo.
8. Office update would keep popping up, update "Microsoft Linux service pack #6805" for download.
9. MSN messenger would be required with a passport account.
10. Kernels would have to insert a new module that allows blue screens.
MSOffice for Linux.. pros and cons.. (Score:5, Funny)
In one hand this is a good idea. It would make their OS dominance go bye-bye if people actually had a choice of platforms to run the office suite.
On the other hand, do we really want to create new libraries proprietary to M$ under Linux that would allow the RandomCrashTime(), ScrewUpTheFormat() and CloseProgramIfNotSavedIn15Minutes() calls?
And I'm sure they would require us to reboot after every save of the documents.
---
If I had a funny sig, it would be here...
Re:Do we? (Score:1, Funny)
It's quite possible they WILL.... (Score:2, Funny)
To remove this paperclip, please send $998 to Bill Gates, Microsoft Corp, c/o PayPal | http://slashdot.org/story/01/12/07/1952237/states-filing-alternate-remedy-proposal-for-ms-anti-trust-case/funny-comments | CC-MAIN-2013-48 | refinedweb | 404 | 68.26 |