Have you ever noticed that when we type a sentence into a search engine, it automatically completes the rest or recommends related queries? How do search engines do this? One piece of the picture is relation extraction. Relation extraction models are built to predict attributes of entities and the relationships between words and sentences. For example, extracting the entities from the sentence "Elon Musk founded Tesla" – "Elon Musk", "founded", "Tesla" – lets a system understand that the sentence "Elon Musk is the founder of Tesla" is similar to the first one. Thus, relation extraction is a key component of several natural language processing tasks.
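The triple in this example can be written down concretely. The snippet below is only an illustration of the (head, relation, tail) representation, not part of any library:

```python
# Representing the relation extracted from the example sentence
# as a (head, relation, tail) triple.
sentence = "Elon Musk founded Tesla"
triple = ("Elon Musk", "founded", "Tesla")

head, relation, tail = triple
print(f"{head} --[{relation}]--> {tail}")  # Elon Musk --[founded]--> Tesla
```

A knowledge graph is, in essence, a large collection of such triples.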
Relation extraction plays a huge part in knowledge graph construction: it can find many relational facts, and using those facts we can expand a knowledge graph, giving machine learning models a path to understanding the human world. Beyond knowledge graphs, it can be used in question answering, recommendation systems, search engines, and summarization applications.
There are various datasets available for this type of modeling, like DocRED, TACRED, and ACE 2005, and various models and frameworks already trained on them, like CoKE and SSAN. A basic structure for a simple architecture is HRERE (Heterogeneous Representation for Neural Relation Extraction):
Roughly, there are four kinds of relation extraction tasks we can perform using different algorithms.
- Sentence-level relation extraction is the most common relation extraction task. Given a sentence, we find the relation between two tagged entities using a predefined relation set. TACRED is the most commonly used dataset for this task.
- Bag-level relation extraction – in this task, we use existing knowledge graphs, such as Freebase and Wikidata, which already contain relations between entities, to find the relationship between the entities. The most common dataset used for this kind of task is NYT10.
- Few-shot relation extraction – this type of learning is used to adapt a model faster to new relationships. We can also say it mimics the human learning process by using very small samples of a dataset. FewRel is the dataset most widely used for this kind of relation extraction task. The basic methodology can be understood from the following image.
- Document-level relation extraction – this task focuses on extracting relations between entity pairs across an entire document, not just within single sentences. DocRED provides datasets for this type of method. The methodology is shown in the following image.
What is OpenNRE?
OpenNRE is an open-source toolkit for neural relation extraction. The basic algorithm of NRE can be understood from the following image.
OpenNRE provides a framework for implementing relation extraction models as an open-source, extensible toolkit. This package serves:
- Newcomers working on relation extraction.
- Developers, by providing an easy-to-use environment whose high-performing models can be deployed in production without training.
- Researchers, who can easily use the package for their experiments.
The package includes various models trained on the Wiki80 and TACRED datasets. To know more about the datasets, readers can go through this link. The package provides high-performing models based on convolutional neural networks and BERT.
Code Implementation: OpenNRE
Let's get a quick start with OpenNRE using Google Colab.
Before installing the toolkit, we will need to enable a GPU for the notebook; this will speed up our programs. To enable the GPU, go to the Runtime menu and click "Change runtime type". Next, under the "Hardware accelerator" widget, select GPU and click Save.
Here in Colab we are going to clone the package, so we need to mount our drive in the notebook. The example below shows how to mount the drive on your runtime using an authorization code.
from google.colab import drive
drive.mount('/content/drive')
Output:
Now we can start our trial of the package:
We can clone the OpenNRE repository using the following command.
!git clone
output:
To use the models we have cloned, we need to change the notebook's working directory to the folder where our package is available.
cd OpenNRE
Output:
Install the package's dependencies using its requirements.txt file.
!pip install -r requirements.txt
Output:
Now we are prepared to use OpenNRE.
Importing the OpenNRE.
import opennre
Loading a pre-trained model from the package, with a CNN encoder trained on the Wiki80 dataset:
model = opennre.get_model('wiki80_cnn_softmax')
Output:
This will take a few minutes to load the model. After loading it, we will use the model's infer method for relation extraction.
model.infer({'text': 'He was the son of ashok, and grandson of rahul.', 'h': {'pos': (18, 46)}, 't': {'pos': (78, 91)}})
Output:
Here we can see the model's prediction is "child" with a confidence score of about 32%. The score is low, but the prediction is almost satisfying, because the model has related "son" to the "child" relation.
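Since infer returns a (relation, score) pair, a caller can gate low-confidence predictions like the 32% one above. The threshold below is an arbitrary choice for illustration, not something OpenNRE prescribes:

```python
# Hypothetical values in the same (relation, score) shape returned by
# model.infer(...); the model itself is not loaded here.
relation, score = "child", 0.32

THRESHOLD = 0.5  # assumed cut-off, tune for your application
accepted = score >= THRESHOLD
print(relation, accepted)  # child False
```

In a pipeline, rejected predictions could be queued for human review instead of being written to the knowledge graph.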
Let’s check for another model of the package.
model = opennre.get_model('wiki80_bert_softmax')
Output:
Here I have loaded a model from the package, built with a BERT encoder and trained on the Wiki80 data.
Checking the performance of the model.
model.infer({'text': 'He was the son of ashok, and grandson of rahul.', 'h': {'pos': (18, 46)}, 't': {'pos': (78, 91)}})
Output:
Here we can see that it has predicted right again, telling us that Rahul is the father of Ashok and Ashok is the father of the subject. The confidence of the model is around 99%, which is also very satisfying.
One more model comes built into the package, named wiki80_bertentity_softmax, which is also made with a BERT encoder.
model = opennre.get_model('wiki80_bertentity_softmax')
Output:
Checking the performance of the model.
model.infer({'text': 'He was the son of ashok, and grandson of rahul.', 'h': {'pos': (18, 46)}, 't': {'pos': (78, 91)}})
Output:
This model also relates "son" to the "child" relation, and the confidence of the prediction is also good.
In this article we have seen that the pre-trained models of the OpenNRE package perform well and produce good predictions. Unlike many other packages, it is very easy to use. In the example folder of the package there are various models available, which we can edit and use according to our requirements. Because it is open source, there are no charges for deploying the pre-trained models. I encourage you to go through the toolkit's documentation and try to do more with OpenNRE.
References:
- Documents of the OpenNRE.
- Google Colab for the codes.
stash@{0}: WIP on submit: 6ebd0e2... Update git-stash documentation
stash@{1}: On master: 9cc0589... Add git-stash
git-stash - Stash the changes in a dirty working directory away.
apply [<stash>] - Restore the changes recorded in the stash on top of the current working tree state. When no <stash> is given, applies the latest one. The working directory must match the index.
This operation can fail with conflicts; you need to resolve them by hand in the working tree.
clear - Remove all the stashed states. Note that those states will then be subject to pruning, and may be difficult or impossible to recover.
drop [<stash>] - Remove a single stashed state from the stash list. When no <stash> is given, it removes the latest one, i.e. stash@{0}.
pop [<stash>] - Remove a single stashed state from the stash list and apply it on top of the current working tree state. When no <stash> is given, stash@{0} is assumed. See also apply.
create - Create a stash (which is a regular commit object) and return its object name, without storing it anywhere in the ref namespace.
You can use git stash save --keep-index when you want to make two or more commits out of the changes in the work tree, and you want to test each change before committing.
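Taken together, the subcommands above make a simple round trip. This is a sketch in a throwaway repository; the file name and commit identity are placeholders:

```shell
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "demo@example.com"   # throwaway identity
git config user.name  "Demo"
echo base > file.txt
git add file.txt && git commit -qm "base"
echo edit >> file.txt          # dirty the working tree
git stash                      # save the change, revert to HEAD
git stash list                 # e.g. stash@{0}: WIP on master: ...
git stash pop                  # re-apply the change and drop the entry
grep edit file.txt             # the edit is back in the working tree
```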
Written by Nanako Shiraishi <nanako3@bluebottle.com> | https://www.kernel.org/pub/software/scm/git/docs/v1.6.2.3/git-stash.html | CC-MAIN-2016-40 | refinedweb | 196 | 76.62 |
Hi.
You need to build the logic to support that requirement. It would make sense to build some recursive logic that uses the IContentRepository's GetChildren and Get<> methods. It enables you to recursively work your way through your Episerver page tree - hint: start with ContentReference.StartPage and work down from that.
Please be aware that it can potentially get costly depending on your page tree depth. Often, what I see is the need to list all nodes (pages, in your context) beneath the current page in order to provide navigation options to end users. To do this, you need to use the PageRouteHelper to access the current page your Block is used on. That will give you the context.
All above mentioned stuff has to be executed in the Controller belonging to your BlockType. I'm assuming you use MVC.
Hope that helped you to move forward.
Maybe you can get inspired by this code that I made on the fly. Be aware that it's not tested and only acts as a simplified example of how to interact with IContentRepository. It gives you the recursive feature but does not build the tree-structure.
public class ContentHelper
{
    private readonly IContentRepository _contentRepository;

    public ContentHelper()
    {
        this._contentRepository = ServiceLocator.Current.GetInstance<IContentRepository>();
    }

    public void FindDescendantsOfType<T>(ContentReference reference, ICollection<T> descendants) where T : PageData
    {
        var children = this._contentRepository.GetChildren<T>(reference);
        foreach (var child in children)
        {
            descendants.Add(child);
            FindDescendantsOfType(child.ContentLink, descendants);
        }
    }
}
Casper Aagaard Rasmussen
I have a requirement to add a full Episerver menu tree in a block type. If I add a new page to the menu, it should be reflected in the block type also.
Web Developer
Welcome to the first part of my JSWorld Conference 2022 summary series, in which I share a summary of all the talks in four parts with you.
After this part, which contains the first three talks, you can read the second part here:
Colin Ihrig - Engineer at Deno
Deno is a simple, modern, and secure runtime for JavaScript and TypeScript, similar to Node.js, that uses V8 and is built in Rust. It was created by Ryan Dahl, the original inventor of Node.js, who gave a talk at JSConf EU in 2018 about 10 things he regrets about Node.js.
Some of the topics he talked about were module resolution through node_modules, the lack of attention to security, and various deviations from how browser’s worked. and he set out to fix all these mistakes in Deno.
In this talk, Colin will explore Deno’s tech stack that allows the project to move fast, why Deno is betting on browser compatibility, and how they intend to provide compatibility with the existing JavaScript ecosystem.
Deno - A modern runtime for JavaScript and TypeScript
Node.js has been around since 2009, predates much of modern JavaScript like CommonJS vs. ECMAScript Modules and Callbacks vs. Promises, and does have a huge ecosystem with a lot of legacy code that can slow progress and standards compliance. The goal of Deno was to address some of the shortcomings of Node.
June 2018: Deno introduced at JSConf EU
August 2018: Deno v0.1.0 released, rewritten in Rust — previously written in Golang.
Having two garbage-collected languages inside the same process might not be the best thing.
May 2020: Deno v1.0.0 released
March 2021: The Deno company was announced
June 2021: Deno Deploy announced
Q3 2022: Deno Deploy reaches GA
Deno has caught up with Node rapidly in terms of GitHub stars:
Deno is built from Rust crates. deno_core includes rusty_v8 and deno_ops. V8 is a C++ project, so they came up with a surrounding layer called rusty_v8, and now everything outside of V8 itself is Rust code. There is also a layer called deno_ops which provides an API for performing operations with rusty_v8.
deno_runtime includes deno_core mentioned above plus deno_console, deno_crypto, deno_fetch, deno_web, Tokio, etc.
The Deno CLI includes deno_runtime and an integrated toolchain. This is what you download and run on Linux, macOS, and Windows, and it is distributed as a single executable file. Everything you need to run Deno programs is included.
The idea is that Deno comes with batteries included, compared to Node.js, which historically did not.
Modules maintained by the core team for fs, http, streams, uuid, wasi, etc. It is similar to Core modules in Node.js, and it’s guaranteed to be maintained and work with Deno.
It is recommended to pin a version because there may be breaking changes with new versions. Example:
import { copy } from "https://deno.land/std@VERSION/fs/mod.ts"; // VERSION: a pinned std release
await copy("source.txt", "destination.txt");
It works like in a browser: there is no CommonJS, only ESM, and there is no package.json, node_modules, or index.js.
The runtime fetches, caches, and compiles modules automatically, so there is no separate npm install step or anything like that.
deno.land/x is a hosting service for Deno scripts.
import { serve } from "https://deno.land/std@VERSION/http/server.ts"; // VERSION: a pinned std release

function handler(_req: Request): Response {
  return new Response("Hello, World!");
}

serve(handler);

// deno run --allow-net server.ts
Deno prefers web platform APIs when possible. A number of the APIs that are supported: URL, FormData, fetch, Blob, console, File, TextEncoder, WebSocket, WebGPU, Local Storage, etc.
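A couple of these APIs in action. This sketch runs unchanged in Deno, modern Node.js, and browsers, since URL and TextEncoder are web-platform globals in all of them:

```javascript
// URL parsing, exactly as in a browser.
const url = new URL("/docs?lang=ts", "https://example.com");
console.log(url.href);                      // https://example.com/docs?lang=ts
console.log(url.searchParams.get("lang"));  // ts

// TextEncoder produces UTF-8 bytes.
const bytes = new TextEncoder().encode("Deno");
console.log(bytes.length);                  // 4
```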
Deno has these tools out of the box, which Node.js does not: a linter, formatter, version manager, documentation generator, task runner, bundler, packager, TypeScript support, etc.
You can see all subcommands with deno --help.
Deno supports TypeScript out of the box, caches transpiled files for future use, and the TypeScript compiler is snapshotted into Deno.
SWC is used for transpilation and bundling which is:
20x faster than Babel in single-threaded execution and up to 70x faster in multi-threaded.
Deno cannot access the outside world by default; permissions must be explicitly granted.
Some of the permission CLI flags: --allow-env, --allow-hrtime, --allow-net, --allow-ffi, --allow-read, --allow-run, --allow-write, --allow-all / -A, etc.
createRequire(...) is provided to create a require function for loading CJS modules. It also sets supported globals.
import { createRequire } from "https://deno.land/std@VERSION/node/module.ts"; // VERSION: a pinned std release

const require = createRequire(import.meta.url);

// Loads native module polyfill.
const path = require("path");

// Loads extensionless module.
const cjsModule = require("./my_mod");

// Visits node_modules.
const leftPad = require("left-pad");

// deno run --compat --unstable --allow-read ./node-code.mjs
up to date V8, Web crypto, reportError API, Mocking utilities, Snapshot testing, etc.
Announcing the Web-interoperable Runtimes Community Group
There is a new W3C Community Group called the Web-interoperable Runtimes Community Group, also known as WinterCG, which is a collaboration between non-browser JavaScript runtimes.
This group is not aiming to create new APIs, but to give server-side runtimes a voice at the table where discussions about specifications happen.
Deno Deploy is essentially a globally distributed JavaScript VM, basically a V8-isolate cloud. It is similar to lambda functions, but at the edge (more on the edge later), and is currently available in over 30 regions globally, still growing. It is built on the same open-source web APIs, with some tweaks for the cloud, and is integrated with Netlify Functions and Supabase Functions.
Negar Jamalifard - Software Developer at Lightspeed Commerce
CSS Houdini is one of the most recent changes in the CSS world that could change a lot of our old methods for updating CSS. —[MDN]
In this talk, she started with the basics of how browsers normally render CSS and then got to the point of how it would be different with the upcoming changes in CSS.
CSS Houdini - CSS: Cascading Style Sheets | MDN
One of the important parts of a browser is its rendering engine, for example:
Blink in Chromium-based browsers
Gecko in Firefox
Webkit in Safari
All of these browsers go through the same flow, from the point they receive an initial CSS file to the point they can actually render something on the screen.
Parsing
Whenever the rendering engine encounters a link to a CSS file in an HTML file, it will download the CSS file in the background and then go through this process:
Bytes → Characters → Tokens → Nodes → Object Model and it will generate two Object Models:
DOM (Document Object Model)
CSSOM (CSS Object Model)

At this stage, the browser has all the information about the data structure of the page and the styling of the page, but to be able to render something on the screen, it needs to merge these two pieces of information. That's what happens in the next step:
Render tree
The render tree is another tree-like structure, containing all the visible elements that are supposed to be rendered on the screen.
Layout
At this stage, the browser tries to find the geometry and coordinates of each element within the viewport, basically drawing a map for itself of where the elements should go.
Paint
It paints backgrounds, border colors, etc., and at this point we see something on the screen.
Within all of these phases, there are only two points where developers have access to an API: the DOM and some parts of the CSSOM.
A couple of years ago a group of people from W3C and other web communities agreed on a manifesto called Extensible Web Manifesto.
The main idea of this manifesto is to move the focus from creating high-level APIs to providing low-level APIs and exposing the underlying layers.
As a result, a lot of changes and new APIs came to the web, and the name for all these new APIs in the CSS world is CSS Houdini. It is going to enable developers to access every phase of the rendering process and extend its behavior.
The Typed OM is an upgraded, enhanced version of the CSSOM. It adds types to CSS values in the form of JavaScript objects. On top of that, it provides a set of semantic APIs that make working with CSS in JavaScript more pleasant.
const fontSize = getComputedStyle(box).getPropertyValue("font-size"); // 16px (string)
const fontSize = box.computedStyleMap().get("font-size"); // CSSUnitValue { value: 16, unit: "px" }
const x = CSS.percent(-50); const y = CSS.percent(-50); const translation = new CSSTranslate(x, y); box.attributeStyleMap.set( "transform", new CSSTransformValue([translation]) );
Custom properties, also known as CSS variables, were a cool feature, but they had some downsides. For example, the browser did not know how to animate them, because it did not have enough information about these properties to do so.
With all the information that we can provide to the browsers through this API, they can now apply transitions and animations to these custom properties.
CSS.registerProperty({ name: '--my-color', syntax: '<color>', inherits: false, initialValue: 'pink', });
@property --my-color { syntax: '<color>'; inherits: false; initialValue: 'pink'; }
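With --my-color registered as a <color>, the browser can interpolate it, so a plain transition on the custom property now works. The selector and timing below are illustrative:

```css
.box {
  background: linear-gradient(var(--my-color), transparent);
  transition: --my-color 1s; /* animatable because the type is known */
}

.box:hover {
  --my-color: rebeccapurple;
}
```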
We can now write scripts and pass them to the rendering engine, which runs them at a specific rendering phase. However, we cannot write those scripts in the main JavaScript body, for two reasons:
These scripts should not have access to the DOM environment, because when a rendering engine is for example at the layout phase, it assumes that DOM is not going to change.
The rendering engine should be able to run these scripts on other threads than the main thread.
The solution is a Worklet. Worklets (much like web workers) are independent scripts that run during the rendering process. They are run by the rendering engine and are thread-agnostic.
Houdini has introduced three worklets:
Paint Worklet or Paint API
Animation Worklet or Animation API
Layout Worklet or Layout API
and that’s how we use it:
registerPaint
CSS.paintWorklet.addModule('worklet.js')
A paint Worklet:
class ImagePainter {
  static get inputProperties() { return ["--myVariable"]; }
  static get inputArguments() { return ["<color>"]; }
  static get contextOptions() { return { alpha: true }; }

  // The only mandatory method
  paint(
    ctx,        // CanvasRenderingContext2D
    size,       // The geometry of the painting area
    properties, // List of custom properties
    args        // List of arguments passed to paint(...)
  ) {
    // Painting logic
  }
}

// Register our class under a specific name
registerPaint('my-image', ImagePainter);
Writing a paint worklet is very much like writing canvas code, but what makes it super interesting is that from now on, creative developers can build these paint worklets and share them as npm packages. Then lazy developers like me can just install them in our apps, and as easy as that, you have new possibilities in your CSS. It's like having plugins in your CSS.
Useful Links:
Dexter (Alexander Essleink) - Senior Frontend Engineer at Passionate People
As a developer with a passion for those sweet fresh stacks who likes to play around with new technologies, Dexter goes over his "perfect" stack, using GraphQL, SvelteKit, Docker, and GitHub. He likes to control the things he makes, to be able to understand all the parts and know how to fix and change them.
I've been honing this stack for many projects, and it's a dream.
In developing, I like to be in the flow state, where you're not really getting into the details. It allows me to keep creating new things and to keep playing around with those things.
A simple VPS running Debian.
I do some things with the clouds, but mostly I just like to run on my own server, with the versions I control, and I get to choose who accesses my systems.
He uses containers to separate parts and docker-compose to control the bits.
He believes this is:
The one database to rule them all; it's powerful, "scalable", and functional.
to manage the data in Postgres:
Svelte
I like how easy it is to get started and to make some components and to get growing with it.
GitLab or GitHub CI
Then he showed some example projects and how this tech stack works, which is hard to summarize in this article, but let me know if you want me to dive deeper into it in another post.
In the end, he talks about serverless and the reason why he is not a fan of that:
Because you are not in control of where your software is running, your file system calls, what API limits you have, or what access control there is.
He believes it is often not necessary to worry about the scalability that serverless gives you, because "You are not Google".
Most of the projects we build don't need to be at Google scale. It's just simple things for 100 or 1000 people, a simple website with little traffic. And then, if you are successful, it's always easier to scale later than to scale prematurely and learn about all the details and limits up front. If you control all the parts, or make all the parts really simple, it is much easier and you can be much more creative.
I hope you enjoyed this part and it can be as valuable to you as it was to me.
You can read the second part here, the third part here, and the last part here, where I summarized the rest of the talks which are about:
I would like to get the way information for one way, and another way, with the python osmapi library.
This is my source code so far (3600148838 is the area id for "United States"):
import overpy
import osmapi

api = osmapi.OsmApi(api="", username="me", password="*")
api = overpy.Overpass()

result = api.query("""
area(3600148838)->.searchArea;
(
way["addr"](area.searchArea);
);
out body;
""")

for way in result.ways:
    print(api.WayGet(way))
I'm getting (overpy.Way id=300589944 nodes=[3046663136, 3046663138, 3046663129, 3046663123, 3046663136]) in Python. How can I extract just the way from the overpy.Way class?
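As background on the overpy.Way object in question, its data is exposed through attributes (id, tags, and node references) rather than through the repr string. Here is an illustration using a stand-in class with the same attribute names, so it runs without overpy installed:

```python
class StubWay:
    """Stand-in mirroring attribute names an overpy.Way carries."""
    def __init__(self, way_id, node_ids, tags):
        self.id = way_id        # e.g. 300589944 in the repr above
        self.tags = tags        # tag dictionary of the way
        self._node_ids = node_ids

way = StubWay(300589944, [3046663136, 3046663138], {"building": "yes"})
print(way.id)            # 300589944
print(sorted(way.tags))  # ['building']
```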
asked 12 Feb, 03:31 by norcross
No responses so far?
Impatient much?
You might get more responses if you provide more detail about what you're trying to do. Exactly what "way information" are you looking to get and what will you be doing with the result? How does an address search of the entire US relate to your first sentence about retrieving information about a specific way? Why are you connecting to the OSM editing API and then querying the Overpass API?
last updated: 13 Feb, 17:41
Warning: You are browsing the documentation for Symfony 3.0, which is no longer maintained.
Read the updated version of this page for Symfony 5.3 (the current stable version).
File
Validates that a value is a valid “file”, which can be one of the following:
- A string (or object with a __toString() method) path to an existing file;
- A valid Symfony\Component\HttpFoundation\File\File object (including objects of class Symfony\Component\HttpFoundation\File\UploadedFile).
This constraint is commonly used in forms with the FileType form field.
Basic Usage
This constraint is most commonly used on a property that will be rendered in a form as a FileType field. For example, suppose you're creating an author form where you can upload a "bio" PDF for the author. In your form, the bioFile property would be a file type. The Author class might look as follows:
// src/AppBundle/Entity/Author.php namespace AppBundle\Entity; use Symfony\Component\HttpFoundation\File\File; class Author { protected $bioFile; public function setBioFile(File $file = null) { $this->bioFile = $file; } public function getBioFile() { return $this->bioFile; } }
To guarantee that the bioFile File object is valid, and that it is below a certain file size and a valid PDF, add the following:
- Annotations
- YAML
- XML
- PHP
The bioFile property is validated to guarantee that it is a real file. Its size and mime type are also validated because the appropriate options have been specified.
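In the annotations format, such a configuration looks roughly like this; the size and mime-type values are illustrative, drawn from the options documented below:

```php
// src/AppBundle/Entity/Author.php
namespace AppBundle\Entity;

use Symfony\Component\HttpFoundation\File\File;
use Symfony\Component\Validator\Constraints as Assert;

class Author
{
    /**
     * @Assert\File(
     *     maxSize = "1024k",
     *     mimeTypes = {"application/pdf", "application/x-pdf"},
     *     mimeTypesMessage = "Please upload a valid PDF"
     * )
     */
    protected $bioFile;
}
```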
Options
maxSize

type: mixed

If set, the size of the underlying file must be below this file size in order to be valid. The size of the file can be given in one of the following formats:

- 4096 - bytes;
- 200k - kilobytes (suffix k: 1,000 bytes);
- 2M - megabytes (suffix M: 1,000,000 bytes);
- 32Ki - kibibytes (suffix Ki: 1,024 bytes);
- 8Mi - mebibytes (suffix Mi: 1,048,576 bytes).

For more information about the difference between binary and SI prefixes, see Wikipedia: Binary prefix.
binaryFormat

type: boolean default: null

When true, the sizes will be displayed in messages with binary-prefixed units (KiB, MiB). When false, the sizes will be displayed with SI-prefixed units (kB, MB). When null, the binaryFormat will be guessed from the value defined in the maxSize option.
For more information about the difference between binary and SI prefixes, see Wikipedia: Binary prefix.
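The practical difference between the two prefix families is plain arithmetic; a quick check:

```python
# SI (decimal) suffixes used by maxSize: k, M
kB, MB = 1000, 1000 ** 2
# Binary suffixes: Ki, Mi
KiB, MiB = 1024, 1024 ** 2

print(MiB - MB)  # 48576 bytes: "1Mi" is about 4.9% larger than "1M"
```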
mimeTypes

type: array or string
If set, the validator will check that the mime type of the underlying file is equal to the given mime type (if a string) or exists in the collection of given mime types (if an array).
You can find a list of existing mime types on the IANA website.
maxSizeMessage

type: string default: The file is too large ({{ size }} {{ suffix }}). Allowed maximum size is {{ limit }} {{ suffix }}.
The message displayed if the file is larger than the maxSize option.
mimeTypesMessage

type: string default: The mime type of the file is invalid ({{ type }}). Allowed mime types are {{ types }}.
The message displayed if the mime type of the file is not a valid mime type per the mimeTypes option.
disallowEmptyMessage

type: string default: An empty file is not allowed.
This constraint checks if the uploaded file is empty (i.e. 0 bytes). If it is, this message is displayed.
notFoundMessage

type: string default: The file could not be found.

The message displayed if no file can be found at the given path. This error is only likely if the underlying value is a string path, as a File object cannot be constructed with an invalid file path.
notReadableMessage

type: string default: The file is not readable.

The message displayed if the file exists, but the PHP is_readable function fails when passed the path to the file.
uploadIniSizeErrorMessage

type: string default: The file is too large. Allowed maximum size is {{ limit }} {{ suffix }}.

The message that is displayed if the uploaded file is larger than the upload_max_filesize php.ini setting.
uploadFormSizeErrorMessage

type: string default: The file is too large.
The message that is displayed if the uploaded file is larger than allowed by the HTML file input field.
uploadErrorMessage

type: string default: The file could not be uploaded.

The message that is displayed if the uploaded file could not be uploaded for some unknown reason, such as the file upload failed or it couldn't be written to disk.
Help:Transwiki
From Wikibooks, the open-content textbooks collection
Transwiki is the process of moving pages between two Wikimedia wikis.
What Transwiki is
The transwiki namespace is used as a temporary location for contents.
What Transwiki is not
The transwiki namespace is not the correct place to add new contributions. Pages in the transwiki namespace should not be edited, except to synchronize them with the original. Because of the nature of the transwiki system, it is not the correct place for experiments or tests of wikiformatting or the wiki software. If you want to test something, do so in the Sandbox instead.
Import Tool
New pages in the transwiki namespace from Wikipedia should only be added by an admin using the Import tool (see Wikibooks:Import Policy for details). The import tool allows wikibookians to bring new materials to Wikibooks, while at the same time preserving the original edit history of the page. Maintaining this edit history is important under the terms of the GFDL. To request a page be imported to Wikibooks from other Wikimedia projects post your request to Requests for Import.
Copy and Paste Transwikis
Pages transwikied from projects from which we do not have the import tool enabled need to be carried out using the copy-and-paste transwiki process. The following steps need to be followed in order to comply with the copyrights:
- Open the edit window of the page to be copied from the other project, and copy all the text there.
- Create a new page in the Transwiki: namespace (if a page already exists there, choose another name).
- Paste the text into the edit window, then save.
- Make a note of the entry on the Transwiki log here on wikibooks.
- Either copy the contribution history of the copied page from the other project, and paste this onto the Transwiki module's talk page (alternatively, if the material is unlikely to be deleted on the other project, just give a link to that page's history).
Transwiki procedures
Import can only be carried out by wikibooks administrators. See Wikibooks:Import Policy for the rules regarding transwikis from wikipedia.
Moving pages out of the Transwiki namespace
As a book stub
If the transwikied module is to be used as the basis for a new book, the new book should be categorised and tagged with {{New book}}.
As a chapter
If the transwikied module seems appropriate for an already existing book, it can be moved into that book as a chapter and adapted to that book's conventions; note that Wikibooks does not have a centralised Manual of Style like Wikipedia does.
Cleanup process (on wikibooks)
After a page has found a new home, a few steps should be taken to clean up the process. The page should be checked for double-redirects, all links pointing to the transwiki namespace should be updated to point to the module's new location and if possible, any links made from the project of origin should be updated, in order to attract contributors who might improve the content.
See Also
Wikibooks' Naming Policy for suggestions on naming pages. | http://en.wikibooks.org/wiki/Help:Transwiki | crawl-002 | refinedweb | 507 | 58.01 |
Namespace planning for DNS
Updated: January 21, 2005
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Namespace planning for DNS
Before you begin using DNS on your network, decide on a plan for your DNS domain namespace. Coming up with a namespace plan involves making some decisions about how you intend to use DNS naming and what goals you are trying to accomplish in using DNS. Some questions you might have at this stage include the following:
- Have you previously chosen and registered a DNS domain name for use on the Internet?
- Are you going to set up DNS servers on a private network or the Internet?
- Are you going to use DNS to support your use of Active Directory?
- What naming requirements do you need to follow when choosing DNS domain names for computers?
Each of these questions is further discussed in the following sections.
Choosing your first DNS domain name
When setting up DNS servers, it is recommended that you first choose and register a unique parent DNS domain name that can be used for hosting your organization on the Internet, for example, "microsoft.com". This name is a second-level domain within one of the top-level domains used on the Internet. For a list and description of the most common top-level domains used on the Internet, see Top-level domains.
Once you have chosen your parent domain name, you can combine this name with a location or organizational name used within your organization to form other subdomain names. For example, if a subdomain were added, such as the itg.example.microsoft.com domain tree (for resources used by the information technology group at your organization), additional subdomain names could be formed using this name. For instance, a group of programmers working on electronic data interchange (EDI) in this division could have a subdomain named edi.itg.example.microsoft.com. Likewise, another group of workers providing support in this division might use support.itg.example.microsoft.com.
Before you decide on a parent DNS domain name for your organization to use on the Internet, perform a search to see if the name is already registered to another organization or person. The Internet DNS namespace is currently managed by the Internet Network Information Center (InterNIC). In the future, other domain name registrars might also be available.
For more information, see Interoperability issues.
DNS namespace planning for Active Directory
If you are using Active Directory, you need to first establish a namespace plan. Before a DNS domain namespace can be properly implemented, the Active Directory structure needs to be available. Therefore, begin with the Active Directory design and support it with the appropriate DNS namespace. Upon review, if you detect unforeseen or undesirable consequences for either of your plans, make revisions as needed.
Active Directory domains are named with DNS names. When choosing DNS names to use for your Active Directory domains, start with the registered DNS domain name suffix that your organization has reserved for use on the Internet, such as microsoft.com and combine this name with either geographical or divisional names used in your organization to form full names for your Active Directory domains.
For example, the test group at Microsoft might call their domain test.example.microsoft.com. This method of naming ensures that each Active Directory domain name is globally unique. And, once employed, this naming method also makes it easy to use existing names as parents for creating additional subdomains and further grow the namespace to accommodate new departments within your organization.
For a small organization using only a single domain or a small multidomain model, planning can be straightforward and follow an approach similar to the previous examples. For a larger organization using a more complex domain model, refer to additional Microsoft resources. For more information, see Using the Windows Deployment and Resource Kits.
Caution
- When planning your DNS and Active Directory namespace, it is recommended that you use a differing set of distinguished names that do not overlap as the basis for your internal and external DNS use. For example, assuming your organization's parent domain name is "example.microsoft.com":
For internal DNS names, you could use a name such as "internal.example.microsoft.com".
For external DNS names, you could use a name such as "external.example.microsoft.com".
By keeping your internal and external namespaces separate and distinct in this way, you enable simplified maintenance of configurations such as domain name filter or exclusion lists.
Choosing names
It is strongly recommended that you use only characters in your names that are part of the Internet standard character set permitted for use in DNS host naming. Allowed characters are defined in Request for Comments (RFC) 1123 as follows: all uppercase letters (A-Z), lowercase letters (a-z), numbers (0-9), and the hyphen (-).
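As an illustration only — this helper is not part of Windows or the DNS Server service — the allowed-character rule above can be sketched in a few lines of C++ (the function name and the 63-octet label limit from the DNS specification are assumptions added for the example):

```cpp
#include <cctype>
#include <string>

// Hypothetical helper: checks whether a single DNS label uses only the
// RFC 1123 host-name characters described above -- letters A-Z and a-z,
// digits 0-9, and the hyphen (-). A hyphen may not begin or end a label.
bool isRfc1123Label(const std::string& label)
{
    if (label.empty() || label.size() > 63)           // labels are 1-63 octets
        return false;
    if (label.front() == '-' || label.back() == '-')  // no leading/trailing hyphen
        return false;
    for (unsigned char ch : label)
        if (!std::isalnum(ch) && ch != '-')           // letters, digits, hyphen only
            return false;
    return true;
}
```

A name such as "my_host", which conforms to NetBIOS conventions but uses an underscore, would fail this check.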
For organizations with a prior investment in Microsoft NetBIOS technology, existing computer names might conform to the NetBIOS naming standard. If this is the case, consider revising the names of your computers to the Internet DNS standard.
The process of adjusting your naming conventions might prove to be time consuming. To ease the transition from NetBIOS names to DNS domain names, the DNS Server service includes support for extended ASCII and Unicode characters. However, this additional character support can only be used in a network environment with computers running Windows 2000 or a product in the Windows Server 2003 family. This is because most other DNS resolver client software is based on RFC 1123, the specification that standardizes Internet host naming requirements. If a nonstandard DNS domain name is entered during setup, a warning message appears recommending that a standard DNS name be used instead.
In networks running Windows NT 4.0 and earlier, a NetBIOS name is used to identify a computer running a Windows operating system. In networks with computers running Windows 2000 or a product in the Windows Server 2003 family, a computer can be identified in any of the following ways:
- A NetBIOS computer name, which is optional and is used for interoperability with earlier Windows systems.
- The full computer name. This is the default name for the computer.
In addition to these, a computer might also be identified by the FQDN comprised of the computer (host) name and a connection-specific domain name, where one is configured and applied for a specific network connection on the computer.
The full computer name is a concatenation of both the computer name and the primary DNS suffix for the computer. The DNS domain name for the computer is part of the System properties for the computer and is not related to any specifically installed networking components. However, computers that do not use either networking or TCP/IP do not have a DNS domain name.
The following table is a comparison of NetBIOS and DNS computer names.
In this way, the System properties page can have the following settings on the Computer Name tab (or its related dialog boxes):
Important
- By default, the primary DNS suffix portion of a computer's fully qualified domain name (FQDN) is the same as the name of the Active Directory domain to which the computer belongs; it can also be set programmatically or through the Lightweight Directory Access Protocol (LDAP).
For more information, see Programming interfaces and Directory access protocol.
Note
- In addition to the primary DNS domain name for a computer, connection-specific DNS names can be configured. For more information, see Configuring multiple names.
Because DNS host names are encoded in UTF-8 format, they do not necessarily have only 1 byte per character. ASCII characters are 1 byte each, but the size of extended characters is more than 1 byte.
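The byte-count difference can be seen in a short C++ sketch (the name below is an invented example, with the UTF-8 bytes for the extended character written out by hand):

```cpp
#include <string>

// "caf\xC3\xA9" is the UTF-8 encoding of "café": the ASCII characters
// c, a, and f take 1 byte each, while the extended character é takes 2 bytes.
std::string hostLabel()
{
    return std::string("caf\xC3\xA9");
}
// hostLabel().size() is 5 bytes, even though the name has only 4 characters.
```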
Some non-Microsoft resolver software supports only the characters listed in RFC 1123. If you have any non-Microsoft resolver clients or DNS servers on your network, restrict your computer names to the RFC 1123 character set.
Integration planning: Supporting multiple namespaces
In addition to support for the internal DNS namespace, many networks require support for resolving external DNS names, such as those used on the Internet. The DNS Server service provides ways to integrate and manage disjointed namespaces where both external and internal DNS names are resolved on your network.
In deciding how to integrate namespaces, determine which of the following scenarios most closely resembles your situation and proposed use of DNS:
- An internal DNS namespace, used only on your own network.
- An internal DNS namespace with referral and access to an external namespace, such as referral or forwarding to a DNS server on the Internet.
- An external DNS namespace, used only on a public network such as the Internet.
If you decide to limit the use of DNS as a name service within your private namespace, there are no restrictions on how it is designed or implemented. For example, you can choose any DNS naming standard, configure DNS servers to be valid root servers for your network's DNS distributed design, or form an entirely self-contained DNS domain tree structure and hierarchy.
When you start providing either referral to an external DNS namespace or full DNS service on the Internet, you need to consider compatibility between your private and external namespaces. Additionally, Internet service requires the registration of parent domain names for your organization.
For more information on planning secure integration of Internet DNS names referral through your secured private network, see the Microsoft Windows Resource Kits Web site.
Notes
- When you add and connect to a DNS server for the first time using the DNS console, the root hints file (Cache.dns) is automatically primed for use on your network. This occurs transparently and does not require that you take any further action for this to be done.
Depending on how you use DNS, you can update this file in one of the following ways:
- If you are connected to the Internet, you can either edit or update the local root hints file when the Internet root hints file (Named.root) is updated and released by the owners of the Internet root zone. For a current copy of this file, you can use anonymous FTP to the InterNIC Web site.
- If you are not connected to the Internet, you can remove the default resource records contained in this file and replace them with NS and A resource records for the DNS authoritative servers at the root domain of your site. For root domain servers on your private network, you can safely remove this file entirely because servers operating at this domain level do not require or use a cache of root hints.
- If you delete the Cache.dns file for a private root domain server when you have configured the DNS Server service using the From file method, you also need to remove the cache directive from the server boot file before restarting the DNS Server service.
- Web addresses can change, so you might be unable to connect to the Web site or sites mentioned here. | http://technet.microsoft.com/en-gb/library/cc759036(WS.10).aspx | CC-MAIN-2014-15 | refinedweb | 1,808 | 50.46 |
Setting Out to C++
In this chapter you’ll learn about the following:
- Creating a C++ program
- The general format for a C++ program
- The #include directive
- The main() function
- Using the cout object for output
- Placing comments in a C++ program
- How and when to use endl
- Declaring and using variables
- Using the cin object for input
- Defining and using simple functions
When you construct a simple home, you begin with the foundation and the framework. If you don’t have a solid structure from the beginning, you’ll have trouble later filling in the details, such as windows, door frames, observatory domes, and parquet ballrooms. Similarly, when you learn a computer language, you should begin by learning the basic structure for a program. Only then can you move on to the details, such as loops and objects. This chapter gives you an overview of the essential structure of a C++ program and previews some topics—notably functions and classes—covered in much greater detail in later chapters. (The idea is to introduce at least some of the basic concepts gradually en route to the great awakenings that come later.)
C++ Initiation
Let’s begin with a simple C++ program that displays a message. Listing 2.1 uses the C++ cout (pronounced “see-out”) facility to produce character output. The source code includes several comments to the reader; these lines begin with //, and the compiler ignores them. C++ is case sensitive; that is, it discriminates between uppercase characters and lowercase characters. This means you must be careful to use the same case as in the examples. For example, this program uses cout, and if you substitute Cout or COUT, the compiler rejects your offering and accuses you of using unknown identifiers. (The compiler is also spelling sensitive, so don’t try kout or coot, either.) The cpp filename extension is a common way to indicate a C++ program; you might need to use a different extension, as described in Chapter 1, “Getting Started with C++.”
Listing 2.1 myfirst.cpp
// myfirst.cpp -- displays a message
#include <iostream>                          // a PREPROCESSOR directive
int main()                                   // function header
{                                            // start of function body
    using namespace std;                     // make definitions visible
    cout << "Come up and C++ me some time."; // message
    cout << endl;                            // start a new line
    cout << "You won't regret it!" << endl;  // more output
    return 0;                                // terminate main()
}                                            // end of function body
After you use your editor of choice to copy this program (or else use the source code files available online from this book’s web page—check the registration link on the back cover for more information), you can use your C++ compiler to create the executable code, as Chapter 1 outlines. Here is the output from running the compiled program in Listing 2.1:
Come up and C++ me some time.
You won't regret it!
You construct C++ programs from building blocks called functions. Typically, you organize a program into major tasks and then design separate functions to handle those tasks. The example shown in Listing 2.1 is simple enough to consist of a single function named main(). The myfirst.cpp example has the following elements:
- A preprocessor #include directive
- A function header: int main()
- A using namespace directive
- A function body, delimited by { and }
- Statements that use the C++ cout facility to display a message
- A return statement to terminate the main() function
Let’s look at these various elements in greater detail. The main() function is a good place to start because some of the features that precede main(), such as the preprocessor directive, are simpler to understand after you see what main() does.
Features of the main() Function
Stripped of the trimmings, the sample program shown in Listing 2.1 has the following fundamental structure:
int main()
{
    statements
    return 0;
}
These lines state that there is a function called main(), and they describe how the function behaves. Together they constitute a function definition. This definition has two parts: the first line, int main(), which is called the function header, and the portion enclosed in braces ({ and }), which is the function body. (A quick search on the Web reveals braces also go by other names, including “curly brackets,” “flower brackets,” “fancy brackets,” and “chicken lips.” However, the ISO Standard uses the term “braces.”) Figure 2.1 shows the main() function. The function header is a capsule summary of the function’s interface with the rest of the program, and the function body represents instructions to the computer about what the function should do. In C++ each complete instruction is called a statement. You must terminate each statement with a semicolon, so don’t omit the semicolons when you type the examples.
Figure 2.1 The main() function.
The final statement in main(), called a return statement, terminates the function. You’ll learn more about the return statement as you read through this chapter.
The Function Header as an Interface
Right now the main point to remember is that C++ syntax requires you to begin the definition of the main() function with this header: int main(). This chapter discusses the function header syntax in more detail later, in the section “Functions,” but for those who can’t put their curiosity on hold, here’s a preview.
In general, a C++ function is activated, or called, by another function, and the function header describes the interface between a function and the function that calls it. The part preceding the function name is called the function return type; it describes information flow from a function back to the function that calls it. The part within the parentheses following the function name is called the argument list or parameter list; it describes information flow from the calling function to the called function. This general description is a bit confusing when you apply it to main() because you normally don’t call main() from other parts of your program. Typically, however, main() is called by startup code that the compiler adds to your program to mediate between the program and the operating system (Unix, Windows 7, Linux, or whatever). In effect, the function header describes the interface between main() and the operating system.
Consider the interface description for main(), beginning with the int part. A C++ function called by another function can return a value to the activating (calling) function. That value is called a return value. In this case, main() can return an integer value, as indicated by the keyword int. Next, note the empty parentheses. In general, a C++ function can pass information to another function when it calls that function. The portion of the function header enclosed in parentheses describes that information. In this case, the empty parentheses mean that the main() function takes no information, or in the usual terminology, main() takes no arguments. (To say that main() takes no arguments doesn’t mean that main() is an unreasonable, authoritarian function. Instead, argument is the term computer buffs use to refer to information passed from one function to another.)
In short, the following function header states that the main() function returns an integer value to the function that calls it and that main() takes no information from the function that calls it:
int main()
Many existing programs use the classic C function header instead:
main() // original C style
Under classic C, omitting the return type is the same as saying that the function is type int. However, C++ has phased out that usage.
You can also use this variant:
int main(void) // very explicit style
Using the keyword void in the parentheses is an explicit way of saying that the function takes no arguments. Under C++ (but not C), leaving the parentheses empty is the same as using void in the parentheses. (In C, leaving the parentheses empty means you are remaining silent about whether there are arguments.)
Some programmers use this header and omit the return statement:
void main()
This is logically consistent because a void return type means the function doesn’t return a value. However, although this variant works on some systems, it’s not part of the C++ Standard. Thus, on other systems it fails. So you should avoid this form and use the C++ Standard form; it doesn’t require that much more effort to do it right.
Finally, the ISO C++ Standard makes a concession to those who complain about the tiresome necessity of having to place a return statement at the end of main(). If the compiler reaches the end of main() without encountering a return statement, the effect will be the same as if you ended main() with this statement:
return 0;
This implicit return is provided only for main() and not for any other function.
Why main() by Any Other Name Is Not the Same
There’s an extremely compelling reason to name the function in the myfirst.cpp program main(): You must do so. Ordinarily, a C++ program requires a function called main(). (And not, by the way, Main() or MAIN() or mane(). Remember, case and spelling count.) Because the myfirst.cpp program has only one function, that function must bear the responsibility of being main(). When you run a C++ program, execution always begins at the beginning of the main() function. Therefore, if you don’t have main(), you don’t have a complete program, and the compiler points out that you haven’t defined a main() function.
There are exceptions. For example, in Windows programming you can write a dynamic link library (DLL) module. This is code that other Windows programs can use. Because a DLL module is not a standalone program, it doesn’t need a main(). Programs for specialized environments, such as for a controller chip in a robot, might not need a main(). Some programming environments provide a skeleton program calling some nonstandard function, such as _tmain(); in that case there is a hidden main() that calls _tmain(). But your ordinary standalone program does need a main(); this book discusses that sort of program.
C++ Comments
The double slash (//) introduces a C++ comment. A comment is a remark from the programmer to the reader that usually identifies a section of a program or explains some aspect of the code. The compiler ignores comments. After all, it knows C++ at least as well as you do, and, in any case, it’s incapable of understanding comments. As far as the compiler is concerned, Listing 2.1 looks as if it were written without comments, like this:
#include <iostream>
int main()
{
    using namespace std;
    cout << "Come up and C++ me some time.";
    cout << endl;
    cout << "You won't regret it!" << endl;
    return 0;
}
C++ comments run from the // to the end of the line. A comment can be on its own line, or it can be on the same line as code. Incidentally, note the first line in Listing 2.1:
// myfirst.cpp -- displays a message
In this book all programs begin with a comment that gives the filename for the source code and a brief program summary. As mentioned in Chapter 1, the filename extension for source code depends on your C++ system. Other systems might use myfirst.C or myfirst.cxx for names.
The C++ Preprocessor and the iostream File
Here’s the short version of what you need to know. If your program is to use the usual C++ input or output facilities, you provide these two lines:
#include <iostream>
using namespace std;
There are some alternatives to using the second line, but let’s keep things simple for now. (If your compiler doesn’t like these lines, it’s not C++98 compatible, and it will have many other problems with the examples in this book.) That’s all you really must know to make your programs work, but now let’s take a more in-depth look.
C++, like C, uses a preprocessor. This is a program that processes a source file before the main compilation takes place. (Some C++ implementations, as you might recall from Chapter 1, use a translator program to convert a C++ program to C. Although the translator is also a form of preprocessor, we’re not discussing that preprocessor; instead, we’re discussing the one that handles directives whose names begin with #.) You don’t have to do anything special to invoke this preprocessor. It automatically operates when you compile the program.
Listing 2.1 uses the #include directive:
#include <iostream> // a PREPROCESSOR directive
This directive causes the preprocessor to add the contents of the iostream file to your program. This is a typical preprocessor action: adding or replacing text in the source code before it’s compiled.
This raises the question of why you should add the contents of the iostream file to the program. The answer concerns communication between the program and the outside world. The io in iostream refers to input, which is information brought into the program, and to output, which is information sent out from the program. C++’s input/output scheme involves several definitions found in the iostream file. Your first program needs these definitions to use the cout facility to display a message. The #include directive causes the contents of the iostream file to be sent along with the contents of your file to the compiler. In essence, the contents of the iostream file replace the #include <iostream> line in the program. Your original file is not altered, but a composite file formed from your file and iostream goes on to the next stage of compilation.
Header Filenames
Files such as iostream are called include files (because they are included in other files) or header files (because they are included at the beginning of a file). C++ compilers come with many header files, each supporting a particular family of facilities. The C tradition has been to use the h extension with header files as a simple way to identify the type of file by its name. For example, the C math.h header file supports various C math functions. Initially, C++ did the same. For instance, the header file supporting input and output was named iostream.h. But C++ usage has changed. Now the h extension is reserved for the old C header files (which C++ programs can still use), whereas C++ header files have no extension. There are also C header files that have been converted to C++ header files. These files have been renamed by dropping the h extension (making it a C++-style name) and prefixing the filename with a c (indicating that it comes from C). For example, the C++ version of math.h is the cmath header file. Sometimes the C and C++ versions of C header files are identical, whereas in other cases the new version might have a few changes. For purely C++ header files such as iostream, dropping the h is more than a cosmetic change, for the h-free header files also incorporate namespaces, the next topic in this chapter. Table 2.1 summarizes the naming conventions for header files.
Table 2.1 Header File Naming Conventions
In view of the C tradition of using different filename extensions to indicate different file types, it appears reasonable to have some special extension, such as .hpp or .hxx, to indicate C++ header files. The ANSI/ISO committee felt so, too. The problem was agreeing on which extension to use, so eventually they agreed on nothing.
Namespaces
If you use iostream instead of iostream.h, you should use the following namespace directive to make the definitions in iostream available to your program:
using namespace std;
This is called a using directive. The simplest thing to do is to accept this for now and worry about it later (for example, in Chapter 9, “Memory Models and Namespaces”). But so you won’t be left completely in the dark, here’s an overview of what’s happening.
Namespace support is a C++ feature designed to simplify the writing of large programs and of programs that combine pre-existing code from several vendors and to help organize programs. One potential problem is that you might use two prepackaged products that both have, say, a function called wanda(). If you then use the wanda() function, the compiler won’t know which version you mean. The namespace facility lets a vendor package its wares in a unit called a namespace so that you can use the name of a namespace to indicate which vendor’s product you want. So Microflop Industries could place its definitions in a namespace called Microflop. Then Microflop::wanda() would become the full name for its wanda() function. Similarly, Piscine::wanda() could denote Piscine Corporation’s version of wanda(). Thus, your program could now use the namespaces to discriminate between various versions:
Microflop::wanda("go dancing?");       // use Microflop namespace version
Piscine::wanda("a fish named Desire"); // use Piscine namespace version
In this spirit, the classes, functions, and variables that are a standard component of C++ compilers are now placed in a namespace called std. This takes place in the h-free header files. This means, for example, that the cout variable used for output and defined in iostream is really called std::cout and that endl is really std::endl. Thus, you can omit the using directive and, instead, code in the following style:
std::cout << "Come up and C++ me some time.";
std::cout << std::endl;
However, many users don’t feel like converting pre-namespace code, which uses iostream.h and cout, to namespace code, which uses iostream and std::cout, unless they can do so without a lot of hassle. This is where the using directive comes in. The following line means you can use names defined in the std namespace without using the std:: prefix:
using namespace std;
This using directive makes all the names in the std namespace available. Modern practice regards this as a bit lazy and potentially a problem in large projects. The preferred approaches are to use the std:: qualifier or to use something called a using declaration to make just particular names available:
using std::cout; // make cout available
using std::endl; // make endl available
using std::cin;  // make cin available
If you use these directives instead of the following, you can use cin and cout without attaching std:: to them:
using namespace std; // lazy approach, all names available
But if you need to use other names from iostream, you have to add them to the using list individually. This book initially uses the lazy approach for a couple of reasons. First, for simple programs, it’s not really a big issue which namespace management technique you use. Second, I’d rather emphasize the more basic aspects of learning C++. Later, the book uses the other namespace techniques.
C++ Output with cout
Now let’s look at how to display a message. The myfirst.cpp program uses the following C++ statement:
cout << "Come up and C++ me some time.";
The part enclosed within the double quotation marks is the message to print. In C++, any series of characters enclosed in double quotation marks is called a character string, presumably because it consists of several characters strung together into a larger unit. The << notation indicates that the statement is sending the string to cout; the symbols point the way the information flows. And what is cout? It’s a predefined object that knows how to display a variety of things, including strings, numbers, and individual characters. (An object, as you might remember from Chapter 1, is a particular instance of a class, and a class defines how data is stored and used.)
Well, using objects so soon is a bit awkward because you won’t learn about objects for several more chapters. Actually, this reveals one of the strengths of objects. You don’t have to know the innards of an object in order to use it. All you must know is its interface—that is, how to use it. The cout object has a simple interface. If string represents a string, you can do the following to display it:
cout << string;
This is all you must know to display a string, but now take a look at how the C++ conceptual view represents the process. In this view, the output is a stream—that is, a series of characters flowing from the program. The cout object, whose properties are defined in the iostream file, represents that stream. The object properties for cout include an insertion operator (<<) that inserts the information on its right into the stream. Consider the following statement (note the terminating semicolon):
cout << "Come up and C++ me some time.";
It inserts the string “Come up and C++ me some time.” into the output stream. Thus, rather than say that your program displays a message, you can say that it inserts a string into the output stream. Somehow, that sounds more impressive (see Figure 2.2).
Figure 2.2 Using cout to display a string.
The Manipulator endl
Now let’s examine an odd-looking notation that appears in the second output statement in Listing 2.1:
cout << endl;
endl is a special C++ notation that represents the important concept of beginning a new line. Inserting endl into the output stream causes the screen cursor to move to the beginning of the next line. Special notations like endl that have particular meanings to cout are dubbed manipulators. Like cout, endl is defined in the iostream header file and is part of the std namespace.
Note that the cout facility does not move automatically to the next line when it prints a string, so the first cout statement in Listing 2.1 leaves the cursor positioned just after the period at the end of the output string. The output for each cout statement begins where the last output ended, so omitting endl would result in this output for Listing 2.1:
Come up and C++ me some time.You won't regret it!
Note that the Y immediately follows the period. Let’s look at another example. Suppose you try this code:
cout << "The Good, the";
cout << "Bad, ";
cout << "and the Ukulele";
cout << endl;
It produces the following output:
The Good, theBad, and the Ukulele
Again, note that the beginning of one string comes immediately after the end of the preceding string. If you want a space where two strings join, you must include it in one of the strings. (Remember that to try out these output examples, you have to place them in a complete program, with a main() function header and opening and closing braces.)
The Newline Character
C++ has another, more ancient, way to indicate a new line in output—the C notation \n:
cout << "What's next?\n"; // \n means start a new line
The \n combination is considered to be a single character called the newline character.
If you are displaying a string, you need less typing to include the newline as part of the string than to tag an endl onto the end:
cout << "Pluto is a dwarf planet.\n";    // show text, go to next line
cout << "Pluto is a dwarf planet." << endl;  // show text, go to next line
On the other hand, if you want to generate a newline by itself, both approaches take the same amount of typing, but most people find the keystrokes for endl to be more comfortable:
cout << "\n";   // start a new line
cout << endl;   // start a new line
Typically, this book uses an embedded newline character (\n) when displaying quoted strings and the endl manipulator otherwise. One difference is that endl guarantees the output will be flushed (in this case, immediately displayed onscreen) before the program moves on. You don’t get that guarantee with "\n", which means that on some systems, in some circumstances, a prompt might not be displayed until after you enter the information being prompted for.
The newline character is one example of special keystroke combinations termed “escape sequences”; they are further discussed in Chapter 3, “Dealing with Data.”
C++ Source Code Formatting
Some languages, such as FORTRAN, are line-oriented, with one statement to a line. For these languages, the carriage return (generated by pressing the Enter key or the Return key) serves to separate statements. In C++, however, the semicolon marks the end of each statement. This leaves C++ free to treat the carriage return in the same way as a space or a tab. That is, in C++ you normally can use a space where you would use a carriage return and vice versa. This means you can spread a single statement over several lines or place several statements on one line. For example, you could reformat myfirst.cpp as follows:
#include <iostream>
int main
() { using
namespace
std; cout
<<
"Come up and C++ me some time."
; cout <<
endl; cout << "You won't regret it!" <<
endl;return 0; }
This is visually ugly but valid code. You do have to observe some rules. In particular, in C and C++ you can’t put a space, tab, or carriage return in the middle of an element such as a name, nor can you place a carriage return in the middle of a string. Here are examples of what you can’t do:
int ma in()    // INVALID -- space in name
re
turn 0;        // INVALID -- carriage return in word
cout << "Behold the Beans
of Beauty!";   // INVALID -- carriage return in string
(However, the raw string, added by C++11 and discussed briefly in Chapter 4, does allow including a carriage return in a string.)
Tokens and White Space in Source Code
The indivisible elements in a line of code are called tokens (see Figure 2.3). Generally, you must separate one token from the next with a space, tab, or carriage return, which collectively are termed white space. Some single characters, such as parentheses and commas, are tokens that need not be set off by white space. Here are some examples that illustrate when white space can be used and when it can be omitted:
Figure 2.3 Tokens and white space.
return0;       // INVALID, must be return 0;
return(0);     // VALID, white space omitted
return (0);    // VALID, white space used
intmain();     // INVALID, white space omitted
int main()     // VALID, white space omitted in ()
int main ( )   // ALSO VALID, white space used in ( )
C++ Source Code Style
Although C++ gives you much formatting freedom, your programs will be easier to read if you follow a sensible style. Having valid but ugly code should leave you unsatisfied. Most programmers use styles similar to that of Listing 2.1, which observes these rules:
- One statement per line
- An opening brace and a closing brace for a function, each of which is on its own line
- Statements in a function indented from the braces
- No whitespace around the parentheses associated with a function name
The first three rules have the simple intent of keeping the code clean and readable. The fourth helps to differentiate functions from some built-in C++ structures, such as loops, that also use parentheses. This book alerts you to other guidelines as they come up. | https://www.informit.com/articles/article.aspx?p=1758797&seqNum=2 | CC-MAIN-2022-40 | refinedweb | 4,524 | 61.06 |
SortColumn
Since: BlackBerry 10.0.0
#include <bb/pim/contacts/ContactConsts>
The SortColumn class represents the columns that can be used to sort contacts.
You can use the SortColumn::Type enumeration to specify the columns that should be used to sort contacts. For example, you can use a SortColumn::Type enumeration value in ContactListFilters::setSortBy() to sort contacts by first name, last name, or company name.
Overview
Public Types Index
Public Types
An enumeration of possible columns that can be used to sort contacts.
Since: BlackBerry 10.0.0
- FirstName 0
Indicates that contacts should be sorted by FirstName.
- LastName 1
  Indicates that contacts should be sorted by LastName.
  Since: BlackBerry 10.0.0
- CompanyName 2
  Indicates that contacts should be sorted by CompanyName.
  Since: BlackBerry 10.0.0
ColdFusion DateTimeFormat() Utility Function
I love ColdFusion's DateFormat() and TimeFormat() functions; they are hugely useful. And, most of the time, I use them independently of each other. But, often enough, I use DateFormat() followed by TimeFormat(). Wouldn't it be cool if ColdFusion had a DateTimeFormat() function that would accept both a date and a time mask? I think it would be, and here's what it might look like:
<cffunction
    name="DateTimeFormat"
    access="public"
    returntype="string"
    output="false"
    hint="Formats the given date with both a date and time format mask.">

    <!--- Define arguments. --->
    <cfargument name="Date" type="date" required="true"
        hint="The date/time stamp that we are formatting." />
    <cfargument name="DateMask" type="string" required="false" default="dd-mmm-yyyy"
        hint="The mask used for the DateFormat() method call." />
    <cfargument name="TimeMask" type="string" required="false" default="h:mm TT"
        hint="The mask used for the TimeFormat() method call." />
    <cfargument name="Delimiter" type="string" required="false" default=" at "
        hint="This is the string that goes between the two formatted parts (date and time)." />

    <!--- Return the date/time format by concatenating the date and
        time formatting separated by the given delimiter. --->
    <cfreturn (
        DateFormat( ARGUMENTS.Date, ARGUMENTS.DateMask ) &
        ARGUMENTS.Delimiter &
        TimeFormat( ARGUMENTS.Date, ARGUMENTS.TimeMask )
        ) />
</cffunction>
Using the above ColdFusion user defined function, you could easily format date/time values:
<!--- DateTimeFormat called with all possible defaulted arguments. --->
#DateTimeFormat( Now() )#

<!--- DateTimeFormat called with explicit date and time masks and default delimiter. --->
#DateTimeFormat( Now(), "mmm d, yyyy", "h:mm TT" )#

<!--- DateTimeFormat called with explicit date and time masks as well as an explicit delimiter. --->
#DateTimeFormat( Now(), "mmm d, yyyy", "h:mm TT", " at the time of " )#
The above code would give us the following output:
22-May-2007 at 7:24 AM
May 22, 2007 at 7:24 AM
May 22, 2007 at the time of 7:24 AM
I'm sure this has been done before, but I just really hope that one day Adobe adds it to the ColdFusion built-in function list.
Reader Comments
Yup, Ray did it back in 2001 and has it on his cflib.org site
cflib.org hasn't got much press lately and it's easy to forget the wealth of udf's on the site.
@Chris,
It's funny, right after I posted this, I did a quick Google search to see if this was out there already (as I assumed it was) and there it was, CFLib as the second search result.
2001... I guess that means I'm only 6 years behind the Jedi :) The force is weak with me.
Heh, people _do_ forget about CFLib, so it is no big deal. ;) I need to get off my ass and release that new version that I've been talking about since 2004. ;)
Ben, good job anyway. If nothing else you made me aware of the existing function at cflib. Thanks!
@Ray,
Some of the best meals I have ever eaten were cooking over many many hours... that doesn't exactly translate to web sites, other than to say, good things are worth the wait.
@Ray,
Are you ever going to just integrate cflib into riaforge?
No plans. I really look at RIAForge as more for 'projects', not single CFCs/UDFs/tags. Now that is NOT official RIAForge policy. Shoot, even I have a single CFC up there I believe. But in GENERAL those are my thoughts. (Have to treat carefully here. I admin both sites, but Adobe is the official owner of RIAForge. Etc etc etc.)
Thanks a lot Ben this was really helpful.
Changed the Defaults to mm/dd/yyyy HH:mm:ss
It worked like a charm.
@Del,
Glad you are finding this useful.
Thanks for the function to format date and time together. However I found a similar method if you just want to display the date and time.
By doing this:
#dateformat(created_date, "dd-MMM-yyyy")# #timeformat(created_date, "h:mm:ss tt")#
Hope this helps out.
@Deepesh,
This function was actually meant to replace having to use two functions separately. But, of course, you should use whichever you prefer.

trying to do the logic to do the math to display the correct time of the post in the database based on where in the world they're located. I keep getting the times an hour off. When the date/time stamp is saved to the database in a new post, I use the dateformat and timeformat. Are those two functions pulling the coldfusion server time or the computer's local clock time? If it's the server time, how do I figure out what the server time (or even timezone of the server) actually is using CF code? Thanks Ben! - Justin

analyzer, along with a few other CFWheels-related issues, I was notified that my UDF now conflicts with the new internal DateTimeFormat() function within CF itself. Added in CF10... so now "I Gots Tuh Modifiez Muh Code."
Thanks for the work, and thanks to all the others who provided more links to follow. More information and examples are great things to have! | https://www.bennadel.com/blog/717-coldfusion-datetimeformat-utility-function.htm | CC-MAIN-2020-50 | refinedweb | 859 | 65.62 |
Networking Services Library Functions nis_objects(3NSL)
NAME
nis_objects - NIS+ object formats
SYNOPSIS
cc [ flag ... ] file ... -lnsl [ library ... ]
/usr/include/rpcsvc/nis_objects.x
DESCRIPTION
Common Attributes
The NIS+ service uses a variant record structure to hold the
contents of the objects that are used by the NIS+ service.
These objects all share a common structure which, is a 64 bit
number that uniquely identifies this instance of the object
on this server. This member is filled in by the server when
the object is created and changed by the server when the
object is modified. When used in conjunction with the
object's name and domain it uniquely identifies the object
in the entire NIS+ namespace.
The second member, zo_name, contains the leaf name of the
object. This name is never terminated with a `.' (dot).
When an object is created or added to the namespace, the
client library will automatically fill in this field and
the domain name from the name that was passed to the func-
tion.
zo_domain contains the name of the NIS+ domain to which
this object belongs. This information is useful when track-
ing the parentage of an object from a cache. When used in
conjunction with the members zo_name and zo_oid, it
uniquely identifies an object. This makes it possible to
always reconstruct the name of an object by using the code
fragment
SunOS 5.8 Last change: 10 Nov 1999 1
sprintf(buf,"%s.%s", obj=>zo_name, obj=>zo_domain);
The zo_owner and zo_group members contain the NIS+ names
of the object's principal owner and group owner, respec-
tively. Both names must be NIS+ fully qualified names. How-
ever, neither name can be used directly to identify the
object they represent. This stems from the condition that
NIS+ uses itself to store information that it exports.
The zo_owner_dir.domain.
The query will return to the server credential information
about principal for all flavors of RPC authentication that
are in use by that principal. When an RPC request is made to
the server, the authentication flavor is extracted from the
request and is used to find out the NIS+ principal name of
the client. For example, if the client is using the
AUTH_DES flavor, the query takes the form
[auth_name=netname,auth_type=AUTH_DES],cred.org_dir.domain.
This query will return an entry which contains a principal
name in the first column. This NIS+ principal name is used
to control access to NIS+ objects.
The group owner for the object is treated differently. The
group owner member is optional (it should be the null string
if not present) but must be fully qualified if present. A
group name takes the form
group.domain.
which the server then maps into a name of the form
group.groups_dir.domain.
The purpose of this mapping is to prevent NIS+ group names
from conflicting with user specified domain or table names.
For example, if a domain was called engineering.foo.com.,
then without the mapping a NIS+ group of the same name to
represent members of engineering would not be possible. The
contents of groups are lists of NIS+ principal names which
are used exactly like the zo_owner name in the object. See
nis_groups(3NSL) for more details.
The zo_access member contains the bitmask of access rights
assigned to this object. There are four access rights
defined, and four are reserved for future use and must be
zero. This group of 8 access rights can be granted to four
categories of client. These categories are the object's
owner, the object's group owner, all authenticated clients
(world), and all unauthenticated clients (nobody). Note
that access granted to ``nobody'' is really access granted
to everyone, authenticated and unauthenticated clients.
The zo_ttl member contains the number of seconds that the
object can ``live'' in a cache before it is expired. This
value is called the time to live for this object. This
number is particularly important on group and directory
(domain) objects. When an object is cached, the current
time is added to the value in zo_ttl to give the object's
expiration time; a large zo_ttl keeps the object cached
longer. The benefits are reversed for setting the time to a
small value.
Generally setting the value to 43200 (12 hrs) is reasonable
for things that change day to day, and 3024000 is good for
things that change week to week. Setting the value to 0
will prevent the object from ever being cached since it
would expire immediately.
The zo_data member is a discriminated union with the
following members:
zotypes zo_type;
union {
significant detail.
Directory Objects
The first type of object is the directory object. This
object's variant part is defined as follows:
enum nstype {
UNKNOWN = 0,
NIS = 1,
SUNYP = 2,
DNS = 4,
X500 = 5,
DNANS = 6,
XCHS = 7,
}
typedef enum nstype nstype;
struct oar_mask {
uint_t oa_rights;
composed using the zo_name and zo_domain members. For other
name services, this name will be a name that they
understand.
The do_servers structure contains two members.
do_servers_val is an array of nis_server structures;
do_servers_len is the number of cells in the array. The
nis_server structure is designed to contain enough informa-
tion such that machines on the network providing name ser-
vices can be contacted without having to use a name service.
In the case of NIS+ servers, this information is the name of
the machine in name, its public key for authentication in
pkey, and a variable length array of endpoints, each of
which describes the network endpoint for the rpcbind daemon
on the named machine. The client library uses the addresses
to contact the server using a transport that both the client
and server can communicate on and then queries the rpcbind
daemon to get the actual transport address that the server
is using.
Note that the first server in the con-
tains two members: oa_rights specifies the access rights
allowed for objects of type oa_otype. These access rights
are used for objects of the given type in the directory when
they are present in this array.
The granting of access rights for objects contained within a
directory is actually two-tiered. If the directory object
itself grants a given access right (using the zo_access
member in the nis_object structure representing the direc-
tory), then all objects within the directory are allowed
that access. Otherwise, the do_armask structure is examined
to see if the access is allowed specifically for that type
of structure. This allows the administrator of a namespace
to set separate policies for different object types, for
example, one policy for the creation of tables and another
policy for the creation of other directories. See nis+(1)
for more details.
Link Objects
Link objects provide a means of providing aliases or sym-
bolic speci-
fied, the nis_lookup(3NSL) function will always return
non-entry objects.
Group prin-
cipals. For a complete description of how group objects are
manipulated see nis_groups(3NSL).
Table Objects
The NIS+ table object is analogous to a YP map. The differ-
ences stem from the access controls, and the variable sche-
mas that NIS+ allows. The table objects data structure is
defined as follows:
member contains a string that identifies the
type of entries in this table. NIS+ does not enforce any
policies as to the contents of this string. However, when
entries are added to the table, the NIS+ service will check
to see that they have the same ``type'' as the table as
specified by this member.
See nis_tables(3NSL) for information on these flags.
In addition to checking the type, the service will check
that the number of columns in an entry is the same as those
in the table before allowing that entry to be added.
Each column has associated with it a name in tc_name, a set
of flags in tc_flags, and a set of access rights in
tc_rights. The name should be indicative of the contents of
that column.
The TA_BINARY flag indicates that data in the column is
binary (rather than text). Columns that are searchable can-
not speci-
fied combi-
nation;
structures. Columns that have the EN_BINARY flag set are presumed to
contain binary data. The server will ensure that the column
in the table object specifies binary data prior to allowing
the entry to be added. When modifying entries in a table,
only those columns that have changed need be sent to the
server. Those columns should each have the EN_MODIFIED flag
set to indicate this to the server.
SEE ALSO
nis+(1), nis_groups(3NSL), nis_names(3NSL),
nis_server(3NSL), nis_subr(3NSL), nis_tables(3NSL)
Finance: Stock analysis.
1. The accompanying Excel file provides you with monthly price and dividend data for CVS Caremark (Ticker symbol: CVS) from Yahoo Finance. (If you want to see where the data came from, go to Yahoo Finance. Type in the ticker, CVS. Then, click historical prices.)
2. Expand the spreadsheet and compute monthly HPRs for each of the 24 months using the HPR formula HPR = (P1 − P0 + D) / P0, where P0 is the beginning-of-month price, P1 is the end-of-month price, and D is the dividend paid during the month.
Make sure to include the dividends.
3. Calculate arithmetic and geometric mean monthly HPRs for CVS.
4. Continue to expand the spreadsheet to calculate the standard deviation of the monthly HPRs.
5. Calculate the annualized version of your answers from parts 3 and 4. Your answers should be the annualized versions of only the final answers of parts 3 and 4, not annualized versions of each monthly HPR.
Note:
You may notice the
The problem deals with determining the Holding period return (HPR) from stock prices. | https://brainmass.com/economics/finance/finance-stock-analysis-387977 | CC-MAIN-2017-22 | refinedweb | 152 | 65.93 |
When a project starts getting large, it’s considered good software engineering practice to split it up into a bunch of smaller pieces and then fit them together. It is also important to have a well-defined interface, so that some of your functionality is private and some is public. To facilitate these kinds of things, Rust has a module system.
Rust has two distinct terms that relate to the module system: ‘crate’ and ‘module’. A crate is synonymous with a ‘library’ or ‘package’ in other languages. Hence “Cargo” as the name of Rust’s package management tool: you ship your crates to others with Cargo. Crates can produce an executable or a library, depending on the project.
Each crate has an implicit root module that contains the code for that crate. You can then define a tree of sub-modules under that root module. Modules allow you to partition your code within the crate itself.
As an example, let’s make a phrases crate, which will give us various phrases in different languages. To keep things simple, we’ll stick to ‘greetings’ and ‘farewells’ as two kinds of phrases, and use English and Japanese (日本語) as two languages for those phrases to be in. We’ll use this module layout:
+-----------+ +---| greetings | | +-----------+ +---------+ | +---| english |---+ | +---------+ | +-----------+ | +---| farewells | +---------+ | +-----------+ | phrases |---+ +---------+ | +-----------+ | +---| greetings | | +----------+ | +-----------+ +---| japanese |--+ +----------+ | | +-----------+ +---| farewells | +-----------+
In this example,
phrases is the name of our crate. All of the rest are
modules. You can see that they form a tree, branching out from the crate
root, which is the root of the tree:
phrases itself.
Now that we have a plan, let’s define these modules in code. To start, generate a new crate with Cargo:
$ cargo new phrases $ cd phrases
If you remember, this generates a simple project for us:
$ tree .
.
├── Cargo.toml
└── src
    └── lib.rs

1 directory, 2 files
src/lib.rs is our crate root, corresponding to the
phrases in our diagram
above.
To define each of our modules, we use the
mod keyword. Let’s make our
src/lib.rs look like this:
mod english {
    mod greetings {
    }

    mod farewells {
    }
}

mod japanese {
    mod greetings {
    }

    mod farewells {
    }
}
After the
mod keyword, you give the name of the module. Module names follow
the conventions for other Rust identifiers:
lower_snake_case. The contents of
each module are within curly braces (
{}).
Within a given mod, you can declare sub-mods. We can refer to sub-modules with double-colon (::) notation: our four nested modules are english::greetings, english::farewells, japanese::greetings, and japanese::farewells. Because these sub-modules are namespaced under their parent module, the names don’t conflict: english::greetings and japanese::greetings are distinct, even though their names are both greetings.
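Before splitting anything into separate files, the namespacing can be seen in a single-file sketch. (This uses the pub keyword, which is covered later in this chapter, so that the functions are callable from main().)

```rust
mod english {
    pub mod greetings {
        pub fn hello() -> String {
            "Hello!".to_string()
        }
    }
}

mod japanese {
    pub mod greetings {
        pub fn hello() -> String {
            "こんにちは".to_string()
        }
    }
}

fn main() {
    // Same leaf name, no conflict: each hello() lives in its own namespace.
    println!("{}", english::greetings::hello());
    println!("{}", japanese::greetings::hello());
}
```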
Because this crate does not have a
main() function, and is called
lib.rs,
Cargo will build this crate as a library:
$ cargo build
   Compiling phrases v0.0.1 ()
$ ls target/debug
build  deps  examples  libphrases-a7448e02a0468eaa.rlib  native
libphrases-hash.rlib is the compiled crate. Before we see how to use this
crate from another crate, let’s break it up into multiple files.
If each crate were just one file, these files would get very large. It’s often easier to split up crates into multiple files, and Rust supports this in two ways.
Instead of declaring a module like this:

mod english {
    // contents of our module go here
}
We can instead declare our module like this:
mod english;
If we do that, Rust will expect to find either a
english.rs file, or a
english/mod.rs file with the contents of our module.
Note that in these files, you don’t need to re-declare the module: that’s
already been done with the initial
mod declaration.
Using these two techniques, we can break up our crate into two directories and seven files:
$ tree .
.
├── Cargo.lock
├── Cargo.toml
├── src
│   ├── english
│   │   ├── farewells.rs
│   │   ├── greetings.rs
│   │   └── mod.rs
│   ├── japanese
│   │   ├── farewells.rs
│   │   ├── greetings.rs
│   │   └── mod.rs
│   └── lib.rs
└── target
    └── debug
        ├── build
        ├── deps
        ├── examples
        ├── libphrases-a7448e02a0468eaa.rlib
        └── native
src/lib.rs is our crate root, and looks like this:
mod english;
mod japanese;
These two declarations tell Rust to look for either
src/english.rs and
src/japanese.rs, or
src/english/mod.rs and
src/japanese/mod.rs, depending
on our preference. In this case, because our modules have sub-modules, we’ve
chosen the second. Both
src/english/mod.rs and
src/japanese/mod.rs look
like this:
mod greetings;
mod farewells;
Again, these declarations tell Rust to look for either src/english/greetings.rs and src/japanese/greetings.rs, or src/english/greetings/mod.rs and src/japanese/greetings/mod.rs (and likewise for the farewells sub-modules). Because these sub-modules don’t have their own sub-modules, we’ve chosen to make them src/english/greetings.rs and src/japanese/farewells.rs. Whew!
src/english/greetings.rs and
src/japanese/farewells.rs are
both empty at the moment. Let’s add some functions.
Put this in
src/english/greetings.rs:
fn hello() -> String {
    "Hello!".to_string()
}
Put this in
src/english/farewells.rs:
fn goodbye() -> String {
    "Goodbye.".to_string()
}
Put this in
src/japanese/greetings.rs:
fn hello() -> String {
    "こんにちは".to_string()
}
Of course, you can copy and paste this from this web page, or just type something else. It’s not important that you actually put ‘konnichiwa’ to learn about the module system.
Put this in
src/japanese/farewells.rs:
fn goodbye() -> String {
    "さようなら".to_string()
}
(This is ‘Sayōnara’, if you’re curious.)
Now that we have some functionality in our crate, let’s try to use it from another crate.
We have a library crate. Let’s make an executable crate that imports and uses our library.
Make a
src/main.rs and put this in it (it won’t quite compile yet):
extern crate phrases;

fn main() {
    println!("Hello in English: {}", phrases::english::greetings::hello());
    println!("Goodbye in English: {}", phrases::english::farewells::goodbye());

    println!("Hello in Japanese: {}", phrases::japanese::greetings::hello());
    println!("Goodbye in Japanese: {}", phrases::japanese::farewells::goodbye());
}
The
extern crate declaration tells Rust that we need to compile and link to
the
phrases crate. We can then use
phrases’ modules in this one. As we
mentioned earlier, you can use double colons to refer to sub-modules and the
functions inside of them.
(Note: when importing a crate that has dashes in its name "like-this", which is
not a valid Rust identifier, it will be converted by changing the dashes to
underscores, so you would write
extern crate like_this;.)
Also, Cargo assumes that
src/main.rs is the crate root of a binary crate,
rather than a library crate. Our package now has two crates:
src/lib.rs and
src/main.rs. This pattern is quite common for executable crates: most
functionality is in a library crate, and the executable crate uses that
library. This way, other programs can also use the library crate, and it’s also
a nice separation of concerns.
This doesn’t quite work yet, though. We get four errors that look similar to this:
$ cargo build
   Compiling phrases v0.0.1 ()
src/main.rs:4:38: 4:72 error: function `hello` is private
src/main.rs:4     println!("Hello in English: {}", phrases::english::greetings::hello());
                                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
note: in expansion of format_args!
<std macros>:2:25: 2:58 note: expansion site
<std macros>:1:1: 2:62 note: in expansion of print!
<std macros>:3:1: 3:54 note: expansion site
<std macros>:1:1: 3:58 note: in expansion of println!
phrases/src/main.rs:4:5: 4:76 note: expansion site
By default, everything is private in Rust. Let’s talk about this in some more depth.
Rust allows you to precisely control which aspects of your interface are
public, and so private is the default. To make things public, you use the
pub
keyword. Let’s focus on the
english module first, so let’s reduce our
src/main.rs
to just this:
extern crate phrases;

fn main() {
    println!("Hello in English: {}", phrases::english::greetings::hello());
    println!("Goodbye in English: {}", phrases::english::farewells::goodbye());
}
In our
src/lib.rs, let’s add
pub to the
english module declaration:
pub mod english;
mod japanese;
And in our
src/english/mod.rs, let’s make both
pub:
pub mod greetings;
pub mod farewells;
In our
src/english/greetings.rs, let’s add
pub to our
fn declaration:
pub fn hello() -> String {
    "Hello!".to_string()
}
And also in
src/english/farewells.rs:
pub fn goodbye() -> String { "Goodbye.".to_string() }
Now, our crate compiles, albeit with warnings about not using the
japanese
functions:
$ cargo run Compiling phrases v0.0.1 () src/japanese/greetings.rs:1:1: 3:2 warning: function is never used: `hello`, #[warn(dead_code)] on by default src/japanese/greetings.rs:1 fn hello() -> String { src/japanese/greetings.rs:2 "こんにちは".to_string() src/japanese/greetings.rs:3 } src/japanese/farewells.rs:1:1: 3:2 warning: function is never used: `goodbye`, #[warn(dead_code)] on by default src/japanese/farewells.rs:1 fn goodbye() -> String { src/japanese/farewells.rs:2 "さようなら".to_string() src/japanese/farewells.rs:3 } Running `target/debug/phrases` Hello in English: Hello! Goodbye in English: Goodbye.
pub also applies to
structs and their member fields. In keeping with Rust’s
tendency toward safety, simply making a
struct public won't automatically
make its members public: you must mark the fields individually with
pub.
Now that our functions are public, we can use them. Great! However, typing out
phrases::english::greetings::hello() is very long and repetitive. Rust has
another keyword for importing names into the current scope, so that you can
refer to them with shorter names. Let’s talk about
use.
use
Rust has a
use keyword, which allows us to import names into our local scope.
Let’s change our
src/main.rs to look like this:
extern crate phrases; use phrases::english::greetings; use phrases::english::farewells; fn main() { println!("Hello in English: {}", greetings::hello()); println!("Goodbye in English: {}", farewells::goodbye()); }
The two
use lines import each module into the local scope, so we can refer to
the functions by a much shorter name. By convention, when importing functions, it’s
considered best practice to import the module, rather than the function directly. In
other words, you can do this:
extern crate phrases; use phrases::english::greetings::hello; use phrases::english::farewells::goodbye; fn main() { println!("Hello in English: {}", hello()); println!("Goodbye in English: {}", goodbye()); }
But it is not idiomatic. This is significantly more likely to introduce a
naming conflict. In our short program, it’s not a big deal, but as it grows, it
becomes a problem. If we have conflicting names, Rust will give a compilation
error. For example, if we made the
japanese functions public, and tried to do
this:
extern crate phrases; use phrases::english::greetings::hello; use phrases::japanese::greetings::hello; fn main() { println!("Hello in English: {}", hello()); println!("Hello in Japanese: {}", hello()); }
Rust will give us a compile-time error:
Compiling phrases v0.0.1 () src/main.rs:4:5: 4:40 error: a value named `hello` has already been imported in this module [E0252] src/main.rs:4 use phrases::japanese::greetings::hello; ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error: aborting due to previous error Could not compile `phrases`.
If we’re importing multiple names from the same module, we don’t have to type it out twice. Instead of this:fn main() { use phrases::english::greetings; use phrases::english::farewells; }
use phrases::english::greetings; use phrases::english::farewells;
We can use this shortcut:fn main() { use phrases::english::{greetings, farewells}; }
use phrases::english::{greetings, farewells};
pub use
You don’t just use
use to shorten identifiers. You can also use it inside of your crate
to re-export a function inside another module. This allows you to present an external
interface that may not directly map to your internal code organization.
Let’s look at an example. Modify your
src/main.rs to read like this:
extern crate phrases; use phrases::english::{greetings,farewells}; use phrases::japanese; fn main() { println!("Hello in English: {}", greetings::hello()); println!("Goodbye in English: {}", farewells::goodbye()); println!("Hello in Japanese: {}", japanese::hello()); println!("Goodbye in Japanese: {}", japanese::goodbye()); }
Then, modify your
src/lib.rs to make the
japanese mod public:
pub mod english; pub mod japanese;
Next, make the two functions public, first in
src/japanese/greetings.rs:
pub fn hello() -> String { "こんにちは".to_string() }
And then in
src/japanese/farewells.rs:
pub fn goodbye() -> String { "さようなら".to_string() }
Finally, modify your
src/japanese/mod.rs to read like this:
pub use self::greetings::hello; pub use self::farewells::goodbye; mod greetings; mod farewells;
The
pub use declaration brings the function into scope at this part of our
module hierarchy. Because we’ve
pub used this inside of our
japanese
module, we now have a
phrases::japanese::hello() function and a
phrases::japanese::goodbye() function, even though the code for them lives in
phrases::japanese::greetings::hello() and
phrases::japanese::farewells::goodbye(). Our internal organization doesn’t
define our external interface.
Here we have a
pub use for each function we want to bring into the
japanese scope. We could alternatively use the wildcard syntax to include
everything from
greetings into the current scope:
pub use self::greetings::*.
What about the
self? Well, by default,
use declarations are absolute paths,
starting from your crate root.
self makes that path relative to your current
place in the hierarchy instead. There’s one more special form of
use: you can
use super:: to reach one level up the tree from your current location. Some
people like to think of
self as
. and
super as
.., from many shells’
display for the current directory and the parent directory.
Outside of
use, paths are relative:
foo::bar() refers to a function inside
of
foo relative to where we are. If that’s prefixed with
::, as in
::foo::bar(), it refers to a different
foo, an absolute path from your
crate root.
This will build and run:
$ cargo run Compiling phrases v0.0.1 () Running `target/debug/phrases` Hello in English: Hello! Goodbye in English: Goodbye. Hello in Japanese: こんにちは Goodbye in Japanese: さようなら
Rust offers several advanced options that can add compactness and
convenience to your
extern crate and
use statements. Here is an example:
extern crate phrases as sayings; use sayings::japanese::greetings as ja_greetings; use sayings::japanese::farewells::*; use sayings::english::{self, greetings as en_greetings, farewells as en_farewells}; fn main() { println!("Hello in English; {}", en_greetings::hello()); println!("And in Japanese: {}", ja_greetings::hello()); println!("Goodbye in English: {}", english::farewells::goodbye()); println!("Again: {}", en_farewells::goodbye()); println!("And in Japanese: {}", goodbye()); }
What's going on here?
First, both
extern crate and
use allow renaming the thing that is being
imported. So the crate is still called "phrases", but here we will refer
to it as "sayings". Similarly, the first
use statement pulls in the
japanese::greetings module from the crate, but makes it available as
ja_greetings as opposed to simply
greetings. This can help to avoid
ambiguity when importing similarly-named items from different places.
The second
use statement uses a star glob to bring in all symbols from the
sayings::japanese::farewells module. As you can see we can later refer to
the Japanese
goodbye function with no module qualifiers. This kind of glob
should be used sparingly.
The third
use statement bears more explanation. It's using "brace expansion"
globbing to compress three
use statements into one (this sort of syntax
may be familiar if you've written Linux shell scripts before). The
uncompressed form of this statement would be:
use sayings::english; use sayings::english::greetings as en_greetings; use sayings::english::farewells as en_farewells;
As you can see, the curly brackets compress
use statements for several items
under the same path, and in this context
self just refers back to that path.
Note: The curly brackets cannot be nested or mixed with star globbing. | https://doc.rust-lang.org/1.5.0/book/crates-and-modules.html | CC-MAIN-2019-22 | refinedweb | 2,599 | 59.6 |
Query:
How to override equals and hashCode in Java? and what issues/pitfalls must be considered when overriding
equals and
hashCode?
How to override equals and hashCode in Java? Answer #1:
The theory (for the language lawyers and the mathematically inclined):
equals() (javadoc) must define an equivalence().
Use the excellent helper classes EqualsBuilder and HashCodeBuilder from the Apache Commons Lang library. An example:
public class Person { private String name; private int age; // ... @Override public int hashCode() { return new HashCodeBuilder(17, 31). // two randomly chosen prime numbers // if deriving: appendSuper(super.hashCode()). append(name). append(age). toHashCode(); } @Override public boolean equals(Object obj) { if (!(obj instanceof Person)) return false; if (obj == this) return true; Person rhs = (Person).
Answer #2:
There are some issues worth noticing if you’re dealing with classes that are persisted using an Object-Relationship Mapper (ORM) like Hibernate, if you didn’t think this was unreasonably complicated already!
Lazy loaded objects are subclasses
If your objects are persisted using an ORM, in many cases you will be dealing with dynamic proxies to avoid loading object too early from the data store. These proxies are implemented as subclasses of your own class. This means that
this.getClass() == o.getClass() will return
false. For example:
Person saved = new Person("John Doe"); Long key = dao.save(saved); dao.flush(); Person retrieved = dao.retrieve(key); saved.getClass().equals(retrieved.getClass()); // Will return false if Person is loaded lazy
If you’re dealing with an ORM, using
o instanceof Person is the only thing that will behave correctly.
Lazy loaded objects have null-fields
ORMs usually use the getters to force loading of lazy loaded objects. This means that
person.name will be
null if
person is lazy loaded, even if
person.getName() forces loading and returns “John Doe”. In my experience, this crops up more often in
hashCode() and
equals().
If you’re dealing with an ORM, make sure to always use getters, and never field references in
hashCode() and
equals().
Saving an object will change its state
Persistent objects often use a
id field to hold the key of the object. This field will be automatically updated when an object is first saved. Don’t use an id field in
hashCode(). But you can use it in
equals().
A pattern I often use is
if (this.getId() == null) { return this == other; } else { return this.getId().equals(other.getId()); }
But: you cannot include
getId() in
hashCode(). If you do, when an object is persisted, its
hashCode changes. If the object is in a
HashSet, you’ll “never” find it again.
In my
Person example, I probably would use
getName() for
hashCode and
getId() plus
getName() (just for paranoia) for
equals(). It’s okay if there are some risk of “collisions” for
hashCode(), but never okay for
equals().
hashCode() should use the non-changing subset of properties from
equals()
Answer #3:
A clarification about the
obj.getClass() != getClass().
This statement is the result of
equals() being inheritance unfriendly. The JLS (Java language specification) specifies that if
A.equals(B) == true then
B.equals(A) must also return
true. If you omit that statement inheriting classes that override
equals() (and change its behavior) will break this specification.
Consider the following example of what happens when the statement is omitted:
class A { int field1; A(int field1) { this.field1 = field1; } public boolean equals(Object other) { return (other != null && other instanceof A && ((A) other).field1 == field1); } } class B extends A { int field2; B(int field1, int field2) { super(field1); this.field2 = field2; } public boolean equals(Object other) { return (other != null && other instanceof B && ((B)other).field2 == field2 && super.equals(other)); } }
Doing
new A(1).equals(new A(1)) Also,
new B(1,1).equals(new B(1,1)) result give out true, as it should.
This looks all very good, but look what happens if we try to use both classes:
A a = new A(1); B b = new B(1,1); a.equals(b) == true; b.equals(a) == false;
Obviously, this is wrong.
If you want to ensure the symmetric condition. a=b if b=a and the Liskov substitution principle call
super.equals(other) not only in the case of
B instance, but check after for
A instance:
if (other instanceof B ) return (other != null && ((B)other).field2 == field2 && super.equals(other)); if (other instanceof A) return super.equals(other); else return false;
Which will output:
a.equals(b) == true; b.equals(a) == true;
Where, if
a is not a reference of
B, then it might be a be a reference of class
A (because you extend it), in this case you call
super.equals() too.
Answer #4:
Summary:
In his book Effective Java Programming Language Guide (Addison-Wesley, 2001), Joshua Bloch claims that “There is simply no way to extend an instantiable class and add an aspect while preserving the equals contract.” Tal disagrees.
His solution is to implement equals() by calling another nonsymmetric blindlyEquals() both ways. blindlyEquals() is overridden by subclasses, equals() is inherited, and never overridden.
Example:)); } } class ColorPoint extends Point { private Color c; protected boolean blindlyEquals(Object o) { if (!(o instanceof ColorPoint)) return false; ColorPoint cp = (ColorPoint)o; return (super.blindlyEquals(cp) && cp.color == this.color); } }
Note that equals() must work across inheritance hierarchies if the Liskov Substitution Principle is to be satisfied.
Answer #5:
Still amazed that none recommended the guava library for this.
//Sample taken from a current working project of mine just to illustrate the idea @Override public int hashCode(){ return Objects.hashCode(this.getDate(), this.datePattern); } @Override public boolean equals(Object obj){ if ( ! obj instanceof DateAndPattern ) { return false; } return Objects.equal(((DateAndPattern)obj).getDate(), this.getDate()) && Objects.equal(((DateAndPattern)obj).getDate(), this.getDatePattern()); }
Answer #6:
There are two methods in super class as java.lang.Object. We need to override them to custom object.
public boolean equals(Object obj) public int hashCode()
Equal objects must produce the same hash code as long as they are equal, however unequal objects need not produce distinct hash codes.
public class Test { private int num; private String data; public boolean equals(Object obj) { if(this == obj) return true; if((obj == null) || (obj.getClass() != this.getClass())) return false; // object must be Test at this point Test test = (Test)obj; return num == test.num && (data == test.data || (data != null && data.equals(test.data))); } public int hashCode() { int hash = 7; hash = 31 * hash + num; hash = 31 * hash + (null == data ? 0 : data.hashCode()); return hash; } // other methods }
Answer #7:
There are a couple of ways to do your check for class equality before checking member equality, and I think both are useful in the right circumstances.
- Use the
instanceofoperator.
- Use
this.getClass().equals(that.getClass()).
I use #1 in a
final equals implementation, or when implementing an interface that prescribes an algorithm for equals (like the
java.util collection interfaces—the right way to check with with
(obj instanceof Set) or whatever interface you’re implementing). It’s generally a bad choice when equals can be overridden because that breaks the symmetry property.
Option #2 allows the class to be safely extended without overriding equals or breaking symmetry.
If your class is also
Comparable, the
equals and
compareTo methods should be consistent too. Here’s a template for the equals method in a
Comparable class:
final class MyClass implements Comparable<MyClass> { … @Override public boolean equals(Object obj) { /* If compareTo and equals aren't final, we should check with getClass instead. */ if (!(obj instanceof MyClass)) return false; return compareTo((MyClass) obj) == 0; } }
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/how-to-override-equals-and-hashcode-in-java-answered/ | CC-MAIN-2022-40 | refinedweb | 1,260 | 58.69 |
Created attachment 31573 [details]
Classes that hold a reference to WebappClassLoader
I have a problem with unloading casses of a web application when I stop it.
All threads are closed, but all 6.000 classes remain in memory. The GC cannot destroy WebappClassLoader because it is held by a static hashMap of the naming services which is hold by the VM.
See attached image.
The problem occurs only when the application uses the naming service. I identified the first method in my application that triggers the problem:
public class MyBindAuthenticator extends BindAuthenticator {
...
@Override
public DirContextOperations authenticate(Authentication authentication) {
...
List<String> userns=getUserDns(username);
...
}
...
}
When I replace this line by a hardcoded list of strings, then the problem gets triggered by the next call of any naming service method. When I replace the whole authenticate() method by an empty one, then the problem disappears. But I need it for security reason.
Unfortunately, the problematic hashmap (with name securityTokens) is not accessible to me, so I cannot remove the references. I seems that all related classes are part of Catalina and not reachable from outside.
I would appreciate a workaround, if not bugfix is available.
1. Normally that reference should have been cleared by NamingContextListener
The sequence of events is
o.a.c.core.StandardContext.stopInternal()
...
fireLifecycleEvent(Lifecycle.CONFIGURE_STOP_EVENT, null);
-> o.a.c.core.NamingContextListener.lifecycleEvent(..)
Is there any indication in your log files that your Context failed to start up, or failed to stop?
2. What is 'BindAuthenticator'? There is no such class in Tomcat.
Source code for getUserDns() = ?
3. I think there is a bug in NamingContextListener.lifecycleEvent(..):
If its processing of startup event fails, then its 'initialized' field remains to be 'false'. This causes its processing of stop event to exit immediately without proper cleanup. Is it what happened here?
Thanks for your quick answer.
1)While analyzing this problem, I found and solved two problem causes:
I disabled the pooling of the LDAP interface because the pool starts a thread that remains running. Tomcat gave a related warning on application shutdown.
I implemented a thread local ring buffer for log messages (using log4j). This buffer was not cleared on shutdown, so the web app classloader remained in memory.
However, the problem with naming services is still present and I did not find out how to solve it without modifying Tomcat's source code.
2)BindAuthenticator is part of Spring Security. It is used to check the users name and password when he logs in. I had to override the authenticate method because the default method does not work for all users.
3) Good hint, I will check that.
Created attachment 31583 [details]
ContextAccessController cleans up properly
Created attachment 31584 [details]
New heap analysis showing classes that are keep WebAppClassloader in memory
Enabling FINEST logging for org.apache.catalina.core did not gave any helpful message. I still dont see any warning or error message in any logfile. But I have some few more debug messages now.
I managed tro track the event handling and did not find any problem there. Lifecycle.CONFIGURE_START_EVENT and Lifecycle.CONFIGURE_STOP_EVENT are both completely processed by NamingContextLister without exception.
Also the ContextAccessController removes the securityToken properly from the hashmap when I stop the program.
So what I saw today does not match the heap dump that I created a few days ago. Sorry for that, I must have done something wrong without noticing it by myself.
I attached a new screenshot from heap dump analysis, which looks much smaller now. And I see now totally other classes holding references to the classloader.
But these classes are again all part of Tomcat or JVM, am I right? How can I analyze that further?
Created attachment 31585 [details]
New heap analysis after lib upgrade
I upgraded the cssparser.jar library from version 0.9.5 to 0.9.13. Now the picture has changed a lot again.
The WebAppClassloader of my application has not path to GC Root anymore and is now hold by a soft or weak reference.
But that did not finally solve my problem. All classes of my web application still remain in memory even when I force a garbage collection several times. I am still not able to restart the application (except by doubling the PermGenSpace size).
My previous method to analyze the problem cause is not applicable anymore. There is no class anymore that holds a strong reference to the WebAppClassloader. What else can be the problem cause?
All the indications are that the root cause of this issue lies in application code or third party library code rather than Tomcat code. I am therefore resolving this as invalid. The users list is the best place to get help with this issue.
One pointer that may help you move this a little further forward before you post to the users list is that I have seen cases where the JVM fails to correctly report all GC roots. Tracking down the memory leak in these cases is a complete pain. I hit a similar issue with a Spring app and had to build the relevant Spring libraries locally so I could do what was effectively a binary search to track down the line of code that triggered the issue. It turned out to be a memory leak somewhere in the JRE's XML parser but I never could track down exactly what as the JVM never reported the GC root.
Thanks for your helpful answer.
And yes, I agree that the problem cause is obvisouly outside Tomcat.
.
Created attachment 31671 [details]
Screenshot of jvisualvm showing that webappclassloader has no gc root
I tested again with version 7.0.54, the problem still occurs.
The classloader of the stopped web application still remains in memory, but it has no gc root.
The problem occurs only when the after the application did an LDAP communication. It does not happen, when I comment out the related lines of code or when I configure Spring Security to use dummy user accounts (in xml file) instead of LDAP authentication.
Any ideas why the class loader remains in memory? | https://bz.apache.org/bugzilla/show_bug.cgi?id=56472 | CC-MAIN-2018-13 | refinedweb | 1,021 | 57.98 |
We are getting error message like:
No System Alias found for Service <> and user <> or No Service found for namespace <>, name <>, version<>
Please do the following to maintain system alias and gateway flags:
There are two flags related to the System Alias configuration in SAP NetWeaver Gateway.
Depending on the SAP Gateway content scenario and your system landscape you thus set up the system alias. The system alias is the result of the routing for an inbound request on SAP Gateway. It can be a remote or a local system.
1. Local GW Flag
The system that is responsible for processing (managing and storing) the data of an inbound request is the local SAP Gateway instance itself. This option is typically used for testing scenarios, for example when using the BOR Generator locally. If you activate Local SAP GW for a SAP System Alias called LOCAL, the RFC Destination is usually NONE.
2. Local App Flag
There are three main software component of gateway i.e. IW_FND, IW_BEP and GW_CORE. In case of SAP Netweaver release 740, the gateway software component is SAP_GWFND which is comprise of all three above mentioned software component(i.e. IW_BEP, IW_FND and GW_CORE).
Example 1:
When all the three components are in one system i.e. gateway system and backend system is a different system. In this case, System alias configuration should have Local App flag set as IWFND and IWBEP are together and RFC destination should point to SAP back-end. RFC destination will be used by IW_BEP data provider to call the RFC from SAP Backend.
Example 2:
When gateway system is having software component IW_FND and GW_CORE, and software component IW_BEP is deployed separately in Backend system. In this case, System alias configuration should not have Local App flag set, as IW_FND and IW_BEP are in different SAP systems and RFC destination should point to backend system where IWBEP is deployed which is used by IW_FND to route the calls.
Here, IWBEP data provider and metadata provider classes can use the local RFC Function Modules as well.
Example 3:
When all three software components is present in the Backend system.In this case, System alias configuration should have Local App flag set as IWFND and IWBEP are together and RFC destination can be empty i.e. NONE as all the components are in the SAP backend system.
There are two flags under "System Aliases" section in the T-code: "/N/IWFND/MAINT_SERVICE" in SAP NetWeaver Gateway :
These two flags are useful when OData service is using MOC(Multi Origin Composition) mode to connect to the multiple backend systems.
1. Default Flag :
This flag should be selected for the default system which should be used whenever the service is not called as MOC. If you have defined more than one default system alias, the first system is used as the default.
2. Metadata Flag :
For requests in MOC(multiple origin composition) mode, this flag specifies the backend system from which to retrieve the metadata.
The processing mode is as follows:
1. The system searches for all system aliases assigned to a service for a user.
2. If more than one system is found, the one system with the Metadata flag set is chosen.
1. If none of the found systems has the Metadata flag assigned, then the one with the Default flag is chosen.
2. If none of the found systems has the Default flag assigned an error is triggered. | https://www.stechies.com/maintain-sap-system-alias-gateway-flags/ | CC-MAIN-2018-26 | refinedweb | 578 | 60.75 |
Darren Dale wrote:> signbit(-1): -2147483648 > isnan(0.0/0): 1 > isinf(1.0/0): 1As other people have already noted, signbit from math.h is behaving as it should.> Do you know why signbit doesn't yield 1? I wonder if this might be the source > of the problem in Scipy.I actually had a look at scipy today, and it uses its own signbit routine -- so looking at what signbit from math.h does is totally irrelevant. The scipy signbit implementation does say it returns 0 or 1. I extracted the routine from Lib/special/cephes/isnan.c and constructed a little test program attached to the mail. On my machine it works as advertised when defining IBMPC. You can try out the defines for other machine types. On my machine, using DEC also works -- but using MIEEE or leaving all undefined gives the results you're seeing, namely all numbers are reported to have sign 0. So it may be that your machine type was not #defined correctly. Probably Lib/special/cephes/isnan.c should have something like: #if !defined(IBMPC) && !defined(DEC) && !defined(MIEEE) #error "machine type not defined" #endif in front of it to guard against the machine type being undefined (or perhaps this should go in the .h file of 'cephes'). You can try adding this to Lib/special/cephes/isnan.c and see if you still can compile or if it spits out an error. 
Cheers, Marco Here's the test program with scipy's signbit function: #include <stdio.h> #define IBMPC 1 //#define DEC 1 //#define MIEEE 1 int signbit(x) double x; { union { double d; short s[4]; int i[2]; } u; u.d = x; if( sizeof(int) == 4 ) { #ifdef IBMPC return( u.i[1] < 0 ); #endif #ifdef DEC return( u.s[3] < 0 ); #endif #ifdef MIEEE return( u.i[0] < 0 ); #endif } else { #ifdef IBMPC return( u.s[3] < 0 ); #endif #ifdef DEC return( u.s[3] < 0 ); #endif #ifdef MIEEE return( u.s[0] < 0 ); #endif } } int main() { printf("signbit( 1.0) = %d\n", signbit(1.0)); printf("signbit(-1.0) = %d\n", signbit(-1.0)); printf("signbit( 0.0) = %d\n", signbit(0.0)); printf("signbit(-0.0) = %d\n", signbit(-0.0)); return 0; } -- gentoo-science@g.o mailing list | http://archives.gentoo.org/gentoo-science/message/1f39bc74ba6909b22cd3b685afca13c1 | CC-MAIN-2015-14 | refinedweb | 383 | 71.21 |
29 August 2012 07:13 [Source: ICIS news]
MELBOURNE (ICIS)--?xml:namespace>
The company switched an etac line at its 320,000 tonne/year etac/butyl acetate (butac) swing plant at Jiangmen in
The switch to NPAC output was aimed at reducing the company’s total ethanol consumption, and should continue for another week or so, the source added.
Ethanol is used in combination with acetic acid in etac production.
The prices of ethanol are at yuan (CNY) 6,430-6,600/tonne ($1,013-1,039/tonne) EXW (ex-works) in eastern
Etac prices are stable this week at CNY6,200-6,350/tonne in eastern
Jiangmen Handsome has a 60,000 tonne/year butac plant at Huizhou in
In eastern
($1 = CNY6.35)
With | http://www.icis.com/Articles/2012/08/29/9590627/chinas-jiangmen-handsome-cuts-south-china-etac-output.html | CC-MAIN-2015-18 | refinedweb | 126 | 62.21 |
JavaScript "Masterclass"
Craig Spence 🦄
I do JS at
@phenomnomnominal
What we're going to cover
JavaScript in 2017
⭐️ Language
⭐️ Tooling
⭐️ Community
Writing a server in JavaScript
Writing client-side web applications in JavaScript
JavaScript in 2017
The language has evolved
ES2017 ratified and being implemented in evergreen browsers
Fragmented (but converging) eco-system
Super popular - and in demand!
POWERFUL AF 💯 🔥 👌
JavaScript
ES5:

var Pizza = (function () {
    var Pizza = function Pizza (options) {
        this.flavour = options.flavour;
        this.size = options.size;
    };
    return Pizza;
})();

var Human = (function () {
    var Human = function Human (options) {
        this.name = options.name;
    };
    Human.prototype.eat = function (food) {
        var flavour = food.flavour;
        var foodType = food.constructor.name;
        alert('Mmmm, ' + flavour + ' ' + foodType);
    };
    return Human;
})();

var pizza = new Pizza({ flavour: 'margherita' });
var craig = new Human({ name: 'Craig' });
craig.eat(pizza);

ES2017:

class Pizza {
    constructor (options = {}) {
        Object.assign(this, options);
    }
}

class Human {
    constructor (options = {}) {
        Object.assign(this, options);
    }

    eat (food) {
        let { flavour } = food;
        alert(`Mmmm, ${flavour} 🍕`);
    }
}

let pizza = new Pizza({ flavour: 'margherita' });
let craig = new Human({ name: 'Craig' });
craig.eat(pizza);
JavaScript
ECMA TC-39 work on the ECMAScript specification
New versions to be released annually! 💯💯💯
ES2015 was huge - classes, promises, template strings, destructuring, meta-programming, modules & more!
ES2016, not so huge - Array.prototype.includes, exponentiation operator (**)
ES2017, medium huge? - async/await, shared memory, atomic operations, new Object functions, String padding utilities, some small syntactical additions
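A quick tour of a few of those features in one snippet (a made-up example, not workshop code):

```javascript
// ES2015: destructuring and template strings
let pizza = { flavour: 'margherita', size: 'large' };
let { flavour, size } = pizza;
let order = `One ${size} ${flavour}, please!`; // 'One large margherita, please!'

// ES2016: Array.prototype.includes and the exponentiation operator
let toppings = ['cheese', 'basil', 'tomato'];
let hasCheese = toppings.includes('cheese'); // true
let slices = 2 ** 3; // 8

// ES2017: String padding and new Object functions
let padded = '42'.padStart(5, '0'); // '00042'
let entries = Object.entries({ a: 1, b: 2 }); // [['a', 1], ['b', 2]]
```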
Tooling
Tooling
npm ecosystem is HUGE 👍
More than 475,000 modules (up from ~300,000) 👌
More than 2,600,000,000 downloads a week (up from ~1,000,000,000) 😱
Knowing what modules to use can be a nightmare ⁉️
(but that's a whole other topic)
YAY code re-use
BOO dependencies
#notmyleftpad
Community
SUPER ACTIVE! - npm, Github, etc.
Local meetups:
JavaScript NZ
#letswritecode
We're going to write a real-time multi-player game!
Start by going to
Click the "Clone or download" button, then "Download ZIP"
Unzip the project to your computer somewhere, and then open a terminal and navigate to the unzipped project folder.
INSTALLING A DEPENDENCY
The package.json file is also where the listing of the project's dependencies will live.
To see what I mean by that, let's add our first dependency to the project. Copy the following command and paste it into a terminal:
npm install --save express
OUR FIRST DEPENDENCY!
So, what exactly have we installed?
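Part of the answer lives in package.json: the --save flag records Express as a dependency there. After the install you should see something like this (the exact version number will vary):

```json
{
  "dependencies": {
    "express": "^4.15.0"
  }
}
```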
STARTING A SERVER
We now need somewhere to write some code. Let's create an index.js file in the root of our project folder, next to package.json.
Within that file, we're going to add three bits of code:
1⃣️
import express from 'express';

let app = express();

2⃣️
app.get('/', (req, res) => {
    res.send('Hello World!');
});

3⃣️
const server = app.listen(3000, () => {
    console.log(`Server is running on port ${server.address().port}!`);
});
There's a few things to notice here:
import express from 'express';
We've used the ES2015 module import syntax:
We've created some objects:
let app = ... let server = ...
We've registered a few callbacks:
app.get('/', (req, res) => { ... }); app.listen(3000, () => { ... });
Don't Call me, I'll Call You
The idea of a callback is a very important concept in JavaScript - both on the client and the server.
JavaScript has a conceptually simple execution model. Code runs in a single thread, one instruction after another, until it runs out of things to do.
When a typical JavaScript application starts, event listeners are created. On a web page, these events might be things like a mouse click or keypress. On a web server, these events are typically HTTP requests.
app.get('/', (req, res) => { ... });
This says: when the application receives a GET request to '/', run this function. That function is the callback.
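Stripped of Express, the whole pattern fits in a few lines (this little registry is purely illustrative — it is not how Express works internally):

```javascript
// A tiny event registry: register callbacks now, fire them later.
const listeners = {};

function on(event, callback) {
  (listeners[event] = listeners[event] || []).push(callback);
}

function emit(event, data) {
  (listeners[event] || []).forEach(cb => cb(data));
}

// "Don't call me, I'll call you": we hand over a function...
let log = [];
on('request', req => log.push(`handled ${req.url}`));

// ...and it gets called whenever that event actually happens.
emit('request', { url: '/' });
emit('request', { url: '/game-state' });

console.log(log); // [ 'handled /', 'handled /game-state' ]
```

Nothing runs at registration time — the callbacks only fire when the event does.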
RUNNING THE SERVER
First, let's try running the server!
From your terminal, run the following command:

node index.js
RUNNING THE SERVER
💣🔥💣🔥💣🔥
Did everything break? Good 👌
That's expected, because we are using JavaScript from the future!
While ES2015/2016 have been standardised, they haven't yet been
implemented in all the different JavaScript run-times.
FUTURISING OUR CODE
We need to add a few more dependencies:
npm install --save babel-cli babel-preset-es2015
This time, we've added Babel, and a Babel preset.
Babel is a tool for transforming JavaScript code. The ES2015 preset knows how to turn modern JavaScript into the equivalent code, but only using older, ubiquitous language features.
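As a rough before/after (a hand-written approximation, not real Babel output):

```javascript
// ES2015 source: arrow function + template string
const greetModern = name => `Hello, ${name}!`;

// Roughly what the es2015 preset compiles it to — plain old ES5:
var greetOld = function (name) {
  return 'Hello, ' + name + '!';
};

console.log(greetModern('SoT') === greetOld('SoT')); // true
```

Same behaviour, older syntax — which is exactly why the transformed code runs everywhere.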
Now we just need to tell Node to use Babel when it runs the code. We will do that with an npm script.
npm Scripts
There's a lot of tools out there for running build tasks on projects.
The main ones are Gulp, Grunt, Broccoli and Fly - all are roughly equivalent, just slightly different implementations of the same idea.
We're going to go a slightly simpler, but equally valid route - especially for a smaller project like this one. We're going to add an npm script. Open up the package.json file again, and replace the scripts block with the following:
"scripts": { "start": "babel-node --presets es2015 index.js" },
npm Scripts
This means that when the "start" script is executed, it will use babel-node with the es2015 preset to run the index.js file.
We run that script by running the following from a terminal:

npm run start
We should now see the following!
Server is running on port 3000!
SUCCESS! 👏👏👏
Go to in a browser to see it working!
OUR GAME
We're going to make two player tic-tac-toe!
We need three endpoints for it to work:
GET game-state
POST join-game
POST take-turn
STUB END POINTS
Let's add the outline of our endpoints...
app.post('/join-game', (req, res) => {
  // ...
});

app.get('/game-state', (req, res) => {
  // ...
});

app.post('/take-turn', (req, res) => {
  // ...
});
Add the following to our index.js file
⚠️ make sure you add these *after* the line that creates the app
Serving the client
We're going to use a browser to test our API. A working implementation of a Tic Tac Toe UI is included in the /client directory.
Add the following to our index.js file
app.use(express.static('client'));
⚠️ make sure you add this *after* the line that creates the app
#soeasy 😎😎😎
Serving the client
Go to in a browser and you should see something like this:
Let's restart the application by running:

npm run start
REPRESENTING GAME STATE
The game state endpoint is going to return all the information about what is going on in the game. That information will look something like this:
{ "players": [{ "name": "Player One", "symbol": "X" }, { "name": "Player Two", "symbol": "O" }], "moves": [0, 1, 2, 3, 4, 5, 6, 7, 8], "whoseTurn": null, "winner": null }
The "players" array contains information about the players
The "move" array is the list of moves that have been made, where "0" is the top-left box, and "8" is the bottom-right.
The "whoseTurn" or "winner" fields will be updated as each move is made.
GAME STATE
Let's add some code to represent this on the server. We'll make a new file called game-state.js, and add the following to it:
export default class GameState {
  constructor () {
    this.players = [];
    this.moves = [];
    this.whoseTurn = null;
    this.winner = null;
  }

  addPlayer (player) {
    if (this.players.length < 2) {
      this.players.push(player);
    }
  }

  addMove (turn) {
    let { move } = turn;

    if (this.moves.length < 9) {
      this.moves.push(move);
    }
  }
}
CREATING A NEW GAME STATE
Now let's use our new GameState class.
In index.js, we need to import the class, after where we imported express:
import express from 'express';
import GameState from './game-state';
Notice that we've used a path (starting with ./) rather than just a module name. This tells node that we want to import a local file rather than something from node_modules.
We're going to create a new GameState when the application starts as well:
let app = express();
let gameState = new GameState();
GETTING THE GAME STATE
And finally, let's update our GET game-state endpoint to return the game state as JSON:
app.get('/game-state', (req, res) => {
  res.json(gameState);
});
VIEWING THE GAME STATE
Let's restart our application by running the following again:

npm run start
If we reload the browser, we should be able to view the request from the developer tools. The easiest way to open the dev tools (in Google Chrome) is to "right click" and then click "Inspect":
VIEWING THE GAME STATE
If you go to the "Network" tab and reload the page again, you should be able to see all the requests that the page has made:
⚠️ This might look different if you're not using Google Chrome!
UPDATING THE GAME STATE
So now that we can GET the game state, we need to be able to POST to our server to update it!
We want to be able to send JSON data to the server and use that to update our gameState object. To do that, we need to add another dependency:
npm install --save body-parser
First we need to import our dependency so we can use 'body-parser'. Add the import to the top of the index.js file:
import bodyParser from 'body-parser';
UPDATING THE GAME STATE
Then, we need to tell the app to parse request bodies as JSON:
app.use(bodyParser.json());
⚠️ make sure you add these *after* the line that creates the app
UPDATING THE GAME STATE
Now we can update our POST endpoints to update the gameState object:
app.post('/join-game', (req, res) => {
  console.log(req.body);

  let { name, symbol } = req.body;
  gameState.addPlayer({ name, symbol });

  res.status(200).json({ token: 'JOINED GAME' });
});

app.post('/take-turn', (req, res) => {
  console.log(req.body);

  let { move } = req.body;
  gameState.addMove({ move });

  res.status(200).json({ });
});
Here we are doing pretty much the same thing both times - get some data off the request body, update the game state, and return a "200" response. We've also added some logging so we can see what is happening.
TESTING OUR POST METHODS
If we restart our application by running the following again:

npm run start
Open in two different browsers (or just two different tabs!), and use the UI to join the game.
TESTING OUR POST METHODS
We can look at the "Network" tab again to inspect the requests:
Input Validation
Currently we have no validation on our endpoints. We need to make sure we validate on the server, so that any bad data that comes in a request doesn't cause any issues. Let's add the following to the start of the addPlayer method of the GameState class:
if (!player.name) {
  throw new Error('Invalid player: no name');
}

if (!player.symbol) {
  throw new Error('Invalid player: no symbol');
}

if (('' + player.symbol).length !== 1) {
  throw new Error('Invalid player: symbol should be a single character');
}
Let's restart our server and try to join the game with some invalid data!
Input Validation
💣🔥💣🔥💣🔥
Everything broke again! But that's okay, it broke because we told it to! Let's update our endpoint code to handle the error more gracefully. Back in index.js:
app.post('/join-game', (req, res) => {
  let { name, symbol } = req.body;

  try {
    gameState.addPlayer({ name, symbol });
    res.status(200).json({ token: 'JOINED GAME' });
  } catch (e) {
    let { message } = e;
    res.status(400).json({ message });
  }
});
Now our server will respond with the correct error code, and an error message.
Input Validation
Let's do the same thing for the POST take-turn endpoint too:
app.post('/take-turn', (req, res) => {
  let { move } = req.body;

  try {
    gameState.addMove({ move });
    res.status(200).json({ });
  } catch (e) {
    let { message } = e;
    res.status(400).json({ message });
  }
});
And we will change the addMove method of the GameState class to throw an error too. Add the following to the start of addMove, after the first line:
if (isNaN(+move) || move < 0 || move > 8) {
  throw new Error('Invalid turn: move should be a number from 0 to 8');
}
Business Rules
We're going to use the same mechanism to fail safely when the users do valid things that are against the rules of the game. Currently, you can keep adding as many people to the game as you want 👩👩👧👦 👩👩👦👦 👩👩👧👧 ! But tic-tac-toe is only a two player game!

Let's change the addPlayer function to fix that. Add this to the top of the function, after the existing validation:

if (this.players.length === 2) {
  throw new Error('This game is already full');
}

Now, because of the code we added before, we will get a sensible error message if too many people try to join a game 👌
Business Rules
We also want to make sure that you can't make a move that the other player has already made.
Let's have a go at adding some validation that makes sure a player can't do that!
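One possible answer, if you get stuck (a sketch only — the helper name is made up, and in the workshop code the check would sit at the top of GameState#addMove, right after the 0–8 range validation):

```javascript
// A sketch of the extra check, written standalone so it's easy to test.
// `moves` plays the role of this.moves in the GameState class.
function assertMoveIsFree(moves, move) {
  if (moves.indexOf(+move) !== -1) {
    throw new Error('Invalid turn: that square has already been taken');
  }
}

assertMoveIsFree([0, 4], 8); // fine — square 8 is still free

try {
  assertMoveIsFree([0, 4], 4); // square 4 was already played
} catch (e) {
  console.log(e.message); // Invalid turn: that square has already been taken
}
```

Because the endpoint already wraps addMove in a try/catch, throwing here turns straight into a 400 response.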
Taking Turns
Now we can get our server to manage whose turn it is!
Add the following to the end of our addPlayer method to set the initial state of the game:

if (this.players.length === 2) {
  this.whoseTurn = this.players[Math.floor(Math.random() * 2)];
}

That will randomly give the first move to one of the two players.

Then we should add the following to the end of the addMove method:
this.whoseTurn = this.players.find(player => player !== this.whoseTurn);
Taking Turns
How can we make sure that the user is who they say they are?
The current implementation will let either player take a turn at any point in the game... 😐 not ideal
Let's have a go at fixing that!
If you want a hint just ask!
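One possible shape for the fix (purely a sketch — none of these names come from the workshop repo): issue each player a secret token when they join, and only accept a move when that token maps to the player whose turn it is.

```javascript
// token -> player lookup, closed over by the two helpers.
function createTokenStore() {
  let tokens = {};
  let count = 0;

  return {
    issue(player) {
      // A real implementation would use a cryptographically random
      // token; a counter keeps this sketch deterministic.
      let token = 'player-token-' + (++count);
      tokens[token] = player;
      return token;
    },
    playerFor(token) {
      return tokens[token] || null;
    }
  };
}

let store = createTokenStore();
let alice = { name: 'Alice', symbol: 'X' };
let token = store.issue(alice);

// Inside POST /take-turn we'd then check something like:
let whoseTurn = alice;
let allowed = store.playerFor(token) === whoseTurn;
console.log(allowed); // true — the token belongs to the player whose turn it is
```

The server would return the token from /join-game (it already returns a placeholder one!) and reject any /take-turn request whose token doesn't match whoseTurn.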
Determining The Winner!
Making moves is all well and good, but we really want to be able to see who won!
Let's make a new file called win-checker.js. Its basic outline should look something like this:
export default class WinChecker {
  checkWin (gameState) {
    return false;
  }
}
For now it just returns false, but we will fix that soon!
Determining The Winner!
Let's go back to game-state.js and import our new class:
import WinChecker from './win-checker';

let winChecker = new WinChecker();
And then update the addMove method to use it:
let win = winChecker.checkWin(this);

if (win) {
  this.winner = this.whoseTurn;
  this.whoseTurn = null;
} else {
  this.whoseTurn = this.players.find(player => player !== this.whoseTurn);
}
A draw?
What happens if all the moves are done and no one has won?
Back in our game-state.js we need one last bit of code to handle a tied game. At the end of addMove, add the following:
if (!win && this.moves.length === 9) {
  this.winner = null;
  this.whoseTurn = null;
}
Determining The Winner!
Now let's go back to win-checker.js and make it actually work...
I'm going to leave it up to you to implement this!
I walk through my implementation in the next few slides, so don't look ahead if you want to work it out yourself!
If you want a hint just ask!
If you get stuck with JavaScript syntax, Google and Stack Overflow are your friends!
Determining The Winner!
First we need a representation of the different moves that make a win:

const WIN_SCENARIOS = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6]             // diagonals
];
Then we need a way to check if a set of moves matches one of the win scenarios. Let's add this to the WinChecker class:
checkWinScenario (winMoves, currentPlayerMoves) {
  return winMoves.every(move => currentPlayerMoves.indexOf(move) !== -1);
}
This just checks that every one of the moves in a particular win scenario is in the list of the player's moves.
Determining The Winner!
Now we need to run the checkWinScenario method against each of the possible scenarios. Let's add another function to the WinChecker class:
checkPlayerWin (currentPlayerMoves) {
  return WIN_SCENARIOS.some(winMoves => this.checkWinScenario(winMoves, currentPlayerMoves));
}
This method goes through each of the possible win scenarios and checks if any of them match the current player's moves. Only one has to match for it to be successful.
All we need now is a way to determine which moves are the current player's moves!
Determining The Winner!
We're going to replace the stubbed function we had before with the real implementation:
checkWin (gameState) {
  let reversedMoves = gameState.moves.slice(0).reverse();
  let currentPlayersMoves = reversedMoves.filter((_, i) => !(i % 2));

  return this.checkPlayerWin(currentPlayersMoves);
}
We're being a bit tricky here, so let's break it down:
We know the last move that happened was made by the current player, as was every second move before that.
So, we reverse the array, and then filter out every odd-numbered move (the other player's moves). That leaves only the current player's moves!
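Here's the same slice/reverse/filter trick run on a tiny example:

```javascript
// checkWin runs before whoseTurn flips, so the *last* move belongs
// to the current player — and so does every second move counting
// backwards from it.
let moves = [4, 0, 8]; // X took 4, O took 0, X took 8

let reversedMoves = moves.slice(0).reverse();                       // [8, 0, 4]
let currentPlayersMoves = reversedMoves.filter((_, i) => !(i % 2)); // [8, 4]

console.log(currentPlayersMoves); // [ 8, 4 ] — the squares X has taken
```

Note that slice(0) copies the array first, so the original moves list is left untouched.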
Determining The Winner!
Our finalised WinChecker class should look like this:

const WIN_SCENARIOS = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6]
];

export default class WinChecker {
  checkWin (gameState) {
    let reversedMoves = gameState.moves.slice(0).reverse();
    let currentPlayersMoves = reversedMoves.filter((_, i) => !(i % 2));

    return this.checkPlayerWin(currentPlayersMoves);
  }

  checkPlayerWin (currentPlayerMoves) {
    return WIN_SCENARIOS.some((winMoves) => this.checkWinScenario(winMoves, currentPlayerMoves));
  }

  checkWinScenario (winMoves, currentPlayerMoves) {
    return winMoves.every((move) => currentPlayerMoves.indexOf(move) !== -1);
  }
}
Let's RECAP! 😅😅😅
We've done a heap of stuff!
We wrote a basic server!
We added some endpoints.
We added an internal representation of the game state.
We fleshed out our endpoints to update that state.
We added some validation to make sure a user couldn't enter bad data, or break our rules.
👍
👍
👍
👍
👍
We wrote some code that works out if the game is over, and who won!
👍
DONE!
At this point we have a pretty fully featured API (Application Programming Interface) for our game!
git checkout server-finished
🌟
If you want to have a crack at implementing the UI, you should check out the "server-finished" branch!
WRITING THE CLIENT
Now we're going to write the app that consumes our API!
We're going to open the index.html file in the /client directory, and add the following:
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>TIC TAC TOE</title>
  </head>
  <body>
    <h1>Tic Tac Toe</h1>
    <main></main>
  </body>
</html>
REACT!
We're going to write our UI in React! However, there's a million other libraries we could have used, or we could've written it without any libraries at all.
The "best" front-end framework is a hotly debated topic - but it actually doesn't matter. Whatever you pick today will be totally wrong in 6 months anyway! 😅 A good engineer picks the best tool for the job with the information that she has at the time.
🍿🍿🍿
Client-Side Dependencies
<script src=""></script> <script src=""></script> <script src=""></script> <script src=""></script> <script src=""></script>
We're going to add some third-party code that we want to use. This should go at the bottom of the <body> tag, below our <main> tag:
What we've included here are a few shims for new browser APIs, the code for React, and Babel again so we can use ES2015+ in our browser code too.
CSS
We're going to add some CSS just so that when we make our UI it looks a bit (like a tiny bit) better than just default HTML.
Open the styles.css file within the /client directory, and add the following to it:
@import '';

html, body {
  margin: 0;
  min-width: 320px;
  text-align: center;
}

body, input {
  font-family: 'Slabo 27px', serif;
  font-size: 20px;
}

main {
  width: 100%;
}

input {
  text-align: center;
}

button {
  width: 100px;
  height: 100px;
  background: none;
  border: 1px black solid;
  font-size: 50px;
  vertical-align: top;
}
We also need to add a reference to it in to our HTML, inside the <head> element:
<link rel="stylesheet" type="text/css" href="/styles.css">
TA DA!
If you jump to you should see a website!
There's not much here for now, but we're going to fix that.
👻
Our First Component
With React we build our UI from pieces of functionality called Components. Let's make our first one, which will manage most of the state of our application! We need to modify the file called tic-tac-toe.js inside our /client directory. To start with, it should look something like this:
class TicTacToe extends React.Component {
  constructor () {
    super();
  }

  render () {
    return <div></div>
  }
}

window.TicTacToe = TicTacToe;
⚠️ We're cheating a bit here - putting things on window is generally a bad idea. We should use proper modules.
Using our component
Now we need to include it in our page. Let's add a few more script tags to the bottom of our <body>
<script type="text/babel" src="/tic-tac-toe.js"></script> <script type="text/babel"> ReactDOM.render(<TicTacToe/>, document.querySelector('main')); </script>
❗️Notice the "text/babel"? That's a pretty sweet hack that tells Babel that it needs to compile these scripts before they are run!
❗️Also see the weird HTML within our JavaScript? That's JSX, the templating language for React. We will see more of that later.
Fleshing It out a bit
This main component is going to do a few things:
1⃣️
It's going to be responsible for getting the game state from the server.
2⃣️
It's going to keep track of the token from the server that identifies a player.
3⃣️
And it's going to keep track of the game state and render the right thing.
constructor () {
  super();

  this.state = {
    gameState: {
      players: []
    },
    token: null
  };
}
Let's start by setting the initial state in the tic-tac-toe.js file:
Getting the game State
loadGameState () {
  fetch('/game-state')
    .then(response => response.json())
    .then(gameState => this.setState({ gameState }));
}
Let's add another function to our TicTacToe class:
Here we're using fetch, which is a new API that replaces the old janky XMLHttpRequest API. If you look back in our index.html file you'll see we had to include a polyfill for it - this is because it isn't implemented in all browsers yet.
Updating the APP State
Now we need to tell our component to actually do the call to get the data. So we need another function in TicTacToe:
componentDidMount () {
  this.loadGameState();
  setInterval(() => this.loadGameState(), 1000);
}
componentDidMount is part of the React component life-cycle. React will call this for us when the component has been successfully created. We are starting an interval that will request the updated game state every second.
If you refresh the page now, and open the Network tab of the developer tools, you should see the page making a request every second!
Rendering to the page
We want to get something to actually show up on the page now, so we need to add the most important method to the component - render:
render () {
  let { gameState, token } = this.state;

  let gameIsEmpty = gameState.players.length === 0;
  let waitingForPlayer = gameState.players.length === 1;
  let gameIsFull = gameState.players.length === 2;
  let gameIsUnderway = gameIsFull && gameState.whoseTurn;
  let gameIsWon = gameIsFull && !gameState.whoseTurn && gameState.winner;
  let gameIsDrawn = gameIsFull && !gameState.whoseTurn && !gameState.winner;
}
For now, this doesn't actually render anything 😅, but let's take the opportunity to check out React's error messages. Reload the page and look in the console in Dev tools - it tells us we need to actually return HTML from our render function.
Joining the game
For our bits of state we defined before, we can work out when we want to show the user a form to join the game. Let's add the following to the end of the render method:
if (!token && !gameIsFull) {
  return <JoinGame onJoinGame={token => this.setToken(token)}/>
}
Here you can see that pesky old JSX again. HTML in your JS looks a bit strange at first, but it solves a tricky problem in an interesting way.
<JoinGame> is going to be our next component.
Joining the game
Like before, we're going to modify one of the files inside the /client folder, this time join-game.js:
class JoinGame extends React.Component {
  constructor (props) {
    super();
    this.props = props;

    this.state = {
      name: '',
      symbol: ''
    };
  }

  render () {
    return <div></div>
  }
}

window.JoinGame = JoinGame;
This is much the same as last time, but with one main difference - the props parameter. This is how we can provide information to a component from the outside world.
USing The Component
Once again we need to add the script to our index.html file. Add the following before the tic-tac-toe.js script:
<script type="text/babel" src="/join-game.js"></script>
Let's refresh our page again and see if we get any errors!
Passing data to A component
Let's look at the render function of the TicTacToe class again, specifically the bit that uses the <JoinGame> component:
<JoinGame onJoinGame={token => this.setToken(token)}/>
All the extra stuff is what we want to pass through to the component from the outside. In this case, we are passing a function for it to use when the user has successfully joined the game. That function doesn't exist yet, so let's add it to the TicTacToe class:
setToken (token) {
  this.setState({ token });
}
Joining the game
Let's tell the <JoinGame> component how to render itself by adding a render function:
render () {
  return (
    <form onSubmit={e => this.joinGame(e)}>
      <h2>Join game:</h2>

      <label>Name:</label>
      <br/>
      <input type="text" placeholder="Your name"
        value={this.state.name}
        onChange={e => this.handleNameChange(e)} />
      <br/>

      <label>Symbol:</label>
      <br/>
      <input type="text" placeholder="X"
        value={this.state.symbol}
        onChange={e => this.handleSymbolChange(e)} />
      <br/>

      <input type="submit" value="Join"/>
    </form>
  );
}
Joining the game
This looks a bit more like what we are used to, but there's still a few things worth pointing out.
We have some more event handlers being used:
onSubmit={e => this.joinGame(e)} onChange={e => this.handleNameChange(e)} onChange={e => this.handleSymbolChange(e)}
And we have some data binding:
value={this.state.name} value={this.state.symbol}
Joining the game
Let's define those event handlers in our JoinGame class. First for our form values:
handleNameChange (e) {
  this.setState({ name: e.target.value });
}

handleSymbolChange (e) {
  this.setState({ symbol: e.target.value });
}
And also for our form submit:
joinGame (e) {
  e.preventDefault();

  fetch('/join-game', {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(this.state)
  })
  .then(response => response.json())
  .then(data => this.props.onJoinGame(data.token));
}
Whoop!
We should now be able to join the game! Let's reload the page and see what happens.
⚠️ Make sure you enter valid data, since we haven't got any client-side validation yet 😛
Client-side validation
Let's quickly update our code to use the error messages we send back from the server. Swap the final line of the joinGame method with the following:
.then(data => {
  let { message, token } = data;

  if (message) {
    alert(message);
  } else {
    this.props.onJoinGame(token);
  }
});
Nothing fancy, but if we get a message back from the server, at least the user will see it!
Waiting...
Cool, now we can join a game! We now need a state for when you're waiting for another player to join. Let's add the following to the end of the render method in the TicTacToe class:
if (token && waitingForPlayer) {
  return <h2>Waiting for player...</h2>
}
Try it out by opening the page in two browser tabs, and joining the game from one, and then the other!
Playing the game
Now for the fun bit! Actually making the game work!
First let's add some code to render our new state (again in the render function of the TicTacToe class):
if (gameIsUnderway) {
  return <Board token={token} gameState={gameState}></Board>
}
You can see we've got a new component, and this time we're passing two bits of data to it, the token that allows us to make moves, and the game state.
The Board Component
Let's do the same thing as before and update the file called board.js:
class Board extends React.Component {
  constructor (props) {
    super();

    this.state = {
      gameState: props.gameState,
      token: props.token
    };
  }

  render () {
    return <div></div>
  }
}

window.Board = Board;
And let's add it to the index.html file again, above the join-game.js script.
<script type="text/babel" src="/board.js"></script>
Rendering the board
Once again we're going to need a render function:
render () {
  let { gameState, token } = this.state;
  let { whoseTurn } = this.state.gameState;
  let boardState = this.getBoardState(this.state.gameState);

  return (
    <div>
      <h2>Next move: {whoseTurn.name}</h2>

      <button onClick={() => this.takeTurn(0)}>{boardState[0]}</button>
      <button onClick={() => this.takeTurn(1)}>{boardState[1]}</button>
      <button onClick={() => this.takeTurn(2)}>{boardState[2]}</button>
      <br/>
      <button onClick={() => this.takeTurn(3)}>{boardState[3]}</button>
      <button onClick={() => this.takeTurn(4)}>{boardState[4]}</button>
      <button onClick={() => this.takeTurn(5)}>{boardState[5]}</button>
      <br/>
      <button onClick={() => this.takeTurn(6)}>{boardState[6]}</button>
      <button onClick={() => this.takeTurn(7)}>{boardState[7]}</button>
      <button onClick={() => this.takeTurn(8)}>{boardState[8]}</button>
    </div>
  );
}
Taking a turn
We need a few functions to make this work. First, we need one to actually tell the server that we've made a move:
takeTurn (move) {
  let { token } = this.state;

  if (token) {
    fetch('/take-turn', {
      method: 'POST',
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ move, token })
    })
    .then(response => response.json())
    .then(data => {
      let { message } = data;

      if (message) {
        alert(message);
      }
    });
  }
}
Note that a token is required to make a move! This means we can have spectators, but if they click the board nothing will happen.
Updating the board
First of all, we need to tell our component to update its state when new data comes in from outside:
componentWillReceiveProps (nextProps) {
  this.setState({ gameState: nextProps.gameState });
}
This is another part of the React component life-cycle. This function is called when the data that is passed into the component changes. It is up to us to tell the inner component to update its state.
Updating the board
Now we do some magic to turn the gameState from the server into the moves on the board!
Let's add a getBoardState method to the Board component:
getBoardState (gameState) {
  let { moves, players, whoseTurn } = gameState;
  let boardState = [null, null, null, null, null, null, null, null, null];

  let otherPlayer = players.find(player => {
    // Hack ahoy!
    return JSON.stringify(player) !== JSON.stringify(whoseTurn);
  });

  let reversedMoves = moves.slice(0).reverse();

  reversedMoves.map((move, i) => {
    boardState[move] = i % 2 === 0 ? otherPlayer.symbol : whoseTurn.symbol;
  });

  return boardState;
}
Updating the board
Let's go through the important bits of that line by line. We initialise the board state to 9 null values:

let boardState = [null, null, null, null, null, null, null, null, null];
Then we figure out who the other player is by looking for the player who isn't the current player. We stringify here to compare the values, since JavaScript object comparison is done by reference (there's better ways to do this, I promise!)
let otherPlayer = players.find(player => {
  // Hack ahoy!
  return JSON.stringify(player) !== JSON.stringify(whoseTurn);
});
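A less hacky alternative (assuming each player's symbol is unique within a game — the workshop doesn't enforce this, it's even listed as a bug at the end!) is to compare by symbol instead:

```javascript
// Comparing a primitive field sidesteps JavaScript's
// reference-based object equality entirely.
let players = [
  { name: 'Player One', symbol: 'X' },
  { name: 'Player Two', symbol: 'O' }
];
let whoseTurn = players[0];

let otherPlayer = players.find(player => player.symbol !== whoseTurn.symbol);
console.log(otherPlayer.name); // Player Two
```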
Updating the board
Lastly, for each move that has been made, we alternate between setting that square to the symbol of each player:
let reversedMoves = moves.slice(0).reverse();

reversedMoves.map((move, i) => {
  boardState[move] = i % 2 === 0 ? otherPlayer.symbol : whoseTurn.symbol;
});
#simple😅😅😅 #notmagic
The end of the game
Now all that's left to do is add states for the end of the game. Let's update the render function of TicTacToe for the last time, by adding the following to the end:
if (gameIsWon) {
  return <h2>{gameState.winner.name} won!</h2>
}

if (gameIsDrawn) {
  return <h2>Draw!</h2>
}
et voila! Our game should now be playable start to end!
Let's RECAP (AGAIN)! 😅😅😅
We've done another heap of stuff!
We made our server serve static files
We wrote some HTML
We added some polyfills and dependencies
We wrote a UI for our game, including three React components!
👍
👍
👍
👍
🌟
If you want to see the complete finished project, you can check out the "finished" branch!
git checkout finished
WHAT's Next!?
There's still heaps of stuff we could do here!
We could...
make it prettier?
fix the bug where both players can have the same symbol!
let you use an emoji for your symbol!?
make it so you don't have to restart the server to start a new game 🙃
🤘
🤘
🤘
🤘
🤘
write some tests?! 🦄
FIN! THANK YOU!
🖖
summer-of-tech-js-masterclass-2017
By Craig Spence
Summer of Tech 2017 - JS Masterclass | https://slides.com/craigspence/summer-of-tech-js-masterclass-2017 | CC-MAIN-2022-05 | refinedweb | 5,343 | 67.04 |
For the last few days, I have been making a simple game with pygame. Learning pygame was very interesting, and I was thrilled when I created my own game. is the official website for pygame; there is a myriad of tutorial links and simple documentation on the site. In this post I would like to share some snippets of code that I found confusing or tricky in the beginning. Before that, let me tell you about my game: it is a word game that checks your typing speed. A word appears on the screen and you have to type it before it disappears. I will attach the source code very soon.
1) Display text / string on-screen
screen = display.set_mode((900,800))
#This line of code sets the window size.
basicfont = pygame.font.Font('freesansbold.ttf',28)
#Here we set a custom font. The first argument is the font file name.
#If it is not specified, pygame uses the default font. The second argument is the font size.
text = basicfont.render('Text to display', True, [255, 255, 255])
#The ‘text to display’ can only be a single line: newline characters are not rendered.
#The second argument is a boolean: if True, the characters will have smooth edges.
#The third argument is the color of the text [e.g.: (0,0,255) for blue].
screen.blit(text,(200,200))
#This code displays the 'text' at position (200,200) on the screen.
The above code will help you to display text on the screen. But in order to display numerals (especially when you want to display points) you need a slight change in the second line of code:
Suppose you need to display points (where points is a number):

text = basicfont.render('Your score: ' + str(points), True, [255, 0, 0])
2) Read character from keyboard
def getkeypress():
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN:
            key_val = event.key
            character = chr(key_val)
            return character
#include <Servo.h>

int servoPin = 3;
Servo Servo1;

void setup() {
  Servo1.attach(servoPin);
  Serial.begin(115200);
}

void loop() {
  Servo1.write(0);
  delay(1000);
  Servo1.write(180);
  delay(1000);
}
What kind of servo is it? The metal geared servos I use, would twitch at 5ish volts but not turn.
What is the current rating of your voltage regulator? That servo is likely to take over 1A on startup. Is the regulator output connected directly to the servo + and -, or via a breadboard, or even worse via the Arduino? There's nothing wrong with the code, so it has to be power or wiring.

Steve
#!/bin/python3

'''
Problem:
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to N?
'''

""" Euclidean GCD algorithm """
def gcd(x, y):
    return x if y == 0 else gcd(y, x % y)

""" Uses the property that lcm(x, y) * gcd(x, y) == x * y """
def lcm(x, y):
    return (x * y) // gcd(x, y)

n = int(input())
g = 1
for i in range(1, n + 1):
    g = lcm(g, i)
print(g)
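On Python 3.5+ the same answer drops out of the standard library — math.gcd replaces the hand-rolled one. A variant of the program above, with the input() call swapped for a testable function:

```python
from functools import reduce
from math import gcd

def smallest_multiple(n):
    """Smallest positive number evenly divisible by every integer in 1..n."""
    # Fold the lcm over 1..n, exactly like the loop in the original script.
    return reduce(lambda acc, i: acc * i // gcd(acc, i), range(1, n + 1), 1)

print(smallest_multiple(10))  # 2520, matching the problem statement
```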
Additional namespace in web service
Discussion in 'ASP .Net Web Services' started by AF,72
- William F. Robertson, Jr.
- Jul 29, 2003
Writing image data to web page without additional ASPX fileNeo Geshel, Feb 13, 2006, in forum: ASP .Net
- Replies:
- 6
- Views:
- 516
- Neo Geshel
- Feb 14, 2006
Trying to add additional .aspx pages to an existing web site=?Utf-8?B?ZGF2ZQ==?=, Apr 7, 2006, in forum: ASP .Net
- Replies:
- 3
- Views:
- 437
- =?Utf-8?B?ZGF2ZQ==?=
- Apr 10, 2006
Where to get Additional Web Controls such as TreeViewLei Guangfu, Jul 4, 2003, in forum: ASP .Net Web Controls
- Replies:
- 1
- Views:
- 94
- Teemu Keiski
- Jul 4, 2003
Does timer in Web Service Global.asax block my Web Service from processing web-site requests?Leo Violette, Apr 17, 2009, in forum: ASP .Net Web Services
- Replies:
- 0
- Views:
- 1,054
- Leo Violette
- Apr 17, 2009 | http://www.thecodingforums.com/threads/additional-namespace-in-web-service.782663/ | CC-MAIN-2014-41 | refinedweb | 147 | 76.93 |
Hi,
I am developing an Eclipse plugin ( ) which will offer Subclipse
integration . This integration mainly involves posting a diff to a
remote code review server.The single case which I am unable to handle
correctly is a renamed file. When renaming a Java class, named Second:
package com.example;
public class Second {
}
to Third:
public class Third {
I would expect the diff to show that Second.java has all lines deleted
and Third.java has all lines added. While Second.java does have all
lines deleted, Third.java only has a diff from Second.java:
Index: src/com/example/Third.java
===================================================================
--- src/com/example/Third.java (revision 2)
+++ src/com/example/Third.java (working copy)
@@ -1,5 +1,5 @@
package com.example;
-public class Second {
+public class Third {
}
I am currently using svnClient.createPatch(changes,
_project.getLocation().toFile(), tmpFile, false) to generate the diff,
but have experimented with svnClient.diff as well, with no success.
What are my options of generating a delete-create diff for a renamed file?
Thanks,
Robert
--
Sent from my (old) computer
------------------------------------------------------
To unsubscribe from this discussion, e-mail: [dev-unsubscribe_at_subclipse.tigris.org].
This is an archived mail posted to the Subclipse Dev
mailing list. | http://svn.haxx.se/subdev/archive-2011-10/0001.shtml | CC-MAIN-2015-14 | refinedweb | 199 | 53.58 |
Tirex/Overview
The Tirex tile rendering system renders map tiles on the fly and caches them. It is very similar (and mostly backwards compatible) to the mod_tile/renderd system, but improves on it in many respects.
Contents
Rendering Backends
Tirex can work with several different rendering backends at the same time. Normally Mapnik is used for this, but a test backend (to test the system setup) and a WMS backend is also provided.
Maps
Tirex can handle several maps. Maps can have different styles and potentially use different data. In the most common case different maps are just different map.xml style files for the Mapnik renderer.
Tiles
As with many other map systems, the map of the world uses a Mercator projection and is available in several zoom levels and divided into tiles. Tiles are always square and 256x256 pixels. At zoom level 0 the map of the whole world fits into one tile, at zoom level 1 there are four (two by two) tiles for the whole world. At zoom level 2 there are 16 tiles and so on. The highest zoom level depends on the detail you want. For OSM its normally about 17 to 19.
Tiles are numbered x=0 (-180 degrees) to x=2^zoom-1 (+180 degrees) and from y=0 (about 85 degrees) to y=2^zoom-1 (about -85 degrees).
Metatiles
The tile size (256x256) is optimized for fast transfer on the web, it is not necessarily the best size for handling tiles on the tile server. Rendering many small tiles takes more time than rendering fewer bigger tiles and storing many small tiles in single files is inefficient because of the block size used by filesystems. Also a directory with many files in it will be slower than with fewer files.
This is solved by aggregating several tiles into a metatile. Typically 8x8 tiles make up one metatile, but this is configurable (although sizes other than 8x8 are not well tested) in Tirex. This is the same size as used in mod_tile/renderd systems. The tile server internally always "thinks" in metatiles. Access requests for tiles are converted into requests for metatiles. Metatiles are rendered and stored.
Metatiles are numbered just like tiles, a metatile number is always the same as the number of the top-left tile, in other words to get the metatile number you have to round the tile coordinates down to the neirest multiple of the metatile size (8). Tile (x=17 y=4) is metatile (x=16 y=0).
In the Tirex system metatiles are represented by Tirex::Metatile objects. In addition to the x- and y-coordinates and the zoom level, a metatile also needs the map name.
Jobs
If a metatile should be rendered, a job must be created for it. The job contains at least the information about the metatile and the priority. It can also contain additional information such as the expire time, the source of the request and the time it took to render the job (once it is rendered).
In the Tirex system jobs are represented by Tirex::Job objects.
The master will keep track on where a job came from so that it can notify the source when the job is done.
The Queue
The master keeps a prioritized queue of jobs. Every job that comes in will be placed on the queue. When there are free renderers, the master will take the first job from the queue and sends it to the renderer backend.
There is only one queue for all jobs, but it is prioritized. Priorities are positive integers, 1 is the highest priority. New jobs are added before the first job in the queue with a lower priority. Jobs are always taken from the front of the queue. So jobs will always be worked on based on their priority and age.
A job can have an expire time. If it comes up to be rendered and the expire time has passed, the job will not be rendered. This basically allows you to say: I have a metatile I want rendered because I might need it in the next few minutes. But if its not rendered in, say, 10 minutes, you needn't bother rendering it.
When a new job comes in for a metatile that already has a job on the queue, those two jobs will be merged. The old job will be taken from the queue and a new job will be added. The new job has the higher priority of both jobs. The expire time of the new job will be the larger of both times. No expire time is the same as "infinite expire time", so if at least one of the jobs has no expire time, the new job will have no expire time. Both sources of the two jobs will be notified when the job is rendered.
The queue is implemented in a way that its doesn't matter how many jobs there are in the queue. If you want to stick 1000 or 10000 jobs in the queue, thats ok.
It is your job as administrator of the system to decide which priorites to use for which kind of requests. Live requests coming in from a user should probably get a higher priority than batch requests to re-render old tiles. The Tirex system gives you the mechanisms needed, you have to decide which jobs get priority, how long they should stay on the queue etc.
Buckets
Tirex allows an infinite number of priorities. To make configuration and handling easier, these priorities can be divided up into several buckets. Each bucket has a name and represents all priorities in a certain range. You define the name and range. Configuration and some other operations will use those priority classes instead of the priorities itself.
A typical setup will have a bucket for live requests from the web (lets call it 'live') that works on priorities 1 to, say, 9. And then one or more buckets for background requests with lower priorities.
The Master
The master is the heart of the Tirex system. Its job is to work throught the queue in order and to dispatch jobs to be rendered when its their turn. The manager takes the configuration and the current system load into account when deciding which and how many tiles can be rendered.
Messages
The Tirex system consists of several processes that work together. Requests for the rendering of metatiles and other messages are passed between those processes through UDP or UNIX domain datagram sockets. Datagram sockets are used to make handling of many data sources easier in a non-blocking environment.
Messages always have the same, similar format: Fields are written as "key=value" pairs (no spaces allowed), one per line. A linefeed (LF, "\n") is used as a line ending, the software ignores an additial carriage return (CR, "\r") before the linefeed. Each message must have a "type" (for instance "type=metatile_request"). Requests will normally be answered by a reply with the same type and added "result" field. The result can either be positive ("result=ok") or negative ("result=error"). More descriptive error messages are also allowed, they always begin with "error_" ("type=error_unknown_message"). An additional error message for human consumption can be added in the "errmsg" field.
Request handling
The following simplified diagram shows how a tile request from a web browser is handled by Tirex:
1. The web browser ask the webserver (in this case Apache with the mod_tile module) for the tile.
2. mod_tile checks the disk for the tile. If it is available, it can be delivered to the browser immediately (9). If it is not available we go on...
3. Send a request to tirex-master for the tile. The master will put the request in the queue.
4. Once its the turn of this tile, the master will send the request on to the rendering backend, typically the one using the Mapnik renderer tirex-renderd-mapnik. There are other backends available, too.
5. The rendering backend generates the tile and stores it on disk.
6. It then sends a reply back to the master to tell it that the tile is done.
7. The master sends this reply on to the original source of the request.
8. mod_tile now gets the tile image from disk...
9. ...and sends it back to the browser. | http://wiki.openstreetmap.org/wiki/Tirex/Overview | CC-MAIN-2016-50 | refinedweb | 1,402 | 72.97 |
#include <XnCppWrapper.h>
Detailed Description
Purpose: The ScriptNode object loads an XML script from a file or string, and then runs the XML script to build a production graph. It also references every node created from that script, so it wouldn't be destroyed.
Remarks:
A typical usage of a script node is:
- Create the script node, using the Create() method.
- Load XML script from file or string, using LoadScriptFromFile() or LoadScriptFromString().
- Execute the script, using Run().
Note that the context's RunXmlScriptFromFile() or RunXmlScript() methods can be used to perform all above steps.
All production nodes in the production graph use reference count to determine their life time, but if an application executed a script, it can't know upfront which nodes will be created by this script, and so, can't take reference to them, so those nodes might be destroyed immediately. The script node, apart from executing the script, also keeps reference to all the nodes created by the script. This means that if the script node is destroyed, every production node that was created by the script, and is unreferenced by the application will also be destroyed. For this reason, it is recommended for application using XML scripts to keep a reference to the script node as long as they keep a reference to the context itself.
A single ScriptNode object is responsible for building the entire production graph, irrespective of however many node definitions there are in the XML script and how many production nodes are created.
For additional information about XML scripts, see Xml Scripts.
Constructor & Destructor Documentation
Ctor
- Parameters:
-
Member Function Documentation
Loads an XML script file into the ScriptNode object.
- Parameters:
-
Loads an XML script string into the ScriptNode object.
- Parameters:
-
Runs the ScriptNode object's XML script to build a production graph.
- Parameters:
-
Remarks
This method causes the whole production graph to enter Generating state. To read data you have to run one of the 'Update Data' methods of the Context or of the node itself.
The documentation for this class was generated from the following file:
Generated on Wed May 16 2012 10:16:07 for OpenNI 1.5.4 by
| https://documentation.help/OpenNI/classxn_1_1_script_node.html | CC-MAIN-2021-49 | refinedweb | 361 | 60.14 |
Message-ID: <2113242880.177645.1550294584880.JavaMail.j2ee-wiki@duraspace-app01.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_177644_133078245.1550294584880" ------=_Part_177644_133078245.1550294584880 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Registration for workshops is at...&n=
bsp; EventBrite W=
orkshop Registration (Note: You are first asked to register a=
s a Workshop or Orientation Attendee. On the next screen, you specify=
which workshop you want to attend.)
Using Git and GitHub for managing = metadata (no new data models, we promise). This session is proposed as a tw= o-part workshop: The first will cover a modified version of the =E2=80=98Ve= rsion Control with Git=E2=80=99 Software Carpentry lesson, tailored for a n= on-developer audience, with more focus on metadata. This is typically taugh= t as a half-day (3 hour) workshop. The second part will focus on the use of= Git and GitHub in the context of the metadata workflow. We will present ex= amples and strategies, taken from recent work by UC Santa Barbara and UC Sa= n Diego, of version control, pull requests, and automated hooks and integra= tions as they relate to moving metadata through a workflow and into our rep= ositories. In addition to these demonstrations, we hope to spend a good per= centage of the time available in discussion with other interested instituti= ons and how we might leverage our collective experience to make getting our= metadata into our repositories easier, more consistent, and maybe even mor= e fun!
Presenters: Matt Critchlow, Alex D= unn, and Chrissy Rissmeyer
Audience: Part 1 - Anyone new to G= it/GitHub; Part 2 - All (Metadata focused)
Equipment: Please bring a laptop w= ith Git installed, download the sample data and follow the setup instructions, and verify= that you know your GitHub email address and password. If you don't have a GitHub account, please create one.
Fedora is the flexible, extensible, open source repository platform tha= t commonly underlies Samvera implementations. Fedora provides a number of c= ore services that Samvera already uses, such as CRUD operations, versioning= , and fixity, and several new, potentially useful extended services have be= en introduced within the last year. The API Extension Framework provides a = means of binding services to repository objects in order to extend the func= tionality of Fedora, while the Import/Export Utility makes it easier to get= content into and out of Fedora in standardized formats and packages. This = workshop will introduce both of these new services and discuss how they mig= ht be used in the context of Samvera. Participants will also have an opport= unity.<= /span>
Following the acquisition of Digital Commons / bepress by Elsevier, ther= e=E2=80=99s been a surge of interest in supporting campus publishing activi= ty in Samvera. Fulcrum is in its third year of developing a publishing plat= form on Samvera (and is now running on Hyrax). While we=E2=80=99ll keep the= structure of this workshop flexible to respond to the interests of the par= ticipants, we=E2=80=99ll work from this basic structure:
Presentation of the service model that Fulcrum is being built to sup= port.
Presentation of the features and architecture of the platform, with = an emphasis on Epub support and publishing workflows.
A group discussion of the kinds of publishing-related service reques= ts attendees are hearing from their communities, in particular from those w= ho are concerned about the Elsevier acquisition of Digital Commons / BePres= s, and what interest is there in a coordinated community effort around supp= ort for publishing and fully-encoded texts.
Presenters: Jeremy Mors= e, Melissa Baker-Young, Jon McGlone
Audience: Managers
Rails, the framework that Samvera is based on continues to march forward= rapidly. Best practices and features change with each release continuing t= o make the platform better (or at least different =F0=9F=98=89). This = workshop will go through details of some of the latest changes in recent Ra= ils versions including ActiveJob, creation of better parent objects, the at= tributes API and a slew of Javascript related changes including webpack int= egration and what it means for the future of Javascript in Rails applicatio= ns. Along the way we'll discuss changes to best practices in the broader co= mmunity and how they affect your existing and new apps. We'll do lots of li= ve examples and get into implementation details. This workshop will be divi= ded into 3 parts for participant convenience. Parts 2 and 3 are hands on, s= o please bring a laptop with Virtual Box installed. We'll distribute a VM t= o make sure that everyone is getting a uniform experience.
Presenter: Rob Kaufman
Audience: Developers, ideally with at least some Samvera or Rails experi= ence. Those without such experience are encouraged to come to Part 1 and sh= ould feel free to decide if staying for part 2 and 3 is the right optionthi= ng for them.
Equipment: Please bring a laptop with VirtualBox inst= alled. We'll have a VM available via thumb drives for those attending the w= orkshop which we'll use to bring everyone to the same system setup quickly.=
In this workshop we will walk thro= ugh a local installation of Hyrax 2.x, generate a work type, and get a loca= l test suite running. This will be an abbreviated version of a module usual= ly run at Samvera Camp. Because of the limited time, this will mostly be a = demo with audience participation. However, we will offer some support if yo= u want to bring your laptop and follow along.
Presenter: Bess Sadler
Audience: Developers
Equipment: Laptops
As in past Connects, this will be = an updated workshop covering issues and strategies when testing Samvera-bas= ed applications using RSpec. The workshop will go over testing practices fo= r each of the principle unit components of an application (models, controll= ers, views, jobs, services, etc.) and also contrast that with how feature t= ests and written. Takeaways will include some "boilerplate" examples for ea= ch kinds of test, test suite configuration, continuous integration, and if = time permits, one-on-one help with individual questions or blockers that an= yone might be currently having. Having a laptop and working Hydra applicati= on is a must, even if it's just the barebones. Ideally, this workshop is ge= ared towards current Hydra adopters or people who have just started working= on applications. Someone who has never used any of Samvera's principle app= lications such as Curation Concerns, Sufia, or Hyrax, might have difficulty= .
Presenter: Adam Wead
Audience: Developers, new and= old - geared towards current adopters or people who have just started work= ing on an application.
Equipment: Laptops are optional, b= ut encouraged.
Workshop going over the interface,= configuration, patterns, and interaction points for using Valkyrie, a libr= ary to enable persisting metadata and files into a variety of different bac= kends with a common interface.
Presenter: Trey Pendragon
Audience: Developers
Equipment: Laptop
Tired of worrying about load order for your script includes? Has global = namespace pollution got you down? Have you ever accidentally clobbered exis= ting-bas= ed JavaScript app with a few modules. Some experience with JavaScript is re= commended, and a laptop is highly recommended because you will be writing c= ode. Workshop materials (in the form of download links) will be supplied be= fore the workshop begins.
Presenter: Eric O'Hanlon
Audience: Developers
Equipment: bring laptop
This session will be a chance for you to ask all those non-technical que= stions that, as someone newish to Hydra, are running through your head. Wha= t would it mean to adopt Hydra at my institution, how many people will it t= ake, what's the real cost of adoption. How do I get involved with the Hydra= Community? What will they expect of me? What should I expect of them? No s= erious question will be deemed too silly, or too basic, to ask! Depending o= n what comes up, this may be followed up by one or more unconference sessio= ns on Thursday.
Presenters: Robin Ruggaber, Bess Sadler
Audience: People new-ish to H= ydra with lots of non-technical questions to ask!
This hands-on workshop will = cover tools and techniques to help managers decide whether to spin up a new= Samvera repository, manage the process of building that repository, and ma= intain the repository once it is in production. We=E2=80=99ll cover the pro= ject lifecycle for migrating to Hyrax, defining roles within your team, kee= ping in sync with community development efforts, managing documentation, an= d managing user expectations and needs.
Presenters: Nabeela Jaffer, Chris Diaz, Steve Van Tuyl, Julie Rudder
Audience: Managers | https://wiki.duraspace.org/exportword?pageId=90964551 | CC-MAIN-2019-09 | refinedweb | 1,468 | 50.06 |
This document gets you started
with Groovy in NetBeans IDE. You will
create a Java application, add a JFrame, and retrieve a simple message from
a Groovy file.
Contents
To follow this tutorial, you need the following software and resources.
In this section, we create a Java application.
Click Next.
Make sure to unselect the Create Main Class checkbox. Click Finish.
In this section, we create a JFrame and a Groovy class.
Click Finish. The JFrame is created.
Click Next.
Click Finish. The Groovy file is created. Your project structure should now be as follows:
In this section, we code the interaction between the Groovy file and the Java class.
class GreetingProvider {
def greeting = "Hello from Groovy"
}
GreetingProvider provider = new GreetingProvider();
public DisplayJFrame() {
initComponents();
String greeting = provider.getGreeting().toString();
jTextField1.setText(greeting);
}
Note: Issue 161176 covers
the problem where there is an error underline when there shouldn't be, in the first
line above. The application should still run successfully, however.
You now know how to create a basic Java application
that interacts with Groovy. | http://www.netbeans.org/kb/docs/java/groovy-quickstart.html | crawl-002 | refinedweb | 175 | 66.64 |
class Solution {
public:
int lengthOfLongestSubstring(string s) {
string temp="";
int len=s.size();
int max=0;
//int size=0;
for(int i=0;i<len;++i){
size_t found=temp.find(s[i]);
if(found==string::npos){
temp+=s[i];
}
else{
if(temp.size()>max){
max=temp.size();
}
temp="";
temp+=s[i];
}
}
return max;
}
};
Disclaimer: C++ is not my language
I think you need to re-read the specification of the question. Your code appears to only pick up on the longest substring if it is at the start of the given string or completely independant of the first substring. If the longest substring starts at character 2 of the first substring then you will miss it.
Example:
abcaefgh
Your algorithm produces the first candidate as "abc" length 3. Then it wipes temp and starts again at position 5 making the next candidate "efgh" length 4. 4 is returned as the result.
The actual longest substring is "bcaefgh" length 7.
In order to quickly hack your algorithm to work, add a new integer "j" outside the for loop. Set it to 1. Inside the for loop where you reset temp to the empty string set i=j and increment j; Total hack but it should get the correct result using your base code.
Remember to use the code formating button when you submit next time!
Edit:There's one other addition you need to do to to catch the case where the longest string is the last substring you encounter. I also added a simple optimisation that if you get to max length being greater than the number of characters from s[i] to sen [len-1] then you quit.
Sorry, it's in java but you should get the gist of the algorithm. Quick hint: whilst this works, it's not very efficient and you'll not pass the time constraint with it.
String temp=""; int len=s.length(); int max=0; int j=0; for(int i=0;i<len;++i){ String current = ""+s.charAt(i); boolean found=temp.contains(current); if(!found){ temp+=current; if(i==len-1&&temp.length()>max){ max=temp.length(); } } else{ if(temp.length()>max){ max=temp.length(); } temp=""; i=j; j++; if(max>(len-i)){ i=len; } } } return max;
The trick with this one is realising that you don't actually need to use a string type to keep track of the current substring. Try using a hashmap and just remember the starting index of the first character in the current candidate substring. Spoiler below:
public class Solution { public int lengthOfLongestSubstring(String s) { int len=s.length(); int max=0; int current_index = 0; HashMap<Character,Character> hm = new HashMap<Character,Character>(); for(int i=0;i<len;++i){ char current_char = s.charAt(i); if(!hm.containsKey(current_char)){ hm.put(current_char, current_char); } else{ if(hm.size()>max){ max=hm.size(); } while(s.charAt(current_index)!= current_char){ hm.remove(s.charAt(current_index)); current_index++; } current_index++; } } if(hm.size()>max){ max=hm.size(); } return max; } } | https://discuss.leetcode.com/topic/6366/anyone-know-what-s-wrong-with-my-code | CC-MAIN-2017-34 | refinedweb | 497 | 66.74 |
Sep 19, 2017 06:43 PM|oeurun|LINK
Hi,
I developed a chat module with SignalR in MVC4 application. I want to make optional this module. So I add a key to web.config. But I cannot achieve turn off SignalR. Its running in the background. I check network actions on chrome and I see actions such as "poll?transport=longpolling....." and "send?transport=longpolling....."
I comment all codes about signalr but it is running still.
How can i stop signalr service. How can i configure when SignalR service run.
Contributor
6450 Points
Sep 20, 2017 06:05 AM|Jean Sun|LINK
Hi oeurun,
oeurunI developed a chat module with SignalR in MVC4 application. I want to make optional this module. So I add a key to web.config. But I cannot achieve turn off SignalR. Its running in the background.
Please try the following scenario on my side.
1. Add the following setting in the web.config file.
<configuration> <appSettings> ... <add key="SignalREnabled" value="true"/> </appSettings>
2. Please check the StartUp.cs which is located under the root folder of your project. Modify the code as the follows.
public class Startup { public void Configuration(IAppBuilder app) { var SignalREnabled = System.Configuration.ConfigurationManager.AppSettings["SignalREnabled"]; if (SignalREnabled=="true") { app.MapSignalR(); } } }
3. You can hide the links the to the SignalR chat page based on the configuration settings.
@{ if("true".Equals(System.Configuration.ConfigurationManager.AppSettings["SignalREnabled"])) { <li>@Html.ActionLink("Chat", "Chat", "SignalR")</li> } }
Best Regards,
Jean
Contributor
6450 Points
Sep 27, 2017 06:37 AM|Jean Sun|LINK
Hi oeurun,
oeurunActually, I confused about visual studio browser link. The newtork traffic that i've seen is about browser link. I disable browser link on vs2015
You can find how to use browser link in the following article.
Best Regards,
Jean
3 replies
Last post Sep 27, 2017 06:37 AM by Jean Sun | https://forums.asp.net/t/2129040.aspx?Turn+On+Off+SingalR+ | CC-MAIN-2020-05 | refinedweb | 311 | 52.76 |
# Selenium Java Tutorial
## 1. Import the SDK into Maven's pom.xml
```xml
<!-- This is the Applitools Selenium Java SDK -->

<!-- Use JDK 1.8 (required for Maven CLI) -->
<properties>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>

<dependencies>
  <!-- This is the Applitools Selenium Java SDK -->
  <dependency>
    <groupId>com.applitools</groupId>
    <artifactId>eyes-selenium-java3</artifactId>
    <version>RELEASE</version>
  </dependency>

  <!-- Required to run "mvn package" which runs tests -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>

  <!-- Required for Java 10 -->
  <dependency>
    <groupId>javax.activation</groupId>
    <artifactId>activation</artifactId>
    <version>1.1.1</version>
  </dependency>
</dependencies>
```
## 2. Initialize the SDK and set the API key
```java
// Initialize the eyes SDK and set your private API key.
Eyes eyes = new Eyes();

// Set Applitools API key
eyes.setApiKey("APPLITOOLS_API_KEY");
```
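If you prefer not to hard-code the key, you can export it as an environment variable in your shell before running the test; the Java side can then read it with `System.getenv("APPLITOOLS_API_KEY")`. A minimal sketch (the key value is a placeholder — substitute your own private key):

```shell
# Export the Applitools API key for the current shell session.
# The value below is a placeholder -- substitute your own private key.
export APPLITOOLS_API_KEY="your-private-api-key"

# The Java code can then pick it up via System.getenv("APPLITOOLS_API_KEY").
echo "$APPLITOOLS_API_KEY"
```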
## 3. Set the application (AUT) name, the test name, and the browser's viewport size
```java
// Start the test by setting the AUT's name, the window or page name being tested,
// and the viewport width and height
eyes.open(driver, "appName", "windowName", new RectangleSize(600, 800));

// Navigate the browser to the "ACME" demo app
driver.get("");
```
## 4. Generate a screenshot
The following uploads the image data to Applitools so the AI can compare differences, generate the baseline, and so on.
```java
// Visual checkpoint.
eyes.checkWindow("Login window");
```
## 5. End the test
```java
// End the test
eyes.close();

// Close the browser.
driver.quit();

// If the test was aborted before eyes.close was called, ends the test as aborted.
eyes.abortIfNotClosed();
```
## Putting it all together (simplified code)
```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.exceptions.TestFailedException;

public class App {
    public static void main(String[] args) {
        // Use the Chrome browser
        WebDriver driver = new ChromeDriver();

        // Initialize the eyes SDK and set your private API key.
        Eyes eyes = new Eyes();

        // Set the API key from the env variable. Please read the "Important Note"
        // section above.
        eyes.setApiKey("APPLITOOLS_API_KEY");

        try {
            // Start the test by setting the AUT's name, the window or page name
            // being tested, and the viewport width and height
            eyes.open(driver, "appName", "windowName", new RectangleSize(600, 800));

            // Navigate the browser to the "ACME" demo app
            driver.get("");

            // Visual checkpoint #1.
            eyes.checkWindow("Login window");

            // End the test.
            eyes.close();
        } catch (TestFailedException e) {
            System.out.println("\n" + e + "\n");
        } finally {
            // Close the browser.
            driver.quit();

            // If the test was aborted before eyes.close was called, ends the test as
            // aborted.
            eyes.abortIfNotClosed();

            // End main test
            System.exit(0);
        }
    }
}
```
TIP
The instructions are elaborate and are for a brand new machine. You may skip most of them (Steps 3, 4, 5, and 6) if you already have Java, `JAVA_HOME`, Maven, the Chrome browser, and Chromedriver all set up properly.

Download Java's Java SE Development Kit (JDK) from here.
- Set JDK's `JAVA_HOME` environment variable so that the command line can find the JDK.
Mac:
Find the Java installation folder path.
- First, find where Java is installed on your Mac.
- Run `/usr/libexec/java_home` in your Terminal.
- You will see something like `/Library/Java/JavaVirtualMachines/jdk-10.0.2.jdk/Contents/Home`. The path will vary depending on the JDK version and your Mac's OS version. We now need to set this folder as the `JAVA_HOME` environment variable.
Set up the `JAVA_HOME` environment variable
- Open the `~/.bash_profile` file (create one if it's missing using `touch ~/.bash_profile`).
- Add the line `export JAVA_HOME="<PATH_TO_JDK_FOLDER>"`.
- Change `<PATH_TO_JDK_FOLDER>` to the folder path you found earlier.
- For example: `export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-10.0.2.jdk/Contents/Home`
- Save it and close the file.
- Run
source ~/.bash_profile. This will load the environment variables from the
~/.bash_profilefile.
- Run
$ echo $JAVA_HOME. It shuld say something like:
/Library/Java/JavaVirtualMachines/jdk-10.0.2.jdk/Contents/Home
Test your JAVA_HOME setup
- Run
javac --versionin your Terminal. It should show something like
javac 10.0.2.
- Run
echo $JAVA_HOMEand it should show the path you set earlier.
Windows:
Find the Java installation folder path
- Typically it is something like
C:\Program Files\Java\jdk10.0.2
Setup JAVA_HOME Environment variable
- Navigate to
Computer (Right click) | Properties | Advanced System Settings | Advanced (Tab) | Environment variables | System Variables.
- Click on New
- Add
JAVA_HOMEas the Variable Name.
- Enter JDK installation path in the Variable Value field. Typically this looks something like:
C:\Progra~1\Java\jdk10.0.2
- Click on
Save
Test your
JAVA_HOMEsetup
- Restart your command line prompt to load the new environment variable.
- Run
javac --versionin your command line prompt. It should say something like
javac 10.0.2.
Download Maven binary Zip file from here.
- It should look something like
apache-maven-3.5.4-bin.zip
- Follow the installation instructions from here to add it to the
PATH.
TIP
- It's better if you add it permanently to the environment so when you open a new Terminal the values will persist. Otherwise, you may have to redo it for everytime you open the Terminal. This means you should put it in the
~/.bash_profilefile (Mac) or in System variables in Windows. For more, see the Steps for adding
Chromedriverto the
PATHbelow.
- The Maven executable is inside
/binfolder of the extracted Maven directory. So you must include
/bin. It should look something like:
/Users/raja/apps/apache-maven-3.5.4/bin
Test your Maven setup
- Make sure to restart the Terminal or Command line prompt to load the new environment variables.
- Run
mvn -v. You should see something like below:
TIP
Installing
gitis optional. You need this mainly to clone the demo project from the Github repository. Instead of installing
git, you can simply download the Zip file from the repo. Further, If you are Mac, you already have
git.
Install Google Chrome browser from here
Install a
Chromedriverthat's appropriate for your operating system and your Chrome browser's version from here.".
- For example, if the chromedriver is in
/Users/apps/chromedriver, then it would be
export PATH="/Users/apps/:$PATH.
- Save the file and go back to the Terminal.
- Run
source ~/.bash_profile. This will load environment variables from
~/.bash_profile..
- Save it.
If everything went fine, and if you typed
chromedriverin the command prompt, you should see something like below:
Step 1.2 Download the demo projectStep 1.2 Download the demo project
- Clone the repo:
git clone
Go to the project folder
cd tutorial-selenium-java.
Run
mvn package. This will compile and get the project ready for execution.
Step 1.3 Importing the project into EclipseStep 1.3 Importing the project into Eclipse
The project can be run from the command line or through Eclipse. We recommend the command line option because it's faster. Also it is CI/CD friendly. However, if you want to run the tests through Eclipse, here is how to import it.
- Navigate to File | Import... | Maven | Existing Maven Projects | Next (button)
- Navigate to the place where you downloaded the project
- Click Next and Import the project.
Important Tip: How to load Environment variables into Eclipse / IntelliJ
If you are running the tests through an editor like Eclipse or IngelliJ. Open the editor directly from the Terminal (and not by clicking on the editor icon).
✴️ To do that, run "open /path/to/Eclipse or IntelliJ.app" from the Terminal.
This will load all the environment variables that's set in that Terminal into Eclipse/IntelliJ! This will avoid you hard-coding paths and keys for things as webdriver.chrome.driver, APPLITOOLS_API_KEY
Step 1.4 Run your first testStep 1.4 Run your first test
From command line:
Run
$ mvn exec:java -Dexec.mainClass="com.applitools.quickstarts.App" -Dexec.args="1"
From Eclipse:
"Run app" menu --> Run Configurations.. --> Arguments (tab) --> Enter "1" in the "Program arguments" (section) --> Click "Run".
App.javaand reduce
viewPortWidthand
viewPortHeight. Keep reducing until it's smaller than your monitor's size.
//App.java String viewportWidth = "1200"; // Reduce this to 800 String viewportHeight = "750"; // Reduce this to 600 String testName = "Login Page Java Quickstart"; String appName = "ACME app"; String loginPageName = "Login Page"; String appPageName = "App Page";
2
3
4
5
6
7
8.
From command line:
Run
$ mvn exec:java -Dexec.mainClass="com.applitools.quickstarts.App" -Dexec.args="2"
From Eclipse:
"Run app" menu --> Run Configurations.. --> Arguments (tab) --> Enter "2" in the "Program arguments" (section) --> Click "Run". eyes is superior to pixel-by-pixel comparisonPart 3 - Understand why Applitools AI eyes help eliminate pixel-by-pixel comparison. This will cause:
From command line:
Run
$ mvn exec:java -Dexec.mainClass="com.applitools.quickstarts.App" -Dexec.args="2"
From Eclipse:
"Run app" menu --> Run Configurations.. --> Arguments (tab) --> Enter "2" in the "Program arguments" (section) --> Click "Run". the screenshot and closes it.
From command line:
Run
$ mvn exec:java -Dexec.mainClass="com.applitools.quickstarts.App" -Dexec.args="3"
From Eclipse:
"Run app" menu --> Run Configurations.. --> Arguments (tab) --> Enter "3" in the "Program arguments" (section) --> Click "Run"..
From command line:
Run
$ mvn exec:java -Dexec.mainClass="com.applitools.quickstarts.App" -Dexec.args="4"
From Eclipse:
"Run app" menu --> Run Configurations.. --> Arguments (tab) --> Enter "4" in the "Program arguments" (section) --> Click "Run".
Step 5.3 See the differencesStep 5.3 See the differences
Step 5.4 Dealing with Dynamic contentsStep 5.4 Dealing with Dynamic contents
Dynamic contents are contents that constantly change, for example, a clock. If you use. | https://applitools.com/tutorials/selenium-java.html | CC-MAIN-2019-35 | refinedweb | 1,631 | 52.76 |
[…]
5. Data binding Part 1 (Text, raw HTML, JavaScript expressions)
I: […]
Java StringBuffer append new line example
/* Java StringBuffer append new line example This example shows how to append new line in StringBuffer in Java using append method. */ public class JavaStringBufferAppendNewLineExample { public static void main(String args[]){ //create StringBuffer object StringBuffer sbf = new StringBuffer(“This is the first line.”); /* * To append new line to StringBuffer in Java, […]. […]
Best trading platform for cryptocurrency
So, you have just picked up your first Bitcoin and are looking join the world of crypt° traders! Or maybe you already got started with the trading process and are simply looking to better your skills. But with so many overwhelming choices, finding the best one for you is a daunting task. Worry not! If […]
- « Previous Page
- 1
- 2
- 3
- 4
- …
- 46
- Next Page » | https://proprogramming.org/page/2/ | CC-MAIN-2018-47 | refinedweb | 136 | 60.55 |
. > > I. > > > diff --git a/libavutil/common.h b/libavutil/common.h > index d054f87..d9d99b1 100644 > --- a/libavutil/common.h > +++ b/libavutil/common.h > @@ -59,10 +59,16 @@ > #define FF_ARRAY_ELEMS(a) (sizeof(a) / sizeof((a)[0])) > #define FFALIGN(x, a) (((x)+(a)-1)&~((a)-1)) > > +#ifdef WIN32 Are you sure that shouldn't be _WIN32? > +#define AV_DLLIMPORT __declspec(dllimport) > +#else > +#define AV_DLLIMPORT > +#endif I don't like the name AV_DLLIMPORT; it is too Windows-specific. I'd rather call it AV_EXTERN_DATA and include "extern" in its definition (and remove extern from the declarations which would use it). This way the name carries more meaning to someone unfamiliar with Windows, and it wouldn't feel misnamed if it were ever to be needed on another system. -- M?ns Rullg?rd mans at mansr.com | http://ffmpeg.org/pipermail/ffmpeg-devel/2010-September/100832.html | CC-MAIN-2014-42 | refinedweb | 132 | 50.43 |
Doubts regarding Hashtable - Java Beginners
information,
Thanks... it possible to create a hashtable like this?
java.util.Hashtable hashtable=new...(12,13,10,1));
since we get the key of hashtable from the database.
When I tried
Java hashtable
Java hashtable What is hash-collision in Hashtable and how it is handled in Java
Java collection -Hashtable
Java collection -Hashtable What is Hashtable in java collection?
Java collection -Hashtable;-
The hashtable is used to store value... {
public static void main(String [] args){
Map map = new Hashtable
Lang and Util Base Libraries
Lang and Util Base Libraries
The Base libraries provides us the fundamental features and functionality of
the Java platform.
Lang and Util Packages
Lang and Util package provides the fundamental classes and Object of
primitive type
Java hashmap, hashtable
Java hashmap, hashtable When are you using hashmap and hashtable
util packages in java
util packages in java write a java program to display present date and after 25days what will be the date?
import java.util.*;
import java.text.*;
class FindDate{
public static void main(String[] args
Java Util Package - Utility Package of Java
Java Util Package - Utility Package of Java
Java Utility package is one of the most commonly used packages in the java
program. The Utility Package of Java consist
Java Hashtable Iterator
be traversed by the Iterator.
Java Hashtable Iterator Example
import java.util.*;
public class hashtable {
public static void main(String[] args) {
Hashtable hastab = new Hashtable();
hastab.put("a", "andrews
hashtable java swing
hashtable java swing i m getting this warning
code is here
Hashtable nu=new Hashtable();
Hashtable ns=new Hashtable();
nu.put(new...
mber of the raw type Hashtable
plz help me
Java Collection : Hashtable
Java Collection : Hashtable
In this tutorial, we are going to discuss one of concept (Hashtable ) of
Collection framework.
Hashtable :
Hashtable.... When you increase the entries in the Hashtable, the product of
the load
Java Util Examples List
examples that demonstrate the syntax and example code of
java util package... Java Util Examples List - Util Tutorials
The util package or java provides many
How to find hashtable size in Java?
How to find hashtable size in Java? Hi,
What is the code for Hashtable in Java? How to find hashtable size in Java?
Give me the easy code.
Thanks
hashtable - Java Beginners
hashtable pls what is a hashtable in java and how can we use... to Roseindia");
Hashtable hash = new Hashtable();
hash.put("amar","amar");
hash.put...://
Thanks.
Amardeep
Hashtable java prog - Java Interview Questions
Hashtable java prog Create a Hashtable with some students hall... the results? please provide the java code detaily for this query?thanks... bal;
Hashtable table = new Hashtable();
table.put( new Integer(1111),"Selected - Java Interview Questions
://
Thank you for posting...Java By default, Hashtable is unordered. Then, how can you retrieve
Hashtable elements in the same order as they are put inside? Hi hasNext
Java hasNext()
This tutorial discusses how to use the hasNext() method... Iterator. We are going to use
hasNext()
method of interface Iterator
in Java... through the following java program. True is return by this method in case
The Hashtable Class
The Hashtable Class
In this section, you will learn about Hashtable and its implementation with
the help of example.
Hashtable is integrated... for complete list of Hashtable's method.
EXAMPLE
import java.util.*;
public
util
java - Java Beginners
java write a programme to to implement queues using list interface Hi Friend,
Please visit the following link:
Thanks
Data Structures in Java
Data Structures in Java
In this Section, you will learn the data structure of java.util
package with example code.
Java util package(java.util) provide us...
Hashtable
Properties
After the release of Collections in Java 2 release
Associate a value with an object
with an object in Java util.
Here, you
will know how to associate the value... of the several extentions
to the java programming language i.e. the "...;}
}
Download this example
java - Java Beginners
Java hashtable null values How to get if there is null value in the Java hashtable? and is it possible to accept both null and values
java.util - Java Interview Questions
* WeakHashMapLearn Java Utility Package at... description of java.util PackageThe Java util package contains the collections framework..., internationalization, and miscellaneous utility classesThe util package of java provides
java - Applet
://
Thanks...java what is the use of java.utl Hi Friend,
The java
java persistence example
java persistence example java persistence example
J2ME HashTable Example
J2ME HashTable Example
To use the HashTable, java.util.Hashtable package must be
imported into the application. Generally HashTable are used to map the keys to
values
java
java why data structures?
The data structures provided by the Java utility package are very powerful and perform a wide range...:
Enumeration
BitSet
Vector
Stack
Dictionary
Hashtable
Properties
These classes
Collections in Java
Collections in Java are data-structures primarily defined through a set of classes and interface and used by Java professionals. Some collections in Java that are defined in Java collection framework are: Vectors, ArrayList, HashMap
java - Java Interview Questions
information :
Thanks
Java Collection
Java Collection What are Vector, Hashtable, LinkedList and Enumeration
Java collection
Java collection What are differences between Enumeration, ArrayList, Hashtable and Collections give a simple example for inheritance in java
java - Java Beginners
Define what is Vector with an example?why it is used and where it is used
Define HashTable ? how can we enter the Keys and Values in HashTable ?
would u give the example source code for it
thanks
krishnarao
VECTOR collection
Java collection What are differences between Enumeration, ArrayList, Hashtable and Collections and Collection
Java
Java I want to practise with Java Recursive program
The recursive functions are the function which repeats itself in a java program. Here is an example of recursive function that finds the factorial of a number
java
java what is an interface? what is an interface?
... but cannot be instantiated.
For more information, visit the following link:
Java Interface Example
Java Syntax - Java Beginners
to :
Thanks...Java Syntax Hi!
I need a bit of help on this...
Can anyone tell
JAVA
JAVA plz send me code, How to find fare form one place to another place using Java,Jsp,Servlets?
for example:i need to calculate from bangalore to gulbarga..i need to claculate the bus fare,distance in kms
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/20661 | CC-MAIN-2015-48 | refinedweb | 1,073 | 55.95 |
What if I want to clone something to which I have push (write) access, like thoughtbot/high_voltage? Here you go:
# ~/.gitconfig [url "git@github.com:"] # With write access insteadOf = wgh:
$ git clone wgh:thoughtbot/high_voltage
Now then, what if you need to add a Heroku remote for the sushi app? You could go to Heroku, log in, click “My Apps”, click “Sushi”, and find the remote URL. Or, you can do this:
# ~/.gitconfig [url "git@heroku.com:"] insteadOf = heroku:
$ git remote add heroku:sushi.git
Before I learned this tip, I missed it real bad. For more, check out my gitconfig. What are your little Git tips?
I.
Many at thoughtbot run their editor+shell combos inside of tmux. Some remote pair program with ssh, vim, and tmux.
Getting started with tmux, these are the questions I’ve had.
Install tmux, read the documentation, and fire it up.
brew install tmux man tmux tmux -u ''
The “prefix” namespaces tmux commands. By default it is
Ctrl+b. In our
tmux.conf in
thoughtbot/dotfiles, we bound it to
Ctrl+a:
# act like GNU screen unbind C-b set -g prefix C-a
This was non-obvious to me.
Enter “copy mode”:
prefix+[
Use vim bindings to page up and down:
Ctrl+b Ctrl+f
Add this to your
tmux.conf:
# enable copy-paste # enable RubyMotion set -g default-command "reattach-to-user-namespace -l zsh"
Create a window:
prefix c
Move to window 1:
prefix 1
Move to window 2:
prefix 2
Kill a window:
prefix x
I believe in setting my mouse free but it takes time for muscle memory to make this fast.
~/.tmux.conf?
After editing
~/.tmux.conf, execute this from a shell:
tmux source-file ~/.tmux.conf.
Written by Dan Croak. | http://robots.thoughtbot.com/tagged/productivity | CC-MAIN-2013-20 | refinedweb | 294 | 76.72 |
I'm trying to post a json file to influxdb on my local host. This is the code:
import json
import requests
url = ''
files ={'file' : open('sample.json', 'rb')}
r = requests.post(url, files=files)
print(r.text)
sample.json
{
"region" : "eu-west-1",
"instanceType": "m1.small"
}
{"error":"unable to parse '--1bee44675e8c42d8985e750b2483e0a8\r':
missing fields\nunable to parse 'Content-Disposition: form-data;
name=\"file\"; filename=\"sample.json\"\r': invalid field
format\nunable to parse '\r': missing fields\nunable to parse '{':
missing fields\nunable to parse '\"region\" : \"eu-west-1\",': invalid
field format\nunable to parse '\"instanceType\": \"m1.small\"': invalid
field format\nunable to parse '}': missing fields"}
I think that the fault maybe is that you just open the file but not read it. I mean since you want to post the content of the
json object which is stored on the file, and not the file itself, it may be better to do that instead:
import json import requests url = '' json_data = open('sample.json', 'rb').read() # read the json data from the file r = requests.post(url, data=json_data) # post them as data print(r.text)
which is actually your code modified just a bit... | https://codedump.io/share/COvc7NNiFYVf/1/status-code-400-on-post-message-to-influxdb | CC-MAIN-2017-17 | refinedweb | 195 | 58.08 |
FHOPEN(2) BSD Programmer's Manual FHOPEN(2)
fhopen, fhstat, fhstatfs - access file via file handle
#include <sys/types.h> #include <sys/stat.h> int fhopen(const fhandle_t *fhp, int flags); int fhstat(const fhandle_t *fhp, struct stat *sb); int fhstatfs(const fhandle_t *fhp, struct statfs *buf);
These functions provide a means to access a file given the file handle fhp.() and fhstatfs() provide the functionality of the fstat(2) and fstatfs(2) calls except that they return information for the file..
fstat(2), fstatfs(2), getfh(2), open(2)
The fhopen(), fhstat(), and fhstatfs() functions first appeared in NetBSD 1.5. MirOS BSD #10-current June 29,. | https://www.mirbsd.org/htman/i386/man2/fhstat.htm | CC-MAIN-2015-32 | refinedweb | 108 | 56.25 |
I am trying to figure out how to take my .class files, put them in a .jar file, and then
import the package into a .java file.
So far, I have been able to create the .jar file, but I am seriously stuck now.
My directory looks like this:
Dir01.jpg
According to info here:
Setting the class path
and here:
Classpath (Java) - Wikipedia, the free encyclopedia
I should be able to run the HelloWorld file from the command prompt . . .
however here is all I get:
command-1.jpg
I am getting nothing but the same message, no matter what I do,
ad infinitum . . . Error: Could not find or load main class apk.pkg.HelloWorld
As you can see in the command prompt image, >java HelloWorld works fine and prints out "This is a test".
Where do I go from here???
Windows 7, Java SE Runtime Environment 1.7.0_25 64-bit. | http://www.javaprogrammingforums.com/java-theory-questions/33052-classpath-jar-import-statements.html | CC-MAIN-2014-15 | refinedweb | 152 | 85.59 |
Which PHP5 Framework is Your Favorite?
matt_j_99 asks: "With all the talk about Ruby on Rails, I've been thinking about PHP frameworks. Ruby on Rails looks pretty cool, but frankly, I don't want to learn a new language. It seems that with all the Slashdot discussion about RoR, somebody always makes the valid point that PHP is not a framework. But with PHP5's object-oriented features, a standard framework might emerge. Prado, Carthag, BlueShoes, and PHITE all seem like interesting frameworks. What PHP frameworks have you used in your applications? What were the pros and cons of each? Which framework do you think will have the best chance of long-term viability and maintenance?"
Pretty obvious answer (Score:2, Insightful)
Re:Pretty obvious answer (Score:1)
Re:Pretty obvious answer (Score:1)
Prepare to be sued into the next millennium!
Re:Pretty obvious answer (Score:2, Insightful)
Indeed. You have the choice of someone else's idea of what features and work-flow you need, or your own. Writing wrapper classes for output, databases, etc. isn't that hard, and you can get a solution 100% tailored to your needs.
The only argument I could imagine for using someone else's framework is to reduce the overhead to bring in new programmers since they'll already know much of the ropes. But in the case of PHP there really isn't a clear winning system with a large pool of available programmers...
Your own DBMS (Score:1)
The thing is I think people don't think deep enough anymore.
What is a framework?
I'm not sure we can all agree on the answer, though I am sure that there is a formal answer.
I would say a framework is a factory.
Someone else can say, a framework is a meta-tool, a tool that makes tools.
Another can say, a framework is a domain-specific language.
And another (the one I like the best) would ask, what is the difference between all those answers?
uh-oh (Score:5, Insightful)
but frankly, I don't want to learn a new language
That's the worst thing that can happen to a professional (assuming you are one): not being willing to learn new things. I strongly recommend you learn Ruby; "it puts the fun back into programming". You won't regret it.
Re:uh-oh (Score:1)
Re:uh-oh (Score:3, Insightful)
I also go along with the sentiment (of sometimes wanting to avoid learning yet another language).
This can be summed up with the question: "Is this guy a programmer with ten years' experience, or a programmer who has repeated one year of experience ten times over?"
As Grasshopper plans his career it can be good if he asks himself how others will see him in a few years.
Re:uh-oh (Score:3, Insightful)
Every language has advantages & disadvantages. I love Ruby & Rails, but for some jobs, I'll still use PHP. Some jobs are best
Re:uh-oh (Score:2)
This argument falls flat pretty quick. You don't need to abandon everything else you've learned to learn a new language.
I've got no argument about that. However, the time you spend learning a new language I will spend learning better approaches to the problems that I'm addressing with one language. That is, given a choice between spending half a day learning Ruby's syntax, and putting that time into tweaking a Perl object that models conversions between HTML and XML, I think I'm more productive by any measure.
not only that.... (Score:2)
Going right off topic here (Score:1)
You don't know Javascript? Or maybe you don't know that it has much in common with Lisp/Scheme, with C-like syntax [crockford.com]? Look into it - as much as people denigrate it, it's one of the coolest languages out there.
Ruby has pretty much the same features - functional programming is very possible. If you master the concepts using these mostly familiar tools, it's much simpler to jump into the functional languages (or at least it
Re:Going right off topic here (Score:2)
I do see that Ruby has functional aspects, but I don't have to use those for the basics--I can stick to the models I know, like OO. So its easy for me to get started. Eventually, yes, I suspect it will be as you say, but I'm not quite there yet. Still, I'm having fun, and that's the important thing.
Re:Going right off topic here (Score:2)
Re:uh-oh (Score:2)
Re:uh-oh (Score:3, Insightful)
I switched from PHP to Ruby after reading the Pragmatic Programmer's tip to learn a new language every year. Learning new languages makes you a better programmer, and Ruby is a great language to learn.
After only a few months using Ruby on Rails, I can't imagine trying to manage a large project in PHP. My attempts at OOP resulted in huge classes (100+ lines), my code wasn't reusable, unit testing was nonexistent, and adding functionality to an existing page usually meant breaking the rest of the application.
Re:uh-oh (Score:1)
Err, if you don't write unit tests, whose fault is that? I don't see how language choice helps with that.
Re:uh-oh (Score:2)
So what was it about my post that set you off? Or are you just having a bad day?
Yes, you can do something with 100+ line classes. (Notice that 500 lines is larger than 100 lines, and would th
Re:uh-oh (Score:1)
Uh, yeah. My point of contention was that 100 lines makes a "huge" class.
Depends. Dunno "Active Record" from a hole in the ground, but I've had plenty of experience with packages that only take 3 lines to give you almost what 300 lines would otherwise do, and somewhere do
Re:uh-oh (Score:2)
Excellent point!
Re:uh-oh (Score:2)
Do you have any more compelling arguments than that? The first time I read about Ruby I was interested. Now, by the 10000th time I read how much better Ruby is, without any specific reason other than some people love it so much, I'm pretty bored.
And, BTW, you are way off topic. This article is about PHP frameworks. Let me explain to you. PHP is considered by some to be an excellent language. However, differently
Ruby resources (Score:4, Informative)
Heh, well, no offense, but people who reply like you usually haven't tried Ruby, or don't understand it. Otherwise you would be in love with it already.
We cannot compare PHP and Ruby. It's like comparing BASIC and Perl, you get the idea. Remember when you discovered Perl and all its magic? Well, that's what happens when you get into Ruby. It's a true object oriented and dynamic language ready for real applications.
This might or not make sense to you. It depends on the use you are giving to your language of choice. If you write one-liners in Perl, you might not feel motivated to move to Ruby. If you are writing templates in PHP for your web applications and you're doing fine, you might not need Ruby either.
You see the light :) when you want to write OO applications/scripts. PHP used to have an awful OO hack (I haven't seen PHP 5), as does Perl 5. Python would be your choice, but for reasons I cannot explain (yes, this is subjective) Ruby feels more natural.
Ok, I have fallen again in the "I love Ruby so much" that gets you so bored. So, here is some homework for you (some very nice presentations and small articles):
Ruby: A transparent, object-oriented programming language [pragmaticprogrammer.com]
10 Things Every Java Programmer Should Know About Ruby [onestepback.org]
The Ruby Programming Language [informit.com] (by Matz, Ruby's author)
Thirty-seven reasons I love Ruby [rubyhacker.com]
Blocks and closures in Ruby [artima.com]
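For readers skimming that list, the block-and-closure idiom the last two links cover fits in a few lines of plain Ruby (the method names here are invented for illustration):

```ruby
# A method that yields control to a caller-supplied block.
def twice
  yield
  yield
end

count = 0
twice { count += 1 }   # the block is a closure over `count`
puts count             # => 2

# A closure captured as an object; it keeps seeing `step` and `total`
# long after make_counter has returned.
def make_counter(step)
  total = 0
  lambda { total += step }
end

counter = make_counter(5)
counter.call
puts counter.call      # => 10
```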
Re:uh-oh (Score:2)
Well, we're not really THAT far off topic. The reason that the OP doesn't want to use Rails is that they don't want to learn another language. That's a good reason on the surface, but to those of us
Re:uh-oh (Score:1)
I maintain a site in PHP (I am not about to slashdot myself). It was not my first choice for a language. I bought a cheap (£20 for two years, including 1GB transfer per month; can you show me a Ruby hosting service that can match that?) domain plus hosting based on the availability of Python support.
PHP i
Re:uh-oh (Score:2)
My point is not that everyone should switch to Rails. But learning a web framework is never a minor undertaking. For me, learn
Re:uh-oh (Score:2)
Yep, you're right. Us Ruby users are just sitting on our asses cheering, not actually improving anything. Oh, wait. No, we're not. The entire Rails environment is only about a year old, and is advancing literally every day. The core ruby language has been at the same version number for a while now, but there is a new version on the near horizon
Re:uh-oh (Score:2)
Funny typo, perhaps you were thinking about Charles Fort? [forteana.org]
But, if you really meant Henry Ford, he fell so much in love with his Model T that General Motors has sold more cars than Ford every year since 1927. Ford was unable to view his own creation objectively and realize that it wasn't perfect, while Chevrolets were imp
Re:uh-oh (Score:2)
Can you explain in a few words why Ruby is so superior? Can you explain why PHP is worse than, for instance, Visual Basic, Fortran, or Cobol? If it's so obvious, it should be very easy to demonstrate.
A few words? How about one: taint [phrogz.net].
Or for a broader one that encompasses the first, security [harvard.edu].
These are just two examples of what I see as a broad pattern. The attitudes of the two languages (and thei
Re:uh-oh (Score:2)
The issues in that paper boil down to two problems:
The first is not a problem as of now, as the insecurity of automatically registering global variables with user input is a widely recognized problem. Any developer worth his salt will avoid register_globals (and it i
Re:uh-oh (Score:2)
I prefer design by contract, with precondition and postcondition checks in place.
For what it's worth, Ruby supports both of those as well (e.g. with assert_* in
:around methods, or in Rails with validates and/or before/after filters). I don't recall PHP having anything of the sort--or at least, I wasn't able to find it a few months ago when I was trying to shore up a badly compromised PHP application.
--MarkusQ
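For what those checks might look like in practice, here is a hand-rolled sketch in plain Ruby; the language has no built-in design-by-contract facility, so the contract is just explicit raises at entry and exit (the method name is made up):

```ruby
# Precondition and postcondition checks done by hand.
def checked_sqrt(x)
  # Precondition: reject invalid input before doing any work.
  raise ArgumentError, "precondition failed: x must be >= 0" unless x >= 0
  result = Math.sqrt(x)
  # Postcondition: verify the result before handing it back.
  raise "postcondition failed" unless (result * result - x).abs < 1e-9
  result
end

checked_sqrt(9.0)    # => 3.0
# checked_sqrt(-1)   # would raise ArgumentError
```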
Re:uh-oh (Score:1)
Re:uh-oh (Score:2)
Can you explain why PHP is worse than, for instance, Visual Basic, Fortran, or Cobol? If it's so obvious, it should be very easy to demonstrate.
It's not "worse". It's just that there's an inexplicable bias against PHP which has very little basis in reality. PHP is a great, powerful language that does what it does very well, despite its imperfections (and by the way, all programming languages have about the same degree of imperfections).
Re:uh-oh (Score:2)
Not all companies 'sanction' all languages. I typically don't have much of a choice on language when doing my job. What makes you think everyone else does? What the hell is wrong with you Ruby zealots?
Re:uh-oh (Score:2)
What are you talking about?
What makes you think I'm thinking everyone else can choose his/her language? Anyways, the OP clearly *can* choose his language. He's just too lazy to learn another one. And that's what I'm
Re:uh-oh (Score:2)
People without spare time ne
Re:uh-oh (Score:1)
There is one problem though: if he works for a company, they may have very strict regulations about what he can/cannot use (for stuff at work).
For example, at my place we're pigeon-holed into using only a small set of languages and frameworks. They even recently cut back one or two.
The common reasoning is, if they don't put down controls, then developers go off and do their own thing, which is ok for the s
Re:uh-oh (Score:3, Insightful)
Now his manager's manager wants to know what the business case was for coding in a language that:
1) No one else knows.
2) Could have been done in Java.
3) Offers no benefits over other languages alre
yet another framework (Score:3, Informative)
MVC Framework (Score:1)
Which PHP5 Framework is Your Favorite? (Score:5, Funny)
I guess I sorta like them all
Re:Which PHP5 Framework is Your Favorite? (Score:3, Informative)
+5, Awesome
Re:Which PHP5 Framework is Your Favorite? (Score:2)
Learn Ruby (Score:5, Insightful)
The secret to Ruby on Rails is RUBY. You just can't do that kind of stuff in PHP. PHP is pretty pathetic once you get beyond the basics. It is truly a language for the "bottom 95%" as I call it. PHP has at least the following flaws:
* poor metaprogramming: try creating an anonymous function in PHP, it's just a STRING! Yuck. Closures? Never heard of 'em. Try writing a one-liner in PHP that sorts a list of objects. Impossible.
* global variables for session, cookies, etc. Makes unit-testing a bitch!
* no "finally" clause on exceptions. WTF? Built-in functions don't raise exceptions. WTF?
* no way to refactor object fields. Yes you can use "__get/__set" but those "fake" fields don't work in every place a regular field works. WTF? In Ruby everything is a method, there are no fields, refactoring is a breeze.
* No "mixins".. I can't write a method and then stick it into multiple classes. Not even with include().
* Exposes variables vs. variable references. I thought PHP5 would get rid of "&" forever. I was wrong.
Now Ruby ain't Lisp, that's for sure. But I'd rather stick forks in my eyes than programming in PHP again.
Anyway, a good programmer has no problem learning a new language. It'll take you longer to learn the framework than the language. Ruby is simple and clean and VERY consistent from top to bottom, give it a try.
Ruby is a passing fad, a PhD's toy (Score:2, Informative)
Yes, sure, if you worry about "metaprogramming", "refactor object fields", or "Exposes variables vs. variable references", then Ruby is the language for you, but... how about Oberon? Now that's one language I'm sure you'll love!
OTOH, if you aren't in an ivory tower and have to program for a living, then PHP is like C, a language the "perfessors" hate,but i
Re:Ruby is a passing fad, a PhD's toy (Score:3, Interesting)
Or would that be the 99.5% of PHP apps that have constant SQL and variable injection attacks. *cough* PHP XML_RPC support *cough*
Re:Ruby is a passing fad, a PhD's toy (Score:2)
People vs. Technology (Score:3, Interesting)
And beginners won't know to ask about it. The incorrect option is all they know. The solution, of course, is better tools.
And, once again, there is NO excuse for building a network-aware technology that allows for setting variables from the URI query string. None. Even
Re:People vs. Technology (Score:2)
And beginners won't know to ask about it.
Eh. There's no way to give amateur developers the ability to produce professional code. If you want a secure system, don't hire a novice. It's not PHP's fault that a developer is a failure, to be quite blunt.
And, once again, there is NO excuse for building a network-aware technology that allows for setting variables from the URI query string. None. Even PH
Re:People vs. Technology (Score:3, Insightful)
In the modern internet, the practice is unforgivable.
Re:People vs. Technology (Score:2)
Re:Ruby is a passing fad, a PhD's toy (Score:2)
The need to learn a new language is certainly a reasonable concern. Fortunately Ruby has a reason
Re:Ruby is a passing fad, a PhD's toy (Score:2)
Re:Ruby is a passing fad, a PhD's toy (Score:2)
(while the rest of us program rings around you in languages like Ruby, Python and Lisp)
Re:Learn Ruby (Score:2)
That is the most coherent list of the deficiences in PHP compared to other more well designed OO languages that I have seen. Most people just say "oh PHP is rubbish compared to a real OO language" but they never bother to say exactly why they feel that way.
nobody can deny that PHP is a bit of a dogs breakfast when it comes to "design", it's had a very evolutionary and "throw everything in" history (a bajillion functions always there, no modularity), but the fact is that it works, it's n
Re:Learn Ruby (Score:2)
The real question is: Does Ruby present the same flaws that prevented Smalltalk from taking over the world? Take a look to the side, and you'll find C powering most applications, not Objective C -- a much much cleaner language. This is the same kind of scenario.
Probably, a language for the bottom
Delphi (Score:1, Offtopic)
I don't have a point. I just thought it was kinda neat.
Re:Delphi (Score:1)
Re:Delphi (Score:2)
Oy vey... (Score:2)
What a silly perspective. I've never met a carpenter who knew how to use a hammer, but refused to learn screwdrivers, miter saws, and a lathe.
you win the horrible anaolgy of the day contest (Score:4, Insightful)
Hammer + tool (Score:1)
Re:you win the horrible anaolgy of the day contest (Score:2)
Really? Come visit some time, I will introduce you do a few.
Carpenters do not use hammers for driving nails much anymore, their are tools that do the job faster. Take a hammer and beat a chunk of wood with the claw and you can get an acceptable dado. Not as good as what a chisel/dado saw/router can do, but acceptable for rough work, and you may not have the other tools.
I've never done it, but a sharp claw ought to be able to turn a table leg just like real lathe tools. Mind it would be very dangero
Zelots. (Score:2, Insightful)
Re:Zelots. (Score:3, Insightful)
PHP is very accessible, and that's a great strength. But any time you start talking about "frameworks", you're well outside the user base that is best served by accessible.
Having said that, there are a lot of big PHP projects doing good service in the real world. It might not be the place to start new development, but integrating it into new dev
Frameworks for PHP, not that hot. (Score:5, Interesting)
But with PHP5's, Object Oriented features, a standard framework might emerge.
Indeed, one might. So far, not looking so good on that front. All the frameworks I've encountered so far have seemed cumbersome or tedious somehow (I glanced at Prado just now; the advantages of their approach aren't readily apparent, I'd say. The demos are unimpressive, using some god-awful javascript: pseudo protocol links for updates and deletes, which really puts the internals of the framework into serious question).
It seems that PHP is bereft of any real, exciting developments on the framework front. There are a lot of frameworks, but I guess the reason why none stand out like Rails does with Ruby is simply that none are good enough, providing no significant added value.
You have to ask yourself: why a PHP framework? What such significant advantages would one of the existing frameworks provide that learning its ins and outs wouldn't be a waste of time and energy? If you're looking to automate some of the drudgework of form processing, for example, I suggest you roll a minimalist "frameworklet" - or simply "component" - yourself (if that's plausible in your situation) for that specific purpose, making it generic enough to be reusable, but not so generic that you end up fitting your projects to the tools instead of vice versa, which often happens with frameworks.
I've found minimalism to work really well with PHP. Frameworks that try to be all things to all people mostly end up being more trouble than they're worth. It may very well be faster and more efficient (and more fun) to code a small component for a specific purpose than trying to work with an existing solution. Your own solution will be tailored to fit your application and will work as your mind wants it to work, not the way the framework creator sees fit for himself.
It's a Unixy approach, I think: combine small tools in inventive ways to accomplish even the largest tasks. Of course, with tools of your own creation, you wouldn't have to deal with inconsistent APIs, a thousand syntaxes and wholly different philosophies across these tools. Write a custom session handler here, a generic input validator there... Even if you create these tools for a specific project, you will most likely find yourself reusing them in future projects, too, with possible minor customizations.
An example: when I first wanted a lightweight way of separating the business logic from the display logic for a project, I coded a single class that did the template stuff, using standard PHP with no additional burdens. Smarty etc. were readily available options, but PHP is already a templating language, and separate template engines would just have added excess bloat to the mix. My solution wasn't as feature-rich, of course, but it did exactly what it had to in the parameters set by the project specs. I've successfully and rapidly reused the code (and more importantly, the overall technique) in several later projects. Besides templating, I've had similar good experiences with an extensible input validation system I cooked up once, adjusting and refining it to later projects.
The way I see it is this: languages like Ruby and Python benefit from good web frameworks, since they're non-web-specific languages, and these frameworks make their use a lot more convenient in web programming. PHP, on the other hand, is very much a web programming language at heart. Ignore the "PHP suxx0rz!" trolls, it is a good tool for that purpose. (Even though it's capable of more, it's rarely - if ever - the best choice in those circumstances.) The best a framework would do with PHP is addressing clear shortcomings of the language in some way, but you don't really need a full-fledged framework to fight these annoyances. I find the "invented here" mini-component approach superior.
In short, I don't see a framework "enabling" significantly better ways to do web programming in PHP, unlike with Ruby or Python. For PHP, a framework will probably be more trouble than it
Re:Frameworks for PHP, not that hot. (Score:3, Insightful)
I heartily disagree.
First of all, can you explain what's a good "web-specific-language"? As both a web-programmer and a general-purpose programmer, I'd say there's nothing really "web-specific" about a core language. You
Which PHP App? (Score:5, Funny)
I have yet to see a PHP app -- especially one that also used MySQL -- that used a design pattern other than "Big Ball of Mud" most often.
Do be fair, PHP 5 looks pretty good -- or at least is a vast improvement. Unfortunately I can't say the same thing about the people who've coded in PHP up to this point. Even when PHP shows some growth, most PHP coders ignore it.
"Database abstraction? Why would anyone need that?"
"Namespaces? Why would anyone need that?"
"Design patterns? What are those?"
"Security? If it's a problem, we'll fix it?
-----
I'm liking this meme. Anyone got any more?
Re:Which PHP App? (Score:1)
You can remove the PHP qualifier from that statement and it's just as true. I've seen just as much good PHP code as good code in any other language - i.e., precious little.
Re:Which PHP App? (Score:2)
Re:Which PHP App? (Score:3, Informative) [php.net]
"Namespaces? Why would anyone need that?"
it's coming [beeblex.com]
"Design patterns? What are those?" [php.net] [phppatterns.com]
php|architect's Guide to PHP Design Patterns [phparch.com]
"Security? If it's a problem, we'll fix it later." [php.net]
(Almost all of PHP's historical security problems have been third-party.)
S
Re:Which PHP App? (Score:2)
From the top of that PDO page you sent me to:
Re: namespaces. How old is PHP? And they're only now getting around to it? Younger languages seem to have them. Why has PHP, a very popular l
Re:Which PHP App? (Score:2)
Have a look to the Drupal [drupal.org] code. Great innovative design can be found there.
Re:Which PHP App? (Score:2, Flamebait)
Sounds conspicuously similar to "I was coding in Prolog when I discovered BASIC."
For very simple pages, pages that amount to slightly more code than a server-side include, PHP is a perfect fit. This is obvious because PHP was originally made to be a page-embedded version of Perl -- originally written in Perl. Back in the days of CGI scripts, mostly in Perl, when you only had a little bit of code and a lot of markup, PHP (and ASP, ColdFusion, etc.) were a br
Re:Which PHP App? (Score:3, Interesting)
While I agree with this, I think that, for trivial apps, it's not such a big deal to mix the presentation layer and the logic. The problem I have with PHP is that some over-enthusiastic developers have tried to extend its use to writing enterprise applications. That's not what it was designed for, though feature-creep has moved it a little in that direction. There's not much wrong with it for small-s
Re:Which PHP App? (Score:2)
Embedding logic in your presentation layer is a bad idea. It ties your code to how your presenting it. Want to change how your site looks? Gotta change code. Want to change from HTML tables to CSS? Looks like it'll be a fundamental rewrite.
This is the problem that PHP -- and most mod_perl-based frameworks for that matter, like Slashdot -- have. It's a write-once language that you simply pile more and more code on until it break
A Short History of Web Apps (Score:2)
But the problems with patching your web server for even the slightest dynamism were immediately obvious even in the early days of "a patchy server". So CGI (Common Gateway Interface) was born and Perl rose up as the dominant web language. Process creation overhead became an issue as the web took off so techniques for mitigating this were put in like mod_perl, servlets,
Re:Which PHP App? (Score:1)
Of course, notices aren't shown by default (presumably because it'd wreck havoc on most pieces of php code) so most people won't know. First thing I do to my php.ini is to set error_reporting to E_ALL (default is E_ALL & ~E_NOTICE it seems).
Does PHP5 suffer from excessive RAM usage? (Score:2) id=13297391 [slashdot.org]
Now, I have been contemplating the use of PHP5 for various webapps, but after reading that I am unsure about whether or not I should use it. Mind you I do not want to invest large sums of money into an accelerator, which appears to be necessary if reasonable server memory usage is a goal.
Does PHP5 indeed suffer from excessive memory consumption, and if so, can it
Re:Does PHP5 suffer from excessive RAM usage? (Score:1)
The Apache module does use a significant amount of RAM, but unlike standard CGI's, the processes are not dying and re-spawning, so the same process actual
I use... (Score:1)
version 2 will work with gtk 2.x, which is great!
It is not much of a thing, but will be ok!
eGroupWare, of course! (Score:4, Informative)
After searching all over for several weeks, I chose eGroupWare [sourceforge.net]. Their "etemplates" framework settled the issue for me.
Re:eGroupWare, of course! (Score:3, Funny)
This completely explodified my sarcasmification detector . . .
Have your cake... (Score:2)
frameworks? no i18n, no custom auth, ... (Score:2, Insightful)
Almost all the frameworks, no matter which language they are written in, don't provide the basics for a real world application. What about i18n? I have yet to see a framework where the template system AND the application supports translation of messages.
Customize Authentication? There are more complex apps that don't just require username+password to login (e.g. logon to database - username+password+database depending on the database you may have access or not). Also users may be in many groups, each gro
php.MVC (Score:1) [phpmvc.net]
It's in beta, but I think a good MVC framework is all PHP needs to stop looking like such spaghetti. In defense of the Ruby zealots: I've haven't learned Ruby yet, but it's exactly the futuristic *REAL* object-oriented language that's going to propel us into the future. PHP is very old-school in the way code is written -- it DOES encourage spaghetti coding -- and for that I think it deserves to be phased out.
Propel. (Score:2) [phpdb.org]
Try Achievo ATK (Score:1)
It's essentially a 'business framework', targeted at developing web applications. Where other frameworks mainly provide a large set of utility classes, ATK lets you write an application in as few as 10 lines [achievo.org] of code.
We're all about inventing every wheel only once. Everything that can be generalized, will be, but anything that the framework automatically offers, can be fully customized.
In one of the replies to this story, I18N and custom
Hmmm, what for? (Score:1)
My question is: Is there something similar to Hibernate for PHP?
Here is a reasonably complete list of frameworks (Score:2)
List of PHP Frameworks [dmoz.org]
By the way, it's interesting how a thread about PHP frameworks turned into a thread for Ruby-on-Rails zealotry. I won't knock Ruby, but if PHP is good enough for the Wikipedia, Yahoo! and Friendster, it's good enough for me. There's nothing wrong with wanting to become a guru in one language (that happens to be in the top five in popularity!) than becoming a jack of all trades.
Re:lame excuse (Score:1, Offtopic)
Re:lame excuse (Score:2, Interesting)
Re:lame excuse (Score:2, Insightful)
Seriously, is Ruby some kind of a cult or something? I thought mac zealots were bad, but everytime a scripting language is mentioned the ruby enthusiasts come out with such hate for everything non-Ruby. Get a grip.
Re:lame excuse (Score:2)
I don't see the zealotry nor the cult here. He replied to a troll, with a nice summary of Ruby features not found in C.
You get a grip
Re:lame excuse (Score:1)
But does Ruby like koolaid? I don't know. I just don't know.
To be honest, I've been using Ruby in a zen-like trance to help me attain my buddha-hood, so I haven't watched these in awhile.
not ruby specific (Score:1)
i could go with other languages like Python, OCaml, Scheme, Perl and many others, all with far better support for higher level programming, OO and modularization than that PHP crap.
Re:lame excuse (Score:2)
I find that even if it takes a bit longer to code, you will get better results out of C/C++/ObjC. Objective C is slowly but surely gaining momentum in my programming portfolio as I think "hey, that'd be easier to do in ObjC than it would C++".
As for scripting, PHP is pretty good now. It used to be trashy, but version 4 and 5 are very nice, easy to work with, and reasonably fast. Python's not my cup of tea (little too much like Java for me, and I
Re:lame excuse (Score:2, Insightful)
"find that even if it takes a bit longer to code, you will get better results out of C/C++/ObjC."
"Inevitably, as higher level languages are written in C, you're almost always going to find that you get better performance out of a comparable C app."
ah! the "slow performance" argument, easily refutable by noticing that most performance b
Re:PHP? Switch to Python and Django (Score:2)
As for my contribution: I've worked a *miniscule* bit with Prada and I really didn't like it. But you might find it to your own liking. Different strokes for different folks I guess.
Re:PHP? Switch to Python and Django (Score:1)
Re:PHP? Switch to Python and Django (Score:1) | https://slashdot.org/story/05/08/13/2119215/which-php5-framework-is-your-favorite | CC-MAIN-2018-09 | refinedweb | 5,661 | 73.17 |
As far as Apple goes, OS 2.x doesn’t exist anymore. That much was clear from WWDC when we asked their engineers any questions about it. And as cool as 3.0 is, with all the new nifty features, the reality is that there’s still a good percentage of (mostly iPod Touch) users out there still on 2.2. We can have our cake and eat it too by targeting 2.x and still using a few select 3.0 features. But it’s more complicated than Apple made it out to be. Trouble is looming just under the surface.
Trouble With Versions
When you install the 3.0 SDK and create a new project, it will be automatically set up to build only for 3.0. To target earlier versions while still having access to 3.0 features, you need to take a few extra step. These steps are described in detail in the readme of the MailComposer sample (iPhone dev account required). My friend Serban also wrote about how to do it when you add Snow Leopard to the mix.
The required steps are:
- Under project properties, set up the Base SDK setting to be OS 3.0
- Also in project properties, change the iPhone OS Deployment Target to OS 2.2 (or whichever version you want to target).
- Go to target properties, and add a 3.0 library with the features you need. Make sure you set its type to Weak instead of Required.
- In your code, check that a feature is available before using it. For example, you can do this check to see if the in-app mail functionality is available:
Class mailClass = (NSClassFromString(@"MFMailComposeViewController")); return (mailClass != nil && [mailClass canSendMail]);
- Make sure you set your app to build with the 3.0 SDK and off you go.
If all goes well, it should run under 2.x and 3.0. If all goes well…
Trouble With Libraries
If that was the end of the story, then we would all be happy and I wouldn’t have to write this entry. And for a while I really thought that was everything I had to do to get Flower Garden to use 3.0 features and still work on 2.x devices. Everything compiled fine, but when I went to run it on a 2.2 device, it crashed.
Looking at the crash logs, it was crashing inside a static library that used Objective C and UIKit. Digging further, it seemed that function calls were being sent to the wrong place. What was going on?
At this point I realized the root of this problem was the linker flags I was using. As soon as I started using the 3.0 SDK, I had to add the -all_load linker flag in order to be able to use the static library. I believe this loads all the symbols used by the libraries and links with them at link time. Without it, the library code would crash at runtime as soon as it was executed
The -all_load flag seems fine, except that the 2.x and 3.x versions of the SDK have different libraries and resolve symbols to different locations. So by doing -all_load, we’re linking against the location of the 3.0 version and trying to run it on 2.x. Bad idea.
I thought long and hard on how to get around this. I came up with all sorts of crazy schemes, and after a couple frustrating days, I gave up. Then, all of a sudden, I realized that it had an embarrassingly simple solution: Don’t use a library! I’m not kidding. Just move the files in XCode directly into your main game target and you’re done. No -all_load and everything works fine.
Yes, I’m still embarrassed for not figuring that out after 30 seconds…
Trouble With Compilers
So all happy with that discovery, I rebuild and run the app and… crash again!
The call stack this time looked like this:
Thread 0 Crashed: 0 dyld 0x2fe01060 dyld_fatal_error + 0 1 dyld 0x2fe07ca8 dyld::bindLazySymbol(mach_header const*, unsigned long*) + 484 2 dyld 0x2fe15eb4 stub_binding_helper_interface + 12
What was going on in there? Some Googling and searching in the iPhone forum later, I learned that SDK 3.0 uses a different version of GCC (4.2 instead of 4.0). That means it will try to use some runtime functions that are not available with earlier versions. In particular, my crash was related to subtracting two uint64_t variables. Re-writing the code by casting the values to uint32_t before doing the operation fixed the problem. There’s an ugly “solution” for you!
So how do you know if something will work on 2.0? I don’t have a good answer for that other than test it as much as you can. Does someone have a better solution?
The good news is that, after I made those fixes, Flower Garden was happily running on 2.2 and 3.0. Now I can finally roll out in-app email without giving up on 2.x devices and cutting my potential customer base (or depriving current users of future updates).
Open Questions
Going through this answered a few questions, but also created a few new ones. Maybe someone here will know the answer or will be able to point me in the right direction.
- Does anyone know how to debug your OS 3.0 app on the simulator set to 2.2? Whenever I launch it from the debugger, the simulator gets set to 3.0. Even if I set it to 2.2, I wonder if it will behave the same as a 2.2 device.
- I’ve heard rumours about somehow, packing two versions of the app in the same executable (kind of like the universal MacOS executables with both a PowerPC and an Intel version). Has anyone done something like that with the iPhone? Any docs on that? | http://gamesfromwithin.com/tag/compiler | CC-MAIN-2018-47 | refinedweb | 997 | 75.81 |
Tarek Ziadé a écrit : > On Fri, Oct 9, 2009 at 2:43 PM, kiorky <kiorky at cryptelium.net> wrote: >> Hi tarek, >> >> Tarek Ziadé a écrit : >> >>> The *whole* point of Distribute 0.6.x is to be backward compatible, meaning >>> that if virtualenv switch to it, you will not even notice it. >> Living in my 0.6.x snail sandbox is not a solution. >> As it seems that Distribute 0.7 won't for a long time. >> >> "setuptools based" packages will be able to be installed via the distribute 0.6 >> branch but not compatible with "distribute based" stuff. Note that new things >> will eventually be packaged with the "new good way todo, aka with 0.7". There is >> a great risk that they can't live together aside. NOGO > > Why they can't ? As i understood all those readings, packages for 0.6 and 0.7 will be installable with the appropriate distribute version, thus side by side, but for me, they may be incompatibles together. > >> 0.7 packages wont be compatibles with setuptools installation/namespaces, so it >> will be impossible to install a lot of "setuptools based" packages aside with >> new stuff in with this way too. NOGO too. > > Why will it be impossible ? pep-0382 is not equivalent to setuptools's one for example. Can i have been certified i will not have breakages when trying to import a setuptools based namespace package from a 0.7 sharing the same namespace? > > [...] >> I appreciate what you folks are doing with the distribute sphere, i have not >> that much problems with it, but i do not support that it breaks very badly the >> retro compatibilty for all things already packaged today, today tomorrow or in >> one year. > > Again, you will be able to use 0.6 and 0.7 together. or 0.6 alone, or > 0.7 alone. > > Nothing will be broken in a distribution that uses 0.6. > > 0.6 stays maintained. As i said ealier, there will be incompatibilities at some point. And also, to use them together, what a hell. 
For package A i need 0.6 (hard requirement), for package B i need 0.7 (hard requirement), for C i need 0.6. C depend on A which depends on B. I also have no sort of control over the maintenance of those products, think that the authors are dead. So, i ll have to manually install B for A to fulfill its requirements then C will install. Deployments will be simple :) > >: <> | https://mail.python.org/pipermail/distutils-sig/2009-October/013719.html | CC-MAIN-2017-30 | refinedweb | 416 | 77.13 |
unique (<algorithm>)
Removes duplicate elements that are adjacent to each other in a specified range.
Both forms of the algorithm remove the second duplicate of a consecutive pair of equal elements.
The operation of the algorithm is stable so that the relative order of the undeleted elements is not changed.
The range referenced must be valid; all pointers must be dereferenceable and within the sequence the last position is reachable from the first by incrementation. he number of elements in the sequence is not changed by the algorithm unique and the elements beyond the end of the modified sequence are dereferenceable but not specified.
The complexity is linear, requiring (_Last – _First) – 1 comparisons.
List provides a more efficient member function unique, which may perform better.
These algorithms cannot be used on an associative container.
// alg_unique.cpp // compile with: /EHsc #include <vector> #include <algorithm> #include <functional> #include <iostream> #include <ostream> using namespace std; // Return whether modulus of elem1 is equal to modulus of elem2 bool mod_equal ( int elem1, int elem2 ) { if ( elem1 < 0 ) elem1 = - elem1; if ( elem2 < 0 ) elem2 = - elem2; return elem1 == elem2; }; int main( ) { vector <int> v1; vector <int>::iterator v1_Iter1, v1_Iter2, v1_Iter3, v1_NewEnd1, v1_NewEnd2, v1_NewEnd3; int i; for ( i = 0 ; i <= 3 ; i++ ) { v1.push_back( 5 ); v1.push_back( -5 ); } int ii; for ( ii = 0 ; ii <= 3 ; ii++ ) { v1.push_back( 4 ); } v1.push_back( 7 ); cout << "Vector v1 is ( " ; for ( v1_Iter1 = v1.begin( ) ; v1_Iter1 != v1.end( ) ; v1_Iter1++ ) cout << *v1_Iter1 << " "; cout << ")." << endl; // Remove consecutive duplicates v1_NewEnd1 = unique ( v1.begin ( ) , v1.end ( ) ); cout << "Removing adjacent duplicates from vector v1 gives\n ( " ; for ( v1_Iter1 = v1.begin( ) ; v1_Iter1 != v1_NewEnd1 ; v1_Iter1++ ) cout << *v1_Iter1 << " "; cout << ")." << endl; // Remove consecutive duplicates under the binary prediate mod_equals v1_NewEnd2 = unique ( v1.begin ( ) , v1_NewEnd1 , mod_equal ); cout << "Removing adjacent duplicates from vector v1 under the\n " << " binary predicate mod_equal gives\n ( " ; for ( v1_Iter2 = v1.begin( ) ; v1_Iter2 != v1_NewEnd2 ; v1_Iter2++ ) cout << *v1_Iter2 << " "; cout << ")." << endl; // Remove elements if preceded by an element that was greater v1_NewEnd3 = unique ( v1.begin ( ) , v1_NewEnd2, greater<int>( ) ); cout << "Removing adjacent elements satisfying the binary\n " << " predicate mod_equal from vector v1 gives ( " ; for ( v1_Iter3 = v1.begin( ) ; v1_Iter3 != v1_NewEnd3 ; v1_Iter3++ ) cout << *v1_Iter3 << " "; cout << ")." << endl; } | https://msdn.microsoft.com/en-us/library/9f5eztca(v=vs.90).aspx | CC-MAIN-2018-09 | refinedweb | 351 | 56.86 |
This series gives an advanced guide to different recurrent neural networks (RNNs). You will gain an understanding of the networks themselves, their architectures, applications, and how to bring the models to life using Keras.
In this tutorial we’ll start by looking at deep RNNs. Specifically, we’ll cover:
- The Idea: Speech Recognition
- Why an RNN?
- Introducing Depth Into the Network: Deep RNNs
- The Mathematical Notion
- Generating Music Using a Deep RNN
- Conclusion
Let’s get started!
The Idea: Speech Recognition
Speech Recognition is the identification of the text in speech by computers. Speech, as we perceive it, is sequential in nature. If you are to model a speech recognition problem in deep learning, which model do you think suits the task best?
As we'll see, it could very well be a recurrent neural network, or RNN.
Why an RNN?
Handwriting recognition is similar to speech recognition, at least with respect to the type of data involved: both are sequential. Interestingly, RNNs have generated state-of-the-art results in recognizing handwriting, as can be seen in Online Handwriting Recognition and Offline Handwriting Recognition research. RNNs have proved to generate efficient outputs with sequential data.
Thus, we might optimistically say that an RNN can best fit a speech recognition model as well.
Yet, unexpectedly, when a speech recognition RNN model was fit to the data, the results were not as promising. Deep feedforward neural networks generated better accuracy than a typical RNN did. Although RNNs fared well in handwriting recognition, they weren't considered a suitable choice for the speech recognition task.
After analyzing why the RNNs failed, researchers proposed a possible solution for attaining greater accuracy: introducing depth into the network, similar to how a deep feed-forward neural network is composed.
Introducing Depth into the Network: Deep RNNs
An RNN is deep with respect to time. But what if it’s deep with respect to space as well, as in a feed-forward network? This is the fundamental notion that has inspired researchers to explore Deep Recurrent Neural Networks, or Deep RNNs.
In a typical deep RNN, the looping operation is expanded to multiple hidden units.
An RNN can also be made deep by introducing depth to a hidden unit.
This model increases the distance traversed by a variable from time $t$ to time $t + 1$. Its hidden units can be simple RNNs, GRUs, or LSTMs. Depth helps the network model richer representations of the data, with each layer passing its hidden state(s) on to the corresponding hidden states of the subsequent layer.
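As a minimal sketch of this stacking idea in Keras (assuming TensorFlow's Keras API; the layer sizes, sequence length, and output size here are arbitrary illustration choices, not values from this tutorial), a deep RNN is built by stacking recurrent layers, where every layer except the last sets `return_sequences=True` so that it emits a hidden state at each time step for the layer above:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Two stacked recurrent layers: the first returns its hidden state at
# every time step, so the second layer receives a full sequence.
model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(50, 1)),
    LSTM(64),
    Dense(10, activation="softmax"),
])

out = model.predict(np.zeros((2, 50, 1)), verbose=0)
print(out.shape)  # (2, 10)
```

Swapping `LSTM` for `GRU` or `SimpleRNN` changes the hidden unit without changing the stacking pattern.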
The Mathematical Notion
The hidden state in a deep RNN can be given by the following equation:
$$H^{l}_t = \phi^{l}(H^{l}_{t-1} * W^{l}_{hh} + H^{l-1}_{t} * W^{l}_{xh} + b^{l}_{h})$$
Where $\phi$ is the activation function, $W$ is the weight matrix, and $b$ is the bias.
The output at a hidden state $H_t$ is given by:
$$O_t = H^{l}_{t} * W_{hy} + b_y$$
The training of a deep RNN is similar to the Backpropagation Through Time (BPTT) algorithm, as in an RNN but with additional hidden units.
Now that you’ve got an idea of what a deep RNN is, in the next section we'll build a music generator using a deep RNN and Keras.
Generating Music Using a Deep RNN
Music is the ultimate language. We have been creating and rendering beautiful melodies since time unknown. In this context, do you think a computer can generate musical notes comparable to how we (and a set of musical instruments) can?
Fortunately, by learning from the multitude of already existing compositions, a neural network does have the ability to generate a new kind of music. Generating music using computers is an exciting application of what a neural network can do. Music created by a neural network has both harmony and melody, and can even be passable as a human composition.
Before directly jumping into the code, let's first gain an understanding of the music representation that shall be harnessed to train the network.
Musical Instrument Digital Interface (MIDI)
MIDI is a series of messages that are interpreted by a MIDI instrument to render music. To meaningfully utilize the MIDI object, we shall use the music21 Python library which helps in acquiring the musical notes and understanding the musical notation.
Here’s an excerpt from a MIDI file that has been read using the music21 library:
[>]
Here:
streamis a fundamental container for music21 objects.
instrumentdefines the instrument used.
tempospecifies the speed of an underlying beat using a text string and a number.
notecontains information about the pitch, octave, and offset of the note. Pitch is the frequency, an octave is the difference in pitches, and offset refers to the location of a note.
chordis similar to a
noteobject, but with multiple pitches.
We shall use the music21 library to understand the MIDI files, train an RNN, and generate music of our own.
Deep LSTMs
From what you've learned so far, you can take a guess as to what kind of data music falls under. Because music is made of a series of notes, we can say that music is sequential data.
In this tutorial, we are going to use a Long Short-Term Memory (LSTM) network to remember the information for a long period of time, which is necessary for music generation. Since various musical notes and interactions have to be captured, we shall use a Deep LSTM in particular.
Step 1: Import the Dataset
First import the Groove MIDI dataset (download link) available under the Magenta project. It contains about 1,150 MIDI files and over 22,000 measures of drumming. Training on all of the data consumes a lot of time and system resources. Therefore, let’s import a small subset of the MIDI files.
To do so, use the
glob.glob() method on the dataset directory to filter for
.mid files.
import glob songs = glob.glob('groovemagenta/**/*.mid', recursive=True)
Let's print the length of the dataset.
len(songs)
# Output 1150
Randomly sample 200 MIDI files from the dataset.
import random songs = random.sample(songs, 200) songs[:2]
# Output ['groovemagenta/groove/drummer1/session1/55_jazz_125_fill_4-4.mid', 'groovemagenta/groove/drummer1/session1/195_reggae_78_fill_4-4.mid']
Now print the MIDI file as read by the music21 library.
!pip3 install music21 from music21 import converter file = converter.parse( "groovemagenta/groove/drummer1/session1/55_jazz_125_fill_4-4.mid" ) components = [] for element in file.recurse(): components.append(element) components
# Output [>]
Step 2: Convert MIDI to Music21 Notes
Import the required libraries.
import pickle from music21 import instrument, note, chord, stream
Next, define a function
get_notes() to fetch notes from all of the MIDI files. Load every MIDI file into a music21 stream object using the
convertor.parse() method. Use this object to get all notes and chords corresponding to the MIDI file. Further, partition the music21 object in accordance with the instruments. If there’s an instrument present, check if there are any inner sub-streams available in the first part (
parts[0] ) using the
recurse() method. Fetch all of the notes and chords, and append the string representation of the pitch of every note object to an array. If it’s a chord, append the
id of every note to a string joined by a dot character.
""" Convert mid to notes """ def get_notes(): notes = [] for file in songs: # Convert .mid file to stream object midi = converter.parse(file) notes_to_parse = [] try: # Partition stream per unique instrument parts = instrument.partitionByInstrument(midi) except: pass # If there's an instrument, enter this branch if parts: # Check if there are inner substreams available notes_to_parse = parts.parts[0].recurse() else: # A very important read-only property that returns a new Stream that has all # sub-containers “flattened” within it, that is, it returns a new Stream where # no elements nest within other elements. notes_to_parse = midi.flat.notes for element in notes_to_parse: # Extract pitch if the element is a note if isinstance(element, note.Note): notes.append(str(element.pitch)) # Append the normal form of chord(integers) to the notes list elif isinstance(element, chord.Chord): notes.append(".".join(str(n) for n in element.normalOrder)) with open("notes", "wb") as filepath: pickle.dump(notes, filepath) return notes
Step 3: Preprocess the Data
We have to now make our data compatible for network processing by mapping string-based to integer-based data.
First import the required libraries.
import numpy as np from tensorflow.keras import utils
Define the sequence length (i.e. the length of an input sequence) to be 100. This means there will be 100 notes/chords in every input sequence.
Next, map music notes to integers, and create input and output sequences. Every 101st note (a note succeeding a set of 100 notes) is taken as the output for every input to train the model. Reshape the input to a 3D array:
samples x timesteps x features.
samples specifies the number of inputs;
timesteps, the sequence length; and
features, the number of outputs at every time step.
Finally, normalize the input by dividing the input with a value corresponding to the number of notes. Now convert the output to a one-hot encoded vector.
Create a function
prep_seq() that maps inputs and outputs to their corresponding network-compatible data, as in the following code.
""" Mapping strings to real values """ def prep_seq(notes): seq_length = 100 # Remove duplicates from the notes list pitchnames = sorted(set(notes)) # A dict to map values with intgers notes_to_int = dict((pitch, n) for n, pitch in enumerate(pitchnames)) net_in = [] net_out = [] # Iterate over the notes list by selecting 100 notes every time, # and the 101st will be the sequence output for i in range(0, len(notes) - seq_length, 1): seq_in = notes[i : i + seq_length] seq_out = notes[i + seq_length] net_in.append([notes_to_int[j] for j in seq_in]) net_out.append(notes_to_int[seq_out]) number_of_patterns = len(net_in) # Reshape the input into LSTM compatible (3D) which should have samples, timesteps & features # Samples - One sequence is one sample. A batch consists of one or more samples. # Time Steps - One time step is one point of observation in the sample. # Features - One feature is one observation at a time step. net_in = np.reshape(net_in, (number_of_patterns, seq_length, 1)) # Normalize the inputs net_in = net_in / float(len(pitchnames)) # Categorize the outputs to one-hot encoded vector net_out = utils.to_categorical(net_out) return (net_in, net_out)
Step 4: Define the Model
First import the required model and layers from the
tensorflow.keras.models and
tensorflow.keras.layers libraries, respectively.
from tensorflow.keras.models import Sequential from tensorflow.keras.layers import ( Activation, LSTM, Dense, Dropout, Flatten, BatchNormalization, )
Moving onto the model's architecture, first define a sequential Keras block and append all of your layers to it.
The first two layers have to be LSTM blocks. The input shape of the first LSTM layer has to be the shape derived from the input data being sent to the function
net_arch(). Set the number of output neurons to 256 (you can use a different number here; this certainly affects the final accuracy that you would achieve).
return_sequences in the LSTM block is set to
True since the first layer's output is passed to the subsequent LSTM layer.
return_sequences makes sure to maintain the shape of the data as-is, without removing the sequence length attribute, which would otherwise be ignored.
recurrent_dropout is set to 0.3 to ignore 30% of the nodes used while updating the LSTM memory cells.
Next, append a
BatchNormalization layer to normalize the inputs by recentering and rescaling the network data for each mini batch. This produces the regularization effect and enforces the usage of fewer training epochs, as it reduces the interdependency between layers.
Append a
Dropout layer to regularize the output and prevent overfitting during the training phase. Let 0.3 be the percentage of dropout. 0.3 here means that 30% of the input neurons would be nullified.
Append a
Dense layer comprised of 128 neurons, where every input node is connected to an output node.
Now add ReLU activation. Further, perform batch normalization and add dropout. Then map the previous dense layer outputs to a dense layer comprised of nodes corresponding to the number of notes. Add a softmax activation to generate the final probabilities for every note during the prediction phase.
Compile the model with
categorical_crossentropy loss and
adam optimizer.
Finally, define the model's architecture in a function
net_arch() that accepts the model’s input data and output data length as parameters.
Add the following code to define your network architecture.
""" Network Architecture """ def net_arch(net_in, n_vocab): model = Sequential() # 256 - dimensionality of the output space model.add( LSTM( 256, input_shape=net_in.shape[1:], return_sequences=True, recurrent_dropout=0.3, ) ) model.add(LSTM(256)) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(128)) model.add(Activation("relu")) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(n_vocab)) model.add(Activation("softmax")) model.compile(loss="categorical_crossentropy", optimizer="adam") return model
Step 5: Train Your Model
It’s always good practice to save the model’s weights during the training phase. This way you needn’t train the model again and again, each time you change something.
Add the following code to checkpoint your model's weights.
from tensorflow.keras.callbacks import ModelCheckpoint def train(model, net_in, net_out, epochs): filepath = "weights.best.music3.hdf5" checkpoint = ModelCheckpoint( filepath, monitor="loss", verbose=0, save_best_only=True ) model.fit(net_in, net_out, epochs=epochs, batch_size=32, callbacks=[checkpoint])
The
model.fit() method is used to fit the model onto the training data, where the batch size is defined to be 32.
Fetch the music notes, prepare your data, initialize your model, define the training parameters, and call the
train() method to start training your model.
epochs = 30 notes = get_notes() n_vocab = len(set(notes)) net_in, net_out = prep_seq(notes) model = net_arch(net_in, n_vocab) train(model, net_in, net_out, epochs)
# Output Epoch 1/30 2112/2112 [==============================] - 694s 329ms/step - loss: 3.5335 Epoch 2/30 2112/2112 [==============================] - 696s 330ms/step - loss: 3.2389 Epoch 3/30 2112/2112 [==============================] - 714s 338ms/step - loss: 3.2018 Epoch 4/30 2112/2112 [==============================] - 706s 334ms/step - loss: 3.1599 Epoch 5/30 2112/2112 [==============================] - 704s 333ms/step - loss: 3.0997 Epoch 6/30 2112/2112 [==============================] - 719s 340ms/step - loss: 3.0741 Epoch 7/30 2112/2112 [==============================] - 717s 339ms/step - loss: 3.0482 Epoch 8/30 2112/2112 [==============================] - 733s 347ms/step - loss: 3.0251 Epoch 9/30 2112/2112 [==============================] - 701s 332ms/step - loss: 2.9777 Epoch 10/30 2112/2112 [==============================] - 707s 335ms/step - loss: 2.9390 Epoch 11/30 2112/2112 [==============================] - 708s 335ms/step - loss: 2.8909 Epoch 12/30 2112/2112 [==============================] - 720s 341ms/step - loss: 2.8442 Epoch 13/30 2112/2112 [==============================] - 711s 337ms/step - loss: 2.8076 Epoch 14/30 2112/2112 [==============================] - 728s 345ms/step - loss: 2.7724 Epoch 15/30 2112/2112 [==============================] - 738s 350ms/step - loss: 2.7383 Epoch 16/30 2112/2112 [==============================] - 736s 349ms/step - loss: 2.7065 Epoch 17/30 2112/2112 [==============================] - 740s 350ms/step - loss: 2.6745 Epoch 18/30 2112/2112 [==============================] - 795s 376ms/step - loss: 2.6366 Epoch 19/30 2112/2112 [==============================] - 808s 383ms/step - loss: 2.6043 Epoch 20/30 2112/2112 [==============================] - 724s 343ms/step - loss: 2.5665 Epoch 21/30 2112/2112 [==============================] - 726s 344ms/step - loss: 2.5252 Epoch 22/30 2112/2112 [==============================] - 720s 341ms/step - loss: 2.4909 Epoch 23/30 2112/2112 [==============================] - 753s 
357ms/step - loss: 2.4574 Epoch 24/30 2112/2112 [==============================] - 807s 382ms/step - loss: 2.4170 Epoch 25/30 2112/2112 [==============================] - 828s 392ms/step - loss: 2.3848 Epoch 26/30 2112/2112 [==============================] - 833s 394ms/step - loss: 2.3528 Epoch 27/30 2112/2112 [==============================] - 825s 391ms/step - loss: 2.3190 Epoch 28/30 2112/2112 [==============================] - 805s 381ms/step - loss: 2.2915 Epoch 29/30 2112/2112 [==============================] - 812s 384ms/step - loss: 2.2632 Epoch 30/30 2112/2112 [==============================] - 816s 386ms/step - loss: 2.2330
Step 6: Generate Predictions from the Model
This step consists of the following sub-steps:
- Fetch the already stored music notes
- Generate input sequences
- Normalize the input sequences
- Define the model by passing the new input
- Load the previous weights that were stored while training the model
- Predict the output for a randomly chosen input sequence
- Convert the output to MIDI and save it!
Create a function
generate() to accommodate all of these steps.
""" Generate a Prediction """ def generate(): # Load music notes with open("notes", "rb") as filepath: notes = pickle.load(filepath) pitchnames = sorted(set(notes)) n_vocab = len(pitchnames) print("Start music generation.") net_in = get_inputseq(notes, pitchnames, n_vocab) normalized_in = np.reshape(net_in, (len(net_in), 100, 1)) normalized_in = normalized_in / float(n_vocab) model = net_arch(normalized_in, n_vocab) model.load_weights("weights.best.music3.hdf5") prediction_output = generate_notes(model, net_in, pitchnames, n_vocab) create_midi(prediction_output)
The
get_inputseq() function returns a group of input sequences. This has already been explored in Step 3 above.
""" Generate input sequences """ def get_inputseq(notes, pitchnames, n_vocab): note_to_int = dict((pitch, number) for number, pitch in enumerate(pitchnames)) sequence_length = 100 network_input = [] for i in range(0, len(notes) - sequence_length, 1): sequence_in = notes[i : i + sequence_length] network_input.append([note_to_int[char] for char in sequence_in]) return network_input
You can now generate 500 notes from a randomly chosen input sequence. Choose the highest probability returned by the softmax activation and reverse-map the encoded number with its corresponding note. Append the output of the already-trained character at the end of the input sequence, and start your next iteration from the subsequent character. This way we generate 500 notes by moving from one character to the next throughout the input sequence.
""" Predict the notes from a random input sequence """ def generate_notes(model, net_in, notesnames, n_vocab): start = np.random.randint(0, len(net_in) - 1) int_to_note = dict((number, note) for number, note in enumerate(notesnames)) pattern = net_in[start] prediction_output = [] print("Generating notes") # Generate 500 notes for note_index in range(500): prediction_input = np.reshape(pattern, (1, len(pattern), 1)) prediction_input = prediction_input / float(n_vocab) prediction = model.predict(prediction_input, verbose=0) index = np.argmax(prediction) result = int_to_note[index] prediction_output.append(result) # Add the generated index of the character, # and proceed by not considering the first char in each iteration pattern.append(index) pattern = pattern[1 : len(pattern)] print("Notes generated") return prediction_output
Finally, convert the music notes to MIDI format.
Create a function
create_midi() and pass the predicted encoded output as an argument to it. Determine if the note is a note or a chord, and appropriately generate the
note and
chord objects respectively. If a chord, split it up into an array of notes and create a note object for every item. Append the set of notes to a
chord object and an output array. If a note, simply create a
note object and append it to the output array.
In the end, save the MIDI output to a
test_output.mid file.
""" Convert the notes to MIDI format """ def create_midi(prediction_output): offset = 0 output_notes = [] # Create Note and Chord objects for pattern in prediction_output: # Chord if ("." in pattern) or pattern.isdigit(): notes_in_chord = pattern.split(".") notes = [] for current_note in notes_in_chord: new_note = note.Note(int(current_note)) new_note.storedInstrument = instrument.Piano() notes.append(new_note) new_chord = chord.Chord(notes) new_chord.offset = offset output_notes.append(new_chord) # Note else: new_note = note.Note(pattern) new_note.offset = offset new_note.storedInstrument = instrument.Piano() output_notes.append(new_note) # Increase offset so that notes do not get clumsy offset += 0.5 midi_stream = stream.Stream(output_notes) print("MIDI file save") midi_stream.write("midi", fp="test_output.mid")
Now call the
generate() function.
generate()
You can play the generated mid file by uploading it into a MIDI player.
This SoundCloud tune was generated by training the model for 30 epochs on an NVIDIA K80 GPU for about 6 hours.
Conclusion
In this tutorial, you’ve learned what a Deep RNN is and how it’s preferable to a basic RNN. You’ve also trained a Deep LSTM model to generate music.
The Deep RNN (LSTM) model generated a pretty passable tune out of its learnings, though the music might seem monotonous owing to the lesser number of epochs. I would say that it's a good start!
To attain better notes and chords, I recommend you play around and modify the network's parameters. You can tune the hyperparameters, increase the number of epochs, and modify the depth of the network to check if you’re able to generate better results.
In the next part of this series, we'll be covering encoder-decoder sequence-to-sequences models.
References
Skulder Classical-Piano-Composer
Add speed and simplicity to your Machine Learning workflow today | https://blog.paperspace.com/advanced-recurrent-neural-networks-deep-rnns/ | CC-MAIN-2022-21 | refinedweb | 3,425 | 57.37 |
I’m new to pandas & numpy. I’m running a simple program
labels = ['a','b','c','d','e'] s = Series(randn(5),index=labels) print(s)
getting the following error
s = Series(randn(5),index=labels) File "C:\Python27\lib\site-packages\pandas\core\series.py", line 243, in __init__ raise_cast_failure=True) File "C:\Python27\lib\site-packages\pandas\core\series.py", line 2950, in _sanitize_array raise Exception('Data must be 1-dimensional') Exception: Data must be 1-dimensional
Any idea what can be the issue? I’m trying this using eclipse, not using ipython notebook.
Kenil Vasani
I suspect you have your imports wrong
If you add this to your code
It runs fine.
That said, and as pointed out by @jezrael, it’s better practice to import the the modules rather than pollute the namespace.
Your code should look like this instead.
solution | https://www.errorcorner.com/question/pandas-series-getting-data-must-be-1-dimensional-error/ | CC-MAIN-2021-04 | refinedweb | 145 | 57.77 |
Version: 0.91
Contact: Jack Ozzie, George Moromisato, and Paresh Suthar, Microsoft Corporation
Updated: 01/12/06
Change log
The objective of Simple Sharing Extensions (SSE) is to define the minimum extensions necessary to enable loosely-cooperating apps
Simple Sharing extends the Really Simple Syndication (RSS) 2.0 and Outline Processor Markup Language (OPML) 1.0 specifications:
The XML namespace URI for the XML data format described in this specification is:
In this spec, the prefix "sx:" is used for the namespace URI identified above. For example:
<rss version=2.0 xmlns:
In this document, the key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in RFC 2119.
The term item denotes a typed data entity and the basic unit of sharing and synchronization.
The term endpoint denotes an entity that participates in the synchronization of shared items with other endpoints. An endpoint can act as a publisher, a subscriber or both.
An endpoint's item set is a complete set of related items as determined by the endpoint.
A feed is an addressable collection of items in a endpoint's item set. The feed may be partial (only the items that have changed within a given time window) or complete (all of the items in the endpoint's item set are contained in the feed as of its publishing).
A subscription is a unidirectional relationship between two endpoints where one endpoint acting as a subscriber pulls feeds from the other endpoint, which acts as a publisher.
The term updated is used to include items created, changed or deleted.
All date-time values MUST be expressed as GMT and MUST conform to the Date and Time specification of RFC 822, except using 4 digit years (consistent with RSS 2.0 specification)
All endpoint identifiers MUST conform to the syntax for Namespace Specific Strings (the NSS portion of a URN) in RFC 2141.
Imagine two loosely coupled endpoints, A and B, that wish to share and co-edit a set of independent items in an RSS feed. The two endpoints can use RSS+SSE to replicate the set. The process would look like this:
The extensions described in the Simple Sharing Extensions).
The sx:sharing element is optional, in which case defaults are assumed. When sx:sharing is present, it contains all of the top-level properties required by SSE, including:
The sx:related element is optional, but when present contains information about related feeds.
2.2.1 INCREMENTAL FEEDS and COMPLETE FEEDS
Publishers will generally include, in a feed, only the most recent modifications, additions, and deletions within some reasonable time window. These feeds are referred to herein as partial feeds, whereas feeds containing the complete set of items are referred to as complete feeds.
In the feed sharing context, new subscribers will need to initially copy a complete set of items from a publisher before being in a position to process incremental updates. As such, this spec provides for the ability for the latter feed to reference the complete feed. By placing the link to this feed in the channel descriptor, only the partial feed URL need to be distributed to potential subscribers.
2.2.2 AGGREGATED FEEDS
In the case where a publisher has aggregated information from other feeds into a larger work, it can be useful for subscribers to see more detailed information about the other source feeds. In the case of feed sharing as envisioned by this specification, this feature can also be used to notify subscribing feeds of the feeds of other participants which they might also wish to subscribe to.
EXAMPLE
<channel> <sx:sharing <sx:related <sx:related <sx:related </sx:sharing> ...</channel>
The most important extension described in this spec is the sx:sync element, which contains the information required for synchronization. This element is a child of the item or outline element. This is a REQUIRED element of all items in all feeds wishing to participate in SSE-based replication.
A date-time attribute. This is the date-time when the most recent modification took place. If this attribute is omitted the value defaults to the earliest time representable in RFC 822.
Note: Either or both of the when or by attributes MUST be present; it is invalid to have neither.
A string attribute. A text attribute identifying the unique endpoint that made the most recent).
Note: Either or both of the when or by MUST be present; it is invalid to have neither.
The sx:sync element MUST contain a sx:history element that contains the chronological history for the item.
A date-time attribute. This is the date-time when the modification took place. If this attribute is omitted the value defaults to the earliest time representable in RFC 822.
A string attribute. A text attribute identifying the unique endpoint that made a).
The sx:history element MAY contain one or more sx:update sub-elements, each of which represents a distinct update to the item.
To assure consistency, incoming (subscribed) feed items are processed one at a time, and are compared with already-known items whose state is maintained locally (and potentially published to others on an outbound feed). When merging such changed items or outline elements into a the local feed, processing behavior differs for "unordered" vs. "ordered" feeds.
In unordered feeds, changes are processed one at a time, and the local copy of an item is generally looked-up by using the id attribute from the sx:sync element of the incoming item. The two are then compared, and a "winner" and a "loser" are declared, and the local item is updated appropriately.
In ordered feeds, changes MUST be processed such that the order of items is preserved. When the position of an item is modified, the publishers and subscribers MUST follow these rules:
When creating a new item as the result of a local user operation, endpoints MUST follow this algorithm:
Here is what an item would look like after being created:
<item> <description>This is a test item</description> <sx:sync <sx:history </sx:sync></item>
When incorporating endpoints' item updates from other feeds into the current feed, items are copied leaving sx:sync element completely intact. However, when creating, updating or deleting an item as the result of a local user operation, endpoints MUST follow this algorithm:
EXAMPLES
Here is what the same item in the above example would look like after having been updated three more times in sequence, once a day, by the same endpoint:
<item> <description>This is a test item</description> <sx:sync <sx:history <sx:update <sx:update <sx:update </sx:history> </sx:sync></item>
Here is what the same item would look like after having been updated two more times in sequence, once a day, by the same endpoint, but after the particular implementation only maintains a limited number of versions (4):
<item> <description>This is a test item</description> <sx:sync <sx:history <sx:update <sx:update <sx:update </sx:history> </sx:sync></item>
When a subscriber reads a feed, it MUST determine if others' modifications to items are more recent than the subscriber's own modifications. For example, imagine A, B and C mutually publish/subscribe, and all three modify item #1. When each reads the others' feed, all implementations MUST detect which modification will "win" and which will "lose," consistently.
Feed items are processed one at a time, and are compared with already-known items whose state is maintained locally (and potentially published to others on an outbound feed).
To ensure global consistency, all implementations MUST use this algorithm when comparing an incoming item from a publisher:
In order to detect conflicts, endpoints MUST follow this algorithm on the modification history of two updated items:
After comparison of all incoming items, all implementations for hierarchical data (e.g. OPML) MUST check if the results of comparison have caused an item to indirectly become a parent of itself. This is called cycle detection.
In order to detect cycles, endpoints MUST follow this algorithm:
A user may want to "resolve" a conflict, where "resolve" means take an action resulting in a further update incrementing the version and removing the "conflict" attribute from sx:sync.
This spec allows for the saving of the actual contents of previous versions of the item attribute, so they can be presented to the user. This is done by adding and maintaining one or more optional sx:conflict sub-elements of an sx:conflicts element. One sx:conflict element is maintained for each entity who might've conflicted that is, for each unique by specified in sx:update (or in sx:endpoint). Any number of sx:conflict elements may exist without a by value.
The sx:conflicts element MUST contain one or more sx:conflict sub-elements.
An optional, date-time attribute. This is the date-time when the conflicting modification took place. See sx:update for format guidance.
An optional, string attribute. This text attribute identifies the endpoint that made the conflicting modification. It is used and compared programmatically. See sx:update for format guidance.
When a conflict happens, a sx:conflict sub-element is created by duplicating information from the loser's sx:history element into the newly-created sx:conflict element. Thus, the original SSE data of the losing modification is preserved. It is up to the application that is creating the sx:conflict element to provide any relevant conflict data as a child element(s).
When creating an sx:conflict sub-element, we always seek to maintain only a single such element for a given by, of the latest version. Meaning, if there is already such an element with the same by value but a lower version number, that sub-element is replaced with the new one.
If no by value exists for the either element, then comparison of the version and when attributes should occur. Meaning, if there is already an element with no by value, but a lower version number, that sub-element is replaced with the new one. In the event that version numbers are equal, if the existing element has lower when date, that sub-element is replaced with the new one.
Note that sx:conflict elements are always preserved as a flat, unordered list.
<item> <description>This is a test item</description> <sx:sync <sx:history <sx:update </sx:history> <sx:conflicts> <sx:conflict <!--conflict data here --> </sx:conflict> </sx:conflicts> </sx:sync></item>
Microsoft's copyrights in this specification are licensed under the Creative Commons Attribution-ShareAlike License (version 2.5). To view a copy of this license, please visit. As to software implementations, Microsoft is not aware of any patent claims it owns or controls that would be necessarily infringed by a software implementation that conforms to the specification's extensions. If Microsoft later becomes aware of any such necessary patent claims, Microsoft also agrees to offer a royalty-free patent license on reasonable and non-discriminatory terms and conditions to any such patent claims for the purpose of publishing and consuming the extensions set out in the specification. | http://msdn.microsoft.com/en-us/xml/bb190613.aspx | crawl-002 | refinedweb | 1,860 | 50.67 |
The Boundary Extractor vs. the Regular Expression Extractor in JMeter
A relatively short time has passed since Apache JMeter™ 4.0 was released. The new JMeter introduces many useful improvements and features (check “What’s New in JMeter 4.0?” for details). In this article we will focus on the Boundary Extractor element, a new member of the JMeter Post Processors family. In particular, we will demonstrate how to use the Boundary Extractor and compare it with the Regular Expression Extractor, one of the best known and most popular Post Processors. Also, don't miss the pros and cons table at the end.
Preparation Steps
The service used in this tutorial is available in the GitHub repository called Bookshelf. To run this service we need .Net Core SDK 2.1 or higher. Please download and install it from the .Net Core official website if you don’t have it already.
After cloning the project locally, open the project folder in a command line tool and run “dotnet run” to start the corresponding service. Those who are interested in more information about the dotnet cli tool can find more details here.
The dotnet run result should look similar to this:
After the application has started we are ready to start testing!
How to Use the Boundary Extractor in JMeter
The Boundary Extractor, as mentioned above, is a post processing component introduced in JMeter 4.0. When learning about a new component, it makes sense to start with the official documentation. Let’s go through all main components of the Boundary Extractor and see what it does.
According to the official documentation it extracts values from a server response using left and right boundaries. Let’s get down to the tutorial to see what this means exactly.
The first two sections don’t contain anything revolutionary and are the same for practically all the extractors. The first one is to set the name and description of the extractor. The second is for use in samplers that can generate sub-samples like HTTP Sampler and Mail Reader.
- The “Field to check” section is also not a new one and is quite self explanatory. It allows you to set which part of a response should be assessed.
- “Name of created variable” indicates the key that will be used to store the parsed value into vars predefined variables. More details about vars can be found in the official documentation.
- The “Left boundary” field is unique for the Boundary Extractor. It allows you to set a left boundary value for the string being verified.
- “Right boundary” does the same as the previous field but from the right side.
- “Match No.” allows you to set an index for the found value if many values were found by the given criteria. The index starts from 1, as 0 is a reserved value.
- “Default Value” is for setting a default value for the variable if nothing was found by the given criteria.
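The mechanics behind these fields can be sketched in plain Java (JMeter itself is written in Java). The response body, boundary strings and default value below are made up for illustration, and the real component handles more cases (sub-samples, scoping, match number 0, and so on).

```java
public class BoundarySketch {
    // Returns the n-th (1-based) substring found between the two
    // boundaries, or the default value when nothing matches.
    static String extract(String response, String left, String right,
                          int matchNo, String defaultValue) {
        int from = 0;
        for (int i = 0; i < matchNo; i++) {
            int start = response.indexOf(left, from);
            if (start < 0) return defaultValue;
            start += left.length();
            int end = response.indexOf(right, start);
            if (end < 0) return defaultValue;
            if (i == matchNo - 1) return response.substring(start, end);
            from = end + right.length();
        }
        return defaultValue;
    }

    public static void main(String[] args) {
        // Hypothetical service response
        String body = "[{\"author\":\"Jane Doe\"},{\"author\":\"John Smith\"}]";
        System.out.println(extract(body, "\"author\":\"", "\"", 1, "NOT_FOUND"));
        System.out.println(extract(body, "\"author\":\"", "\"", 2, "NOT_FOUND"));
        System.out.println(extract(body, "\"title\":\"", "\"", 1, "NOT_FOUND"));
    }
}
```

Running this prints the first author, the second author, and the default value for the boundary that does not match.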
Let’s create a project that will utilize the Boundary Extractor to parse a response. The service under the test will be the Bookshelf service mentioned in the previous paragraph.
1. First, add a Thread Group.
2. Add an HTTP Request sampler to the newly created Thread Group.
3. Configure the HTTP Request Sampler to send a request to the Bookshelf service.
If you run the Bookshelf service from github without modification, then the HTTP Request configuration should be the following:
- Server Name or IP - localhost
- Port Number - 60657
- Path - /api/books
That should send an HTTP request to the service we are testing.
4. Add a Boundary Extractor to parse values from the service response.
5. We will now configure the component we added in the previous step to parse a book’s author out of the response. Here is the expected response from the Bookshelf service:
Let’s configure the extractor to parse an author’s name and store it in the author variable.
Since the expected data is surrounded by the string "author":" on the left and " on the right, we should put these values as the left and right boundaries in the Boundary Extractor configuration.
Here we’ve also added a variable name and match index, so that we can easily access the first parsed value later.
6. Let’s make sure that we have actually parsed something out of the response. Let’s retrieve the “author” variable and log it in the log viewer panel.
To do that, add the JSR223 PostProcessor.
And add the following script into the script section:
def author = vars.get("author")
log.info("Author from the response is ${author}!!!")
After that, the JSR223 PostProcessor should look like in the screenshot below:
7. Let’s run the test and make sure that the author variable is not empty. Open the log viewer by pressing the Options => Log Viewer menu item.
8. Run the test and make sure that there is an author name in the logs.
Let’s sum up what was done in this section. We’ve built a test project that uses the Boundary Extractor to parse a response from a service. What could be confusing is that JMeter already has plenty of components that allow you to fetch data from a response.
In the next section we are going to compare the Boundary Extractor with the arguably most powerful value extractor components - the Regular Expression Extractor.
The Boundary Extractor vs the Regular Expression Extractor
This section is by no means a full and comprehensive description of Regular Expression Extractor. There is another article that is dedicated to the Regular Expression Extractor itself.
This section is only a comparison between the Boundary Extractor and the Regular Expression Extractor. To start out the comparison, let’s see how the Regular Expression Extractor performs within the same scenario as before. Then, we will compare the source code, KPIs and finish off with a list of pros and cons for each one.
Running the Regular Expression Extractor
1. Let’s add a Regular Expression Extractor to the project we created earlier.
2. The fully configured Regular Expression Extractor for this test, which is configured to run the same scenario we ran with the Boundary Extractor, should look like in the screenshot below.
As promised earlier, we won’t go into details about how any of these values operate. The only thing worth mentioning here is that the result will be parsed with the regular expression "author":"([\w ]*)" and the result will be stored into the author variable as it was before.
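The same parse can be reproduced with plain Java regular expressions. Note this is only a sketch of the idea (JMeter's extractor uses a Perl5-flavoured matcher internally), and the response body here is made up for illustration.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexSketch {
    // Apply the same pattern the extractor was configured with and
    // return the first captured group, or a default value.
    static String extractAuthor(String body) {
        Matcher m = Pattern.compile("\"author\":\"([\\w ]*)\"").matcher(body);
        return m.find() ? m.group(1) : "NOT_FOUND";
    }

    public static void main(String[] args) {
        // Hypothetical service response
        String body = "[{\"author\":\"Jane Doe\",\"title\":\"A Book\"}]";
        System.out.println(extractAuthor(body));
    }
}
```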
Let’s disable the Boundary Extractor component temporarily and make sure that the regular expression extractor works as it was intended.
After running the test the log window should contain an author name fetched from the response.
As we found out, the Boundary Extractor and Regular Expression Extractor can solve the same problems. What are their differences then?
Let’s dig deeper to find the answer. How deep? To the source code!
Comparing the Source Code of the Boundary and Regular Expression Extractors
JMeter is an open source project, so it is possible to download the source code and look inside the extractors we are comparing. Here, the code from JMeter project’s github mirror was used. So, let’s look inside.
The workhorse of the Boundary Extractor is the String.indexOf method. The Regular Expression Extractor, on the other hand, utilizes a Perl5Matcher class. It is easy to find multiple studies on the net indicating that String.indexOf is much faster than regular expressions. On the other hand, indexOf does not support searching by pattern. If a test requires searching with a complex pattern, the Regular Expression Extractor would be the component of choice.
Comparing the KPIs of the Boundary Extractor and the Regular Expression Extractor
Before moving on we can actually try to perform a rough comparison of the Boundary and Regular Expression Extractors. JMeter has an Aggregate Report component that provides various test metrics such as minimum processing time, average time and so on. Let’s use it to measure a difference between the tested components.
Add an Aggregate Report component to the test Thread Group.
Set the number of users to 100, to send multiple parallel requests to the service at the same time.
Disable the Boundary Extractor from the test project and enable the Regular Expression Extractor.
Take a look at the Aggregate Report component. In my system the numbers are more or less close to what is in the screenshot below.
Now, enable the Boundary Extractor and disable the Regular Expression Extractor.
Clear the aggregate report window, run the test and check the report. Now the picture is a bit different.
Though the comparison technique used is not fully precise, multiple run cycles showed that the Boundary Extractor is faster than the Regular Expression Extractor.
So, to put what was said above into a pros/cons list, we get the following picture:
Boundary Extractor vs. Regular Expression Extractor Pros and Cons Table
To learn more advanced JMeter, go to our free JMeter academy.
Making Your Load Testing Easier with BlazeMeter
If your load tests include extracting and you need to scale them, or if you want to share your test results with managers, you might want to consider running your test in BlazeMeter.
After creating your JMX file, upload it to BlazeMeter and run your tests. With BlazeMeter, you will be able to collaborate on your tests and reports with team members and managers, scale to more users from more geo-locations, and get advanced reports.
Summary
This blog post has demonstrated the workings of the Boundary Extractor, a new post processing component introduced in JMeter 4.0. We also performed a basic comparison of the Boundary Extractor with the Regular Expression Extractor.
I hope this will be helpful next time that you’re wondering which extractor to choose for parsing test results.
To try out BlazeMeter, request a demo, or put your URL in the box below and your test will start in minutes. | https://www.blazemeter.com/blog/the-boundary-extractor-vs-the-regular-expression-extractor-in-jmeter/ | CC-MAIN-2021-04 | refinedweb | 1,675 | 64 |
Azure Point to Site VPN - private access to your cloud environment
You don't really have to worry about connectivity when you have a single in-house data center. All your proprietary data is on "your" network that you manage. Your firewall protects your sensitive information from internet intruders. The internal network provides routing and name lookup services.
Azure a Cloud Provider
Cloud providers give you the ability to spin up off-site data centers that are visible and reachable from the internet. The actual remote data center organization and configuration is somewhat opaque to you, since it is managed and controlled by the cloud provider. The cloud provides internal network connectivity between your virtual hosts. PaaS services handle internal cloud-specific configuration and in-cloud machine-to-machine connectivity, usually through proprietary APIs. Public endpoints are usually web services or web sites.
There is limited management connectivity from the outside. Allowing management access via the public internet increases the number of attack vectors and odds of system intrusion.
Azure users get around the remote nature of the environment by enabling Remote Desktop Protocol access to their Windows machines. This lets any machine on the internet RDP into the Azure machines as long as it can provide the right credentials. This is an obvious security risk. Azure users add remote PowerShell, remote profiling, remote management and non-web access by enabling those additional public endpoints. This increases the number of attack vectors against the Azure machines. We want to minimize the number of ports/services that are visible to the internet while providing as much corporate/owner and operational access as possible.
Corporate Azure users get around the remote nature by extending a site-to-site VPN tunnel that joins the cloud and the internal corporate network. Some companies will not allow this type of network configuration because they are worried about the bi-directional open nature of the connection. Site-to-site has additional issues, like the fact that it does not help off-site developers and operational folks, because only one site-to-site connection is supported.
Note that this picture shows an Azure DNS service. This is used for internal name service for processes that run inside your Azure account. They could always connect by IP address if Azure DNS has not been set up. Azure DNS is not required for external access. That all happens on cloudapp.net and is supported by the cloudapp.net DNS servers.
See this document for a description of how to set up Azure point to site VPN networks and connections.
Network Organization as a First Step, VLANs
Azure users must first organize their networks in a way that makes it possible to provide both public access to web sites and services while making it also possible to provide secure non-public access to the management functions. The default Azure configuration throws all VMs and Cloud Services into a single large network.
The first step is to create a Virtual LAN (VLAN). VLANs provide a way to organize a network into different segments and are the base-level network construct for creating subnets, including remote connection tunnels (VPNs). Pretty much everything you previously created will have to be deleted and recreated sitting on a VLAN instead of your default Azure network. You will want to subdivide the VLAN namespace into subnets. Normally they are just syntactic sugar, because Azure doesn't support firewalls or filtering between subnets. They are important to the point-to-site VPN because Azure VPN will use one of the subnets as the landing zone for VPN connections.
Initial Azure Configuration, Preparing for VPN
Azure VPN requires a VLAN subnet of its own that acts as a type of DMZ network between your on-prem network and the Azure VLAN that you configured. Systems administrators use the Azure Management Portal to create the external connectivity subnet. It allows you to pick the number of addresses on the VPN subnet. Each connected VPN client consumes an address on this VPN subnet. The VPN public gateway also takes two addresses. You should size appropriately for the number of point-to-site connections that you expect.
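A quick way to sanity-check the subnet size you pick is to count usable addresses and subtract the two the gateway takes, per the description above. The exact number of addresses Azure reserves per subnet is an assumption here (only network/broadcast and the two gateway addresses are excluded), so treat the result as a rough estimate.

```python
import ipaddress

GATEWAY_ADDRESSES = 2  # addresses taken by the VPN gateway, per the text above

def max_p2s_clients(cidr: str) -> int:
    """Rough count of point-to-site client slots in a gateway subnet."""
    net = ipaddress.ip_network(cidr)
    # num_addresses includes the network and broadcast addresses; exclude them.
    usable = net.num_addresses - 2
    return max(usable - GATEWAY_ADDRESSES, 0)

# Hypothetical candidate subnets
for cidr in ("10.1.255.0/29", "10.1.255.0/28", "10.1.255.0/27"):
    print(cidr, "->", max_p2s_clients(cidr), "client slots")
```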
The second piece of the VPN connection is the creation of the actual public IP gateway for the VPN connection subnet. This is another option on the Azure Management Portal. Microsoft creates a public IP address and creates internal routing from that address to the VPN subnet. It also creates VPN virtual appliances that sit behind that VPN public IP address.
The VPN is protected by certificates. I'm not going to go into details here. You can use Microsoft MSDN point-to-site vpn configuration page for this information
Azure Point-to-Site Remote Back Channel Connectivity
A VPN connection makes the local machine part of a remote connection using a secure tunnel. Microsoft provides a VPN client side configuration program that is customized to a specific Azure VPN public address and network. This program is dynamically created in the Azure Management Portal and makes use of built-in Windows 7 / 8 VPN capabilities.
A point-to-site VPN connection creates an additional network on the local machine that is part of the same network defined in the VPN connection subnet. Essentially it puts a machine "on the network" in the VPN subnet portion of your VLAN. This means the local machine has access to all private resources on the VLAN. The local user can connect to non-internet-public ports that are unavailable to others outside the Azure cloud. This is because the local machine is part of the Azure network when connected to the VPN tunnel.
The default Windows VPN configuration does not isolate the local machine to the VPN network. It leaves all the other network connections active. This means a point-to-site connected machine has access to the internal corporate network, the Azure VLAN and the internet. Users must keep anti-virus, rootkit and malware protection software up to date to stop attackers from attacking the Azure or corporate network through the local machine. It is possible to disable all other network connections while attached to the VPN. You see this a lot with VPN connections that are restricted due to corporate policies.
Conclusion
<to be written>
Thanks
C++ exceptions under the hood 4: catching what you throwPosted: February 26, 2013 Filed under: Exceptions Leave a comment
In this series about exception handling, we have discovered quite a bit about exception throwing by looking at compiler and linker errors but we have so far not learned anything yet about exception catching. Let’s sum up the few things we learned about exception throwing:
- A throw statement will be translated by the compiler into two calls, __cxa_allocate_exception and __cxa_throw.
- __cxa_allocate_exception and __cxa_throw “live” in libstdc++
- __cxa_allocate_exception will allocate memory for the new exception.
- __cxa_throw will prepare a bunch of stuff and forward this exception to _Unwind_, a set of functions that live in libgcc and perform the real stack unwinding (the ABI defines the interface for these functions).
Quite simple so far, but exception catching is a bit more complicated, specially because it requires certain degree of reflexion (that is, the ability of a program to analyze its own source code). Let’s keep on trying our same old method, let’s add some catch statements throughout our code, compile it and see what happens:
#include "throw.h" #include <stdio.h> // Notice we're adding a second exception type struct Fake_Exception {}; void raise() { throw Exception(); } // We will analyze what happens if a try block doesn't catch an exception void try_but_dont_catch() { try { raise(); } catch(Fake_Exception&) { printf("Running try_but_dont_catch::catch(Fake_Exception)\n"); } printf("try_but_dont_catch handled an exception and resumed execution"); } // And also what happens when it does void catchit() { try { try_but_dont_catch(); } catch(Exception&) { printf("Running try_but_dont_catch::catch(Exception)\n"); } catch(Fake_Exception&) { printf("Running try_but_dont_catch::catch(Fake_Exception)\n"); } printf("catchit handled an exception and resumed execution"); } extern "C" { void seppuku() { catchit(); } }
Note: You can download the full source code for this project in my github repo.
Just like before, we have our seppuku function linking the C world with the C++ world, only this time we have added some more function calls to make our stack more interesting, plus we have added a bunch of try/catch blocks so we can analyze how does libstdc++ handles them.
And just like before, we get some linker errors about missing ABI functions:
> g++ -c -o throw.o -O0 -ggdb throw.cpp
> gcc main.o throw.o mycppabi.o -O0 -ggdb -o app
throw.o: In function `try_but_dont_catch()':
throw.cpp:12: undefined reference to `__cxa_begin_catch'
throw.cpp:12: undefined reference to `__cxa_end_catch'
throw.o: In function `catchit()':
throw.cpp:20: undefined reference to `__cxa_begin_catch'
throw.cpp:20: undefined reference to `__cxa_end_catch'
throw.o:(.eh_frame+0x47): undefined reference to `__gxx_personality_v0'
collect2: ld returned 1 exit status
Again we see a lot of interesting stuff going on here. The calls to __cxa_begin_catch and __cxa_end_catch are probably something we could have expected: we don’t know what they are yet, but we can presume they are the equivalent of the throw/__cxa_allocate/throw conversions (you do remember that our throw keyword got translated to a pair of __cxa_allocate_exception and __cxa_throw functions, right?). The __gxx_personality_v0 thing is new, though, and the central piece of the next few articles.
What does the personality function do? We already said something about it on the introduction to this series but we will be looking into it with some more detail next time, together with our new two friends, __cxa_begin_catch and __cxa_end_catch.
C++ exceptions under the hood 3: an ABI to appease the linkerPosted: February 19, 2013 Filed under: Exceptions Leave a comment
On our journey to understand exceptions we discovered that the heavy lifting is done in libstdc++ as specified by the C++ ABI. Reading some linker errors we deduced last time that for handling exceptions we need help from the C++ ABI; we created a throwing C++ program, linked it together with a plain C program and found that the compiler somehow translated our throw instruction into something that is now calling a few libstdc++ functions to actually throw an exception. Lost already? You can check the source code for this project so far in my github repo.
Anyway, we want to understand exactly how an exception is thrown, so we will try to implement our own mini-ABI, capable of throwing an exception. To do this, a lot of RTFM is needed, but a full ABI interface can be found here, for LLVM. Let’s start by remembering what those missing functions are:
__cxa_allocate_exception
The name is quite self explanatory, I guess. __cxa_allocate_exception receives a size_t and allocates enough memory to hold the exception being thrown. There is more to this that what you would expect: when an exception is being thrown some magic will be happening with the stack, so allocating stuff here is not a good idea. Allocating memory on the heap might also not be a good idea, though, because we might have to throw if we’re out of memory. A static allocation is also not a good idea, since we need this to be thread safe (otherwise two throwing threads at the same time would equal disaster). Given these constraints, most implementations seem to allocate memory on a local thread storage (heap) but resort to an emergency storage (presumably static) if out of memory. We, of course, don’t want to worry about the ugly details so we can just have a static buffer if we want to.
__cxa_throw
The function doing all the throw-magic! According to the ABI reference, once the exception has been created __cxa_throw will be called. This function will be responsible of starting the stack unwinding. An important effect of this: __cxa_throw is never supposed to return. It either delegates execution to the correct catch block to handle the exception or calls (by default) std::terminate, but it never ever returns.
vtable for __cxxabiv1::__class_type_info
A weird one… __class_type_info is clearly some sort of RTTI, but what exactly? It’s not easy to answer this one now and it’s not terribly important for our mini ABI; we’ll leave it to an appendix for after we are done analyzing the process of throwing exceptions, for now let’s just say this is the entry point the ABI defines to know (in runtime) whether two types are the same or not. This is the function that gets called to determine whether a catch(Parent) can handle a throw Child. For now we’ll focus on the basics: we need to give it an address for the linker (ie defining it won’t be enough, we need to instantiate it) and it has to have a vtable (that is, it must have a virtual method).
Lots of stuff happens in these functions, but let’s try to implement the simplest exception thrower possible: one that will call exit when an exception is thrown. Our application was almost OK but missing some ABI stuff, so let’s create a mycppabi.cpp. Reading our ABI specification we can figure out the signatures for __cxa_allocate_exception and __cxa_throw:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

namespace __cxxabiv1 {
    struct __class_type_info {
        virtual void foo() {}
    } ti;
}

#define EXCEPTION_BUFF_SIZE 255
char exception_buff[EXCEPTION_BUFF_SIZE];

extern "C" {

void* __cxa_allocate_exception(size_t thrown_size) {
    printf("alloc ex %i\n", thrown_size);
    if (thrown_size > EXCEPTION_BUFF_SIZE) printf("Exception too big");
    return &exception_buff;
}

void __cxa_free_exception(void *thrown_exception);

#include <unwind.h>

void __cxa_throw(void* thrown_exception,
                 struct type_info *tinfo,
                 void (*dest)(void*)) {
    printf("throw\n");
    // __cxa_throw never returns
    exit(0);
}

} // extern "C"
Note: You can download the full source code for this project in my github repo.
If we now compile mycppabi.cpp and link it with the other two .o files, we’ll get a working binary which should print “alloc ex 1\nthrow” and then exit. Pretty simple, but an amazing feat nonetheless: we’ve managed to throw an exception without calling libc++. We’ve written a (very small) part of a C++ ABI!
Another important bit of wisdom we gained by creating our own mini ABI: the throw keyword is compiled into two function calls to libstdc++. No voodoo there, it’s actually a pretty simple transformation. We can even disassemble our throwing function to verify it. Let’s run this command “g++ -S throw.cpp”.
seppuku:
.LFB3:
    [...]
    call    __cxa_allocate_exception
    movl    $0, 8(%esp)
    movl    $_ZTI9Exception, 4(%esp)
    movl    %eax, (%esp)
    call    __cxa_throw
    [...]
Even more magic happening: when the throw keyword gets translated into these two calls, the compiler doesn’t even know how the exception is going to be handled. Since libstdc++ is the one defining __cxa_throw and friends, and libstdc++ is dynamically linked on runtime, the exception handling method could be chosen when we first run our executable.
We are now seeing some progress but we still have a long way to go. Our ABI can only throw exceptions right now. Can we extend it to handle a catch as well? We’ll see how next time.
C++ exceptions under the hood II: a tiny ABIPosted: February 12, 2013 Filed under: Exceptions Leave a comment
If we are going to try and understand why exceptions are complex and how do they work, we can either read a lot of manuals or we can try to write something to handle the exceptions ourselves. Actually, I was surprised by the lack of good information on this topic: pretty much everything I found is either incredibly detailed or very basic, with one exception or two. Of course there are some specifications to implement (most notably the ABI for c++ but we also have CFI, DWARF and libstdc) but reading the specification alone is not enough to really learn what’s going on under the hood.
Let’s start with the obvious then: wheel reinvention! We know for a fact that plain C doesn’t handle exceptions, so let’s try to link a throwing C++ program with a plain C linker and see what happens. I came up with something simple like this:
#include "throw.h" extern "C" { void seppuku() { throw Exception(); } }
Don’t forget the extern stuff, otherwise g++ will helpfully mangle our little function’s name and we won’t be able to link it with our plain C program. Of course, we need a header file to “link” (no pun intended) the C++ world with the C world:
struct Exception {};

#ifdef __cplusplus
extern "C" {
#endif

void seppuku();

#ifdef __cplusplus
}
#endif
And a very simple main:
#include "throw.h" int main() { seppuku(); return 0; }
What happens now if we try to compile and link together this frankencode?
> g++ -c -o throw.o -O0 -ggdb throw.cpp
> gcc -c -o main.o -O0 -ggdb main.c
Note: You can download the full source code for this project in my github repo.
So far so good. Both g++ and gcc are happy in their little world. Chaos will ensue once we try to link them, though:
And sure enough, gcc complains about missing C++ symbols. Those are very special C++ symbols, though. Check the last error line: a vtable for cxxabiv1 is missing. cxxabi, defined in libstdc++, refers to the application binary interface for C++. So now we have learned that the exception handling is done with some help of the standard C++ library with an interface defined by C++’s ABI.
The C++ ABI defines a standard binary format so we can link objects together in a single program; if we compile a .o file with two different compilers, and those compilers use a different ABI, we won’t be able to link the .o objects into an application. The ABI will also define some other formats, like for example the interface to perform stack unwinding or the throwing of an exception. In this case, the ABI defines an interface (not necessarily a binary format, just an interface) between C++ and some other library in our program which will handle the stack unwinding, ie the ABI defines C++ specific stuff so it can talk to non-C++ libraries: this is what would enable exceptions thrown from other languages to be caught in C++, amongst other things.
In any case, the linker errors are pointing us to the first layer into exception handling under the hood: an interface we’ll have to implement ourselves, the cxxabi. For the next article we’ll be starting our own mini ABI, as defined in the C++ ABI.
C++ exceptions under the hoodPosted: February 5, 2013 Filed under: Exceptions Leave a comment
Everyone knows that good exception handling is hard. Reasons for this abound, in every single layer of an exception's "lifetime": it's hard to write exception safe code, an exception might be thrown from unexpected places (pun intended!), it can be complicated to understand badly designed exception hierarchies, it's slow because a lot of voodoo is happening under the hood, it's dangerous because improperly throwing an exception might call the unforgiving std::terminate. And although anyone who has had to battle an "exceptional" program might know this, the reasons for this mess are not widespread knowledge.
The first question we need to ask ourselves is then, how does it all work. This is the first article in a long series, in which I’ll be writing about how exceptions are implemented under the hood in C++ (actually, C++ compiled with gcc on x86 platforms, but this might apply to other platforms too). In these articles the process of throwing and catching an exception will be explained in quite a lot of detail, but for the impatient, here is a short summary of all the articles that will follow: how an exception is thrown in gcc/x86:
- When we write a throw statement, the compiler will translate it into a pair of calls into libstdc++ functions that allocate the exception and then start the stack unwinding process by calling libgcc.
- For each catch statement, the compiler will write some special information after the method’s body, a table of exceptions this method can catch and a cleanup table (more on the cleanup table later).
- When an exception is thrown, the unwinder walks the stack and, for each frame, calls a special "personality" function that checks those tables to see whether the frame has a catch handler matching the exception; cleanup code (destructors) from the cleanup table is run for each unwound frame, and if no handler is found std::terminate is called.
This already looks quite complicated and we haven’t even started; that was but a short and inaccurate description of all the complexities needed to handle an exception.
To learn about all the details that happen under the hood on the next article we will start to implement our own mini libstdlibc++. Not all of it though, only the part that handles exceptions. Actually not even all of that, only the bare minimum we need to make a simple throw/catch statement work. Some assembly will be needed, but nothing too fancy. A lot of patience will be required, I’m afraid.
If you are too curious and want to start reading about exception handling implementation then you can start here, for a full specification of what we are going to implement on the next few articles. I’ll try to make these articles a bit more didactic and easier to follow though, so see you next time to start our ABI! | https://monoinfinito.wordpress.com/category/programming/c/exceptions/page/3/ | CC-MAIN-2017-39 | refinedweb | 2,482 | 58.52 |
scrapelib 0.9.1
a library for scraping things.
- optional robots.txt compliance
scrapelib is a project of Sunlight Labs (c) 2013. All code is under a BSD-style license.

Requirements:

- python 2.7 or 3.3
- requests >= 1.0
Installation
scrapelib is available on PyPI and can be installed via pip install scrapelib
PyPI package:
Source:
Documentation:
Example Usage
import scrapelib
s = scrapelib.Scraper(requests_per_minute=10, follow_robots=True)

# Grab Google front page
s.urlopen('')

# Will raise RobotExclusionError
s.urlopen('')

# Will be throttled to 10 HTTP requests per minute
while True:
    s.urlopen('')
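The requests_per_minute throttling shown above can be approximated with a few lines of standard-library Python. This is just a sketch of the idea (sleep long enough between requests), not scrapelib's actual implementation; the class and parameter names are made up.

```python
import time

class Throttle:
    """Spaces calls so at most `per_minute` happen every 60 seconds."""
    def __init__(self, per_minute, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / per_minute
        self.clock = clock
        self.sleep = sleep
        self.last = None

    def wait(self):
        # Sleep off whatever part of the interval has not yet elapsed.
        now = self.clock()
        if self.last is not None:
            remaining = self.interval - (now - self.last)
            if remaining > 0:
                self.sleep(remaining)
                now = self.clock()
        self.last = now

# With requests_per_minute=10, each request is at least 6 seconds apart.
throttle = Throttle(10)
print(round(throttle.interval, 1))  # 6.0
```

Injecting the clock and sleep functions keeps the class testable without real waiting.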
In the previous article, we saw how to install and use the Mosquitto MQTT broker on a Raspberry Pi 3 (or some other system). The advantage of running your own broker is that it keeps your data “at home”. It is, however, possible to publish/subscribe data from connected objects using an online broker.
8 MQTT Brokers in the Cloud
Online Brokers are not yet very numerous, but at least 4 of them will let you start building your connected objects. ThingStudio is clearly the most advantageous at the moment because it is totally free (and without limitations) for Makers, Hackers and Designers. It follows the same free-of-charge strategy for individuals, students (and small businesses) as Autodesk does with Fusion 360.
Finally, you will definitely find DIoTY in your research. DIoTY seems free and provides an application on iOS and Android (or rather a webapp developed with the Ionic framework) to connect your connected objects. I prefer to stay away from this service at the moment because registration via a Gmail account seems dubious to me.
5 Online MQTT Brokers for Testing
You can also do some online testing on some Brokers. Be careful, however, not to publish anything sensitive. Topics are accessible by anyone.
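When testing against these public brokers, messages are routed by topic: topics are hierarchical, with levels separated by `/`, and subscriptions can use the wildcards `+` (exactly one level) and `#` (all remaining levels). The matching rule can be sketched in a few lines of Python; this is an illustrative simplification, not any broker's actual code:

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic matches a subscription filter.

    Illustrative sketch of the matching rule only: '+' matches exactly
    one level, '#' matches all remaining levels. Real brokers handle
    extra edge cases (e.g. '$SYS' topics) that are ignored here.
    """
    f_levels = filter_.split('/')
    t_levels = topic.split('/')
    for i, level in enumerate(f_levels):
        if level == '#':           # multi-level wildcard: matches the rest
            return True
        if i >= len(t_levels):     # topic ran out of levels
            return False
        if level != '+' and level != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)


print(topic_matches('home/+/temperature', 'home/kitchen/temperature'))  # True
print(topic_matches('home/#', 'home/kitchen/humidity'))                 # True
print(topic_matches('home/+', 'home/kitchen/humidity'))                 # False
```

This is also why you should not publish anything sensitive on a public broker: anyone subscribing to `#` can receive every topic.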
- ESP8266 + DHT22 + MQTT: make a connected object IoT and include it in Home Assistant
You can also use flespi MQTT broker for free:
Isolated MQTT namespace, additional REST API for persistent sessions and subscriptions managements, SSL and WebSockets support.
You can try MQTTRoute ( ), which works with all standard MQTT clients and at the same time can be customised to store data in any back-end big data engine or application. It has ready-to-use Connectors ( ), which are open source and can be customised. It can be hosted on.
Server Name: broker.bevywise.com
TCP Port : 1883
WebSocket Port : 8443 | https://diyprojects.io/8-online-mqtt-brokers-iot-connected-objects-cloud/?shared=email&msg=fail | CC-MAIN-2019-22 | refinedweb | 305 | 54.52 |
Once upon a time, there was a compiler that generates Web Components and builds high-performance web apps, called StencilJS. Among all the build-time tools ever created, its goal was to build faster, more capable components that worked across all major frameworks.
On the internet next door, there lived a boy (me 😉). And the boy watched the compiler grow more and more effective, more and more developer friendly with each passing year.
One day, as he was developing something new in his beloved project DeckDeckGo, the boy had the idea to experiment with a feature of the compiler he had never tried so far: the integration of Web Workers.
He was so blown away by the results that he had to share that magical encounter.
Chapter One: Abracadabra
A boy may publish a blog post, but he would not reveal any unknown secret spells. Meanwhile, there would be no good fairy tale without magic words.
Fortunately, the compiler has shared its sorcery publicly with anyone in a very well documented grimoire called "documentation".
Valiant knights seeking to technically defeat such an implementation, let me suggest you have a look at these spells; but if, on the contrary, you are here to find a quest, stay with me and let me tell you the rest of the story.
Chapter Two: Init Camelot
King Arthur and the Round Table had Camelot, we, developer, have Web Components and shadow DOM, which can be metaphorically represented as castles. That’s why we are initializing a new Stencil playground before experimenting new magical tricks.
npm init stencil
In addition, to replicate the exact formula the boy had tried out, we enhance our fortifications with Marked.js, so that we give our component the goal of rendering a magical sentence from Markdown to HTML.
npm i marked @types/marked
Having found some mortar, we create a component which aims to reproduce the rendering spell by transforming Markdown to HTML when the lifecycle componentWillLoad is triggered, applying the result through the use of a local state.
import { Component, h, State } from '@stencil/core';
import { parseMarkdown } from './markdown';

@Component({
  tag: 'my-camelot',
  shadow: true,
})
export class MyCamelot {
  @State()
  private markdownHtml: string;

  async componentWillLoad() {
    this.markdownHtml = await parseMarkdown(`# Choose wisely

For while the true Grail will **bring you life**, the false Grail will take it from you.`);
  }

  render() {
    return <div innerHTML={this.markdownHtml}></div>;
  }
}
In addition, we externalize the magical function in a separate file we can call
markdown.ts.
import marked from 'marked';

export const parseMarkdown = async (text: string) => {
  const renderer = new marked.Renderer();

  return marked(text, {
    renderer,
    xhtml: true,
  });
};
Some who might fear nothing and who might run the above code on their local computers,
npm run start, might observe the following outcome.
Chapter Three: Spell Calling
The boy had already published articles on Web Workers, one about their native JavaScript integration in React applications and another one showcasing their integration with Angular.
From taking care of making libraries available to the workers in the JavaScript version, to using listeners and creating objects to load them in both cases, even if from a certain perspective it was only a little work, it still meant more work, more code.
On the contrary, and to the boy's wonder, Stencil made all these steps magical by simply calling a unique spell:
mv markdown.ts markdown.worker.ts
Indeed, as you can notice in the following screenshot, any TypeScript file within the src directory that ends with .worker.ts will automatically be run in a Web Worker by the Stencil compiler, making this, as far as the boy knows, the most magical Web Worker recipe he has ever tried 🔥.
Epilogue
The Stencil compiler, having simplified this kind of integration, demonstrated once again all its potential. Together with Web Workers, they will hopefully have many babies, many amazing Web Components and applications.
To infinity and beyond!
David
You can reach me on Twitter and, why not, give a try to DeckDeckGo for your next presentations.
Hi Rui and everyone,
I’m using an ESP32 board and some sensors to measure parameters. I’m developing this device to be used with batteries, so optimizing the energy consumption is very important in this project.
In this project I take values every 10-30 minutes. The rest of the time the ESP32 is in Deep Sleep mode. The problem is that some of the sensors/modules I’m using don’t have a sleep mode, so keeping them powered while the ESP32 is in Deep Sleep mode results in a high loss of energy. I’m planning on using a BS170 transistor as a relay in order to power these sensors/modules only when the ESP32 wakes up and I’m going to take the values. But here I have another problem: some of the sensors need preheat time (around 1-3 minutes) before taking the values. This means I need to power on these sensors 1-3 minutes before measuring the parameters.
Let’s give an example where I want to take values every 20 minutes:
- The ESP32 goes to DeepSleep for 17 minutes.
- The ESP32 wakes up after 17 minutes, and turns on the BS170 transistor to power on the sensors.
- The ESP32 goes to DeepSleep for 3 minutes.
- The ESP32 wakes up and takes the values of the sensors.
- The ESP32 goes to DeepSleep for 17 minutes again.
The problem is the 3 minutes after powering on the transistor. I can control it by setting a digital output high, but when I send the ESP32 to sleep, the digital output will go low and turn off the transistor again.
The question is: is there a way to keep a concrete digital output high while the ESP32 is in DeepSleep mode?
Or is there a better way you would do what I need to do, in order to save as much energy as possible?
Thank you very much in advance!
Any suggestion or recommendation will be appreciated 🙂
Xabi
Hi Xabi, what do you mean? Are you talking about controlling outputs while the ESP32 is in Deep Sleep mode?
Sorry Rui, I sent the question before writing it. I edited it 🙂
If you need more information, please let me know
Hi Xabi,
Sorry for taking so long to get back to you, but I’ve been receiving a high volume of emails and I wasn’t able to keep up with everything.
I would also like to know how to do that. According to the documentation it should be very easy… You would call this function to hold a state during deep sleep:
esp_err_t rtc_gpio_hold_en(GPIO_NUM_XX);
And you would call this function to disable/unlock states after a deep sleep (so, you could change the pin value):
esp_err_t rtc_gpio_hold_dis(gpio_num_t gpio_num)
According to a source on the ESP32 Forum:
Something like this code should keep GPIO 14 HIGH during deep sleep:
void setup() {
  pinMode(14, OUTPUT);
  digitalWrite(14, HIGH);
  rtc_gpio_hold_en(GPIO_NUM_14);
  delay(1000);
  esp_deep_sleep(10000000000);
}
However, I can’t make it work for me… I’ve also found this repository with some examples, but using the ESP-IDF:
I’ve tried to run these commands on Arduino IDE, but it also didn’t work for me:
rtc_gpio_init(GPIO_NUM_14);
rtc_gpio_set_direction(GPIO_NUM_14, RTC_GPIO_MODE_OUTPUT_ONLY);
rtc_gpio_set_level(GPIO_NUM_14, 1);
rtc_gpio_hold_en(GPIO_NUM_14);
esp_deep_sleep(10000000000);
Unfortunately I don’t have a working example for you… But I’m sure something small is missing to make it work.
I hope that helps. Regards,
Rui
Hi Rui,
Thanks for the information! I tried it but I still couldn’t make it work. Did you make it work? Or do you know anyone that made it work?
Thank you very much for your support 🙂
Xabi
I didn’t have the time to make it work either. In theory, it should work, but I’m sure it’s something lower level that is missing that prevents it from working.
Regards,
Rui
Hi Rui,
Good news here! I made it work! Here you have the simple code that worked for me:
#include "driver/rtc_io.h"

gpio_num_t pin_MOSFET = GPIO_NUM_15;

void setup() {
  rtc_gpio_init(pin_MOSFET);
  rtc_gpio_set_direction(pin_MOSFET, RTC_GPIO_MODE_OUTPUT_ONLY);
  rtc_gpio_set_level(pin_MOSFET, 0);  // GPIO LOW
  delay(5000);
  rtc_gpio_set_level(pin_MOSFET, 1);  // GPIO HIGH
  esp_sleep_pd_config(ESP_PD_DOMAIN_RTC_PERIPH, ESP_PD_OPTION_ON);
  esp_sleep_enable_timer_wakeup(60 * 1000 * 1000);
  esp_deep_sleep_start();
}

void loop() {
  // put your main code here, to run repeatedly:
}
On the other hand, I’m having problems with my transistor. The BS170 is not giving enough current at a Vgs of 3.3V. I did a little research on the internet and, for micros like the ESP32 which output 3.3V on their pins, you need MOSFETs called logic-level MOSFETs (which work at Vgs = 3.3V). Could you give me a good reference for a logic-level MOSFET that would work well with the ESP32 and will be used to control a sensor that needs up to 150mA?
I hope you find that code useful 🙂
Regards,
Xabi
Thanks for sharing your code and I’m glad you made it work. I’ll definitely give it a try.
Have you tried the 2N7000? I often use that N−Channel MOSFET with the ESP8266.
That’s exactly the one I ordered from AliExpress to try! I’m still waiting for it, but from its datasheet curves I’m not sure if it will give me 150mA at a Vgs of 3.3V. At least you confirmed I’m not going in the wrong direction 😉
I’ll close this question as resolved.
Thanks for the support Rui.
Regards,
Xabi
I’m also not 100% sure… so you’ll need to test, but I think it should work for your needs. Thanks. Regards,
Rui
I have just tried the 2N7000 to switch a red LED (in line with a 70 ohm resistor) using a 3.3V power source, and it only worked after I put a 10K ohm pull-down resistor on the gate. A very strange result occurred that perhaps someone can explain: every time I touched the gate with a wire of any sort, it turned on the transistor switch, even though the wire was not connected to a power source. I did make sure the transistor was the right way around; I found that if you swap drain with source, it just stays on permanently, like reversing a diode. My only interpretation (I am a novice) was that the floating voltage in the wire was activating the transistor, but when I used a 10K resistor as a pull-down, all worked fine. The 2N7000 does have a minimum gate activation voltage of 0.8V, so perhaps the floating voltage was enough to activate it? Has anyone else had this occur to them, or does anyone have a sound explanation?
27 January 2012 06:00 [Source: ICIS news]
SINGAPORE (ICIS)--
The company issued the statement following a
An explosion at the Deepwater Horizon rig on 20 April 2010 killed 11 and caused the largest offshore spill ever recorded in
US District Judge Carl Barbier ruled that Transocean’s contract with BP shielded the Deepwater Horizon rig operator from the full responsibility of paying the claims, according to media reports.
This meant that BP will have to pay for some of Transocean’s liabilities on the oil spill accident.
Nonetheless, Transocean is not fully indemnified in its contract with BP.
“Under the decision Transocean is, at a minimum, financially responsible for any punitive damages, fines and penalties flowing from its own conduct,” BP said.
“As we have said from the beginning, Transocean cannot avoid its responsibility for this accident," the company added.
BP said it has paid more than $7.8bn (€5.93bn) in claims, advances and other payments to individuals, businesses and governments in the aftermath of the Deepwater Horizon | http://www.icis.com/Articles/2012/01/27/9527279/bp-says-transocean-still-liable-for-punitive-damages-on-gulf-oil-spill.html | CC-MAIN-2014-52 | refinedweb | 172 | 50.57 |
ppm - Perl Package Manager, version 4
Invoke the graphical user interface:
ppm
ppm gui
Install, upgrade and remove packages:
ppm install [--area <area>] [--force] <pkg>
ppm install [--area <area>] [--force] <module>
ppm install [--area <area>] <url>
ppm install [--area <area>] <file>.ppmx
ppm install [--area <area>] <file>.ppd
ppm install [--area <area>] <num>
ppm upgrade [--install]
ppm upgrade <pkg>
ppm upgrade <module>
ppm remove [--area <area>] [--force] <pkg>
Manage and search install areas:
ppm area list [--csv] [--no-header]
ppm area sync
ppm list [--fields <fieldnames>] [--csv]
ppm list <area> [--fields <fieldnames>] [--csv]
ppm files <pkg>
ppm verify [<pkg>]
Manage and search repositories:
ppm repo list [--csv] [--no-header]
ppm repo sync [--force] [<num>]
ppm repo on <num>
ppm repo off <num>
ppm repo describe <num>
ppm repo add <name>
ppm repo add <url> [<name>] [--username <user> [--password <passwd>]]
ppm repo rename <num> <name>
ppm repo location <num> <url>
ppm repo suggest
ppm search <pattern>
ppm describe <num>
ppm tree <package>
ppm tree <num>
Obtain version and copyright information about this program:
ppm --version
ppm version
The ppm program is the package manager for ActivePerl. It simplifies the task of locating, installing, upgrading and removing Perl packages.

Invoking ppm without arguments brings up the graphical user interface, but ppm can also be used as a command line tool where the first argument provides the name of the sub-command to invoke. The following sub-commands are recognized:
Will initialize the given area so that PPM starts tracking the packages it contains.
Lists the available install areas. The list displays the name, number of installed packages and lib directory location for each install area. If an area is read-only, its name appears in parentheses. You will not be able to install packages or remove packages in these areas. The default install area is marked with a * after its name.
The order of the listed install areas is the order perl uses when searching for modules. Modules installed in earlier areas override modules installed in later ones.
The --csv option selects CSV (comma-separated values) format for the output. The default field separator can be overridden by the argument following --csv.
The --no-header option suppresses column headings.
Synchronizes installed packages, including those installed by means other than PPM (e.g. the CPAN shell), with the ppm database. PPM searches the install area(s) for packages, making PPM database entries if they do not already exist, or dropping entries for packages that no longer exist. When used without an area argument, all install areas are synced.
Get or set various PPM configuration values.
List all configuration options currently set.
Shows all properties for a particular package from the last search result.
Lists the full path name of the files belonging to the given package, one line per file.
Prints the documentation for ppm (this file).
List information about ppm and its environment. With argument print the value of the given variable. See also ppm config list.
Install a package and its dependencies.
The argument to ppm install can be the name of a package, the name of a module provided by the package, the file name or the URL of a PPMX or PPD file, or the associated number for the package returned by the last ppm search command.
By default, new packages are installed in the site area, but if the site area is read only and there are user-defined areas set up, the first user-defined area is used as the default instead. Use the --area option to install the package into an alternative location.
The --nodeps option makes PPM attempt to install the package without resolving any dependencies the package might have.
List installed packages. If the area argument is not provided, list the content of all install areas.
The --matching option limits the output to only include packages matching the given pattern. See ppm search for pattern syntax.
The --csv option selects CSV (comma-separated values) format for the output. The default field separator can be overridden by the argument following --csv.
The --no-header option suppress printing of the column headings.
The --fields argument can be used to select what fields to show. The argument is a comma separated list of the following field names:
The package name. This field is always shown, but if specified alone get rid of the decorative box.
The version number of the package.
The release date of the package.
A one sentence description of the purpose of the package.
The package author or maintainer.
Where the package is installed.
The number of files installed for the package.
The combined disk space used for the package.
The location of the package description file.
Print entries from the log for the last few minutes. By default print log lines for the last minute. With --errors option suppress warnings, trace and debug events.
Install the packages listed in the given profile file. If no file is given try to read the profile from standard input.
Write profile of configured repositories and installed packages to the given file. If no file is given then print the profile XML to standard output.
Alias for ppm list --matching pattern. Provided for PPM version 3 compatibility.
Uninstalls the specified package. If area is provided unininstall from the specified area only. With --force uninstall even if there are other packages that depend on features provided by the given package.
Alias for ppm repo. Provided for PPM version 3 compatibility.
Alias for ppm repo list.
Add the named resposity for PPM to fetch packages from. The names recognized are shown by the ppm repo suggest command. Use ppm repo add activestate if you want to restore the default ActiveState repo after deleting it.
Set up a new repository for PPM to fetch packages from.
Remove repository number num.
Show all properties for repository number num.
List the repositories that PPM is currently configured to use. Use this to identify which number specifies a particular repository.
The --csv option selects comma-separated values format for the output. The default field separator can be overridden by the argument following --csv.
The --no-header option suppress printing of the column headings.
Alias for ppm repo describe num.
Alias for ppm repo cmd num.
Disable repository number num for ppm install or ppm search.
Enable repository number num if it has been previously disabled with ppm repo off.
Change name by which the given repo is known.
Change the location of the given repo. This will make PPM forget all cached data from the old repository and try to refetch it from the new location.
Alias for ppm search.
List some known repositories that can be added with ppm add. The list only include repositories that are usable by this perl installation.
Synchronize local cache of packages found in the enabled repositories. With the --force option, download state from remote repositories even if the local state has not expired yet. If num is provided, only sync the given repository.
PPM will need to download every PPD file for repositories that don't provide a summary file (package.xml). This can be very slow for large repositories. Thus PPM refuses to start the downloads for repositories linking to more than 100 PPD files unless the --max-ppd option provides a higher limit.
For pattern, use the wildcard * to match any number of characters and the wildcard ? to match a single character. For example, to find packages starting with the string "List", search for list*. Searches are case insensitive.

If pattern contains ::, PPM will search for packages that provide modules matching the pattern.
If pattern matches the name of a package exactly (case-sensitively), only that package is shown. A pattern without wildcards that does not match any package names exactly is used for a substring search against available package names (i.e. treated the same as "*pattern*").
The output format depends on how many packages match. If there is only one match, the ppm describe format is used. If only a few packages match, limited information is displayed. If many packages match, only the package names and version numbers are displayed, one per line.
The number prefixing each entry in search output can be used to look up full information with ppm describe num, dependencies with ppm tree num or to install the package with ppm install num.
Shows all the dependencies (recursively) for a particular package. The package can be identified by a package name or the associated number for the package returned by the last ppm search command.
Alias for ppm remove.
Alias for ppm upgrade.
List packages that there are upgrades available for. With --install option install the upgrades as well.
Upgrades the specified package or module if an upgrade is available in one of the currently enabled repositories.
Checks that the installed files are still present and unmodified. If the package name is given, only that packages is verified.
Will print the version of PPM and a copyright notice.
The following lists files and directories that PPM uses and creates:
Directory where PPM keeps its state. On Windows this directory is $LOCAL_APPDATA/ActiveState/ActivePerl/$VERSION. The $VERSION is a string like "818".
SQLite database where ppm keeps its configuration and caches meta information about the content of the enabled repositories.
Log file created to record actions that PPM takes. On Windows this is logged to $TEMPDIR/ppm4.log. On Mac OS X this is logged to $HOME/Library/Logs/ppm4.log.
SQLite database where PPM tracks packages installed in the install area under $PREFIX.
Temporary directories used during install. Packages to be installed are unpacked here.
These files contains a single package that can be installed by PPM. They are compressed tarballs containing the PPD file for the package and the blib tree to be installed.
XML files containing meta information about packages. Each package has its own .ppd file. See the ActivePerl::PPM::PPD manpage for additional information.
Meta information about repositories. When a repository is added, PPM looks for this file and if present, monitors it too stay in sync with the state of the repository.
Same as package.xml but PPM 3 compatible. PPM will use this file if package.xml is not available.
The following environment variables affect how PPM behaves:
ACTIVEPERL_PPM_DEBUG
If set to a TRUE value, makes PPM print more internal diagnostics.
ACTIVEPERL_PPM_BOX_CHARS
Select what kind of box drawing characters to use for the ppm * list outputs. Valid values are ascii, dos and unicode. The default varies.
ACTIVEPERL_PPM_HOME
If set, use this directory to store state and configuration information for PPM. This defaults to $LOCAL_APPDATA/ActiveState/ActivePerl/$VERSION on Windows and $HOME/.ActivePerl/$VERSION/ on Unix systems.
ACTIVEPERL_PPM_LOG_CONS
If set to a TRUE value, make PPM print any log output to the console as well.
DBI_TRACE
PPM uses the DBI manpage to access the internal SQLite databases. Setting DBI_TRACE allow you to see what queries are performed. Output goes to STDERR. See the DBI manpage for further details.
http_proxy
PPM uses the LWP manpage to access remote repositories. If you need HTTP traffic to pass via a proxy server to reach the repository, you must set the http_proxy environment variable. Some examples:
Using bash: export http_proxy=
Using cmd.exe: set http_proxy=
See env_proxy in the LWP::UserAgent manpage for more.
PPM version 4 is a complete rewrite. The main changes since PPM version 3 are:
The command line shell has been replaced with a graphical user interface.
Support for *.ppmx files (since PPM version 4.3)
PPM can now manage different installation areas.
No more 'precious' packages. PPM can upgrade itself as well other bundled and core modules.
Installation of packages and their dependencies happen as atomic transactions.
PPM tracks what files it has installed and can notice if files have been modified or deleted. The command 'ppm verify' will report on mismatches.
State is kept in local SQLite databases. All repository state is kept local which makes searching much faster.
PPM will pick up and manage packages installed by other means (e.g. manually or with the CPAN shell).
No more SOAP.
Underlying modules moved to the ActivePerl::PPM:: namespace.
In the previous post on first class composable events in F#, we talked mostly about the underlying types and the basic composition that you can achieve through the Event module. By using the basic combinators of map, filter, partition, etc, we are able to create some rather rich scenarios for first class events. We’ve already shown what we can do with existing events off of such things as Windows Forms applications, but how about we create our own?
Creating Custom Events
Creating events in F# is unlike experiences in other .NET languages as you do not declare events as members. Typically in C#, we would declare an event such as the following:
public delegate Assembly ResolveEventHandler(
    object sender, ResolveEventArgs args);

public class AppDomain {
    public event ResolveEventHandler TypeResolve;
}
Instead, in F#, we use the Event.create function to create a new event. Let’s take a look at the signature:
// Event.create function
Event.create :
    unit                          // No arguments
    -> ('a -> unit) * #Event<'a>  // Handler function and event tuple
What this tells us is that we create both an invoker function and an event when we call Event.create. You’ll notice that our events don’t have to follow the standard object sender and EventArgs signature, and instead can be anything we want them to. If the type inference cannot infer how we’re using our event, we may need to specify what our arguments are. Let’s create a simple one that posts numbers and reacts to them.
> let fire, event = Event.create()
- event.Add(printfn "Fired %d")
- fire(25);;
Fired 25

val fire : (int -> unit)
val event : IEvent<int>
The above code first calls Event.create which gives us our invoker (fire) and our event in which we can then add a handler to in order to print our value. Finally, we call the fire invoker to send a message of 25 and as you see below it prints out “Fired 25”. You can see we used no type declarations at all in the above code as they weren’t necessary when our type inference did the work for us.
Let’s look through another example of creating an event which is fired when a combination of two events occur, a mouse move and a mouse down, but only until the mouse up event. Instead of using WinForms, let’s use WPF instead. First, let’s get some of the overhead out of the way:
open System.Windows
open System.Windows.Controls
open System.Windows.Input
open System.Windows.Media

let getPosition (elem : #IInputElement) (args : #MouseEventArgs) =
    let point = args.MouseDevice.GetPosition(elem)
    point.X, point.Y
This allows us to easily get the absolute position of our mouse based upon a given element. This will be useful for obtaining our coordinates later. By using the #, I'm specifying that our given parameters must inherit from the given class or interface. As F# is strongly typed, even more so than C#, it makes our lives easier by not requiring casting. Let's move on to creating this mouse tracker:
let createMouseTracker (e : #UIElement) =
    let fire, event = Event.create()
    let lastPosition = ref None

    e.MouseDown
    |> Event.map (getPosition e)
    |> Event.listen (fun position -> lastPosition := Some position)

    e.MouseUp.Add(fun args -> lastPosition := None)

    e.MouseMove
    |> Event.map (getPosition e)
    |> Event.listen (fun position ->
        match !lastPosition with
        | Some last ->
            fire(last, position)
            lastPosition := Some position
        | None -> ())

    event
Once again, I create our fire and our event in the beginning. In order to capture state such as last position, I need a reference cell in order to do this. F# does not allow for mutable values to escape scope, so instead, the ref cell does the trick. We first do a map on the mouse down to get the position, then update the position ref cell. On the mouse up event, we simply reset the ref cell to empty. The mouse move event, once again we get the position, then we listen based upon last position. If there is a last, we fire our event with our previous and current value. And finally, we return our new event.
That’s great, but what about publishing our events inside a class?
Publishing Events to the World
With our previous examples, we're mostly dealing with local functions, events and values. But what if we want to encapsulate these events in classes and publish them? Let's consider the following scenario of a simple state machine that holds integers. This has two events, one for updating and one for clearing. We'll implement this using a simple MailboxProcessor, but first, let's get some of the infrastructure out of the way.
// Shorthand operator
let (<--) (m:'a MailboxProcessor) msg = m.Post(msg)

// Messages for our state machine
type private StateMessage =
    | Update of int
    | Get of AsyncReplyChannel<int option>
    | Clear
We’ll define three messages for our state machine to handle, the Update which gets an integer, our Get which takes a reply channel for the data, and finally, our Clear message. Now that we have this defined, let’s get to our main implementation:
type StateMachine() =
    // Our two events
    let clearFire, clearEvent = Event.create()
    let updateFire, updateEvent = Event.create()

    let actor = MailboxProcessor.Start(fun inbox ->
        let rec loop state = async {
            let! msg = inbox.Receive()
            match msg with
            | Clear ->
                clearFire()
                return! loop None
            | Get replyChannel ->
                replyChannel.Reply(state)
                return! loop state
            | Update newState ->
                match state with
                | Some oldState -> updateFire(oldState, newState)
                | None -> ()
                return! loop <| Some(newState) }
        loop None)

    // Publishing the events
    member this.Updating = updateEvent
    member this.Clearing = clearEvent

    // Expose behavior
    member this.Clear() = actor <-- Clear
    member this.Update(x) = actor <-- Update(x)
    member this.Get() = actor.PostAndReply(fun replyChannel -> Get(replyChannel))
This is a bit of code, so bear with me here. First, we declare our two events as described above, one to handle our clearing event, and one for our updating event. Next, we create our mailbox processor which handles our messages. In the case of our clear, we simply invoke with no arguments and loop again. In the case of our update, on the other hand, we will only fire if there is previous state, returning both the old and the new values.
In order to publish events, we simply expose our created events much like we would any immutable property. This allows an outside consumer to latch on to these events. Finally, our members send private messages to our actor in order to clear, update and get our value. Now that everything is defined, let's go for a test run to see how this works:
> let state = new StateMachine()
- state.Clearing.Add(fun () -> printfn "Clearing")
- state.Updating.Add(fun (old,new') -> printfn "Old(%d)\tNew(%d)" old new');;
val state : StateMachine
We created two handlers: one that prints "Clearing" for the clearing event, and one that prints the old and new values for the update event. Let's run this through a loop to see what sort of goodies we can print out:
> for i = 0 to 10 do
-     if i % 3 = 0 then state.Clear()
-     state.Update(i);;
val it : unit = ()
Clearing
Old(0)  New(1)
Old(1)  New(2)
Clearing
Old(3)  New(4)
Old(4)  New(5)
Clearing
Old(6)  New(7)
Old(7)  New(8)
Clearing
Old(9)  New(10)
As you can see, we get the expected behavior: every third iteration from 0 to 10 triggers a clearing event, with updates in between reporting the old and new values. It's an interesting, albeit naive, example that shows the power of first-class events in F# classes.
Conclusion
I hope by now you're starting to see some of the possibilities of first-class events, both consuming and publishing them. Unlike other .NET languages, where events can only be exposed as members, we call our Event.create function, which gives us a bit more interesting power. We're not quite finished here, as we have a lot more to cover with both first-class events in F# and the Reactive Framework.
An exception is a condition that may arise during the execution of a Java program when a normal path of execution is not defined.
Java handles errors by separating the code that performs actions from the code that handles errors.
When an exception occurs, Java creates an object with all pieces of information about the exception and passes it to the appropriate exception handling code.
The information about an exception includes the type of exception, line number in the code where the exception occurred, etc.
To handle exceptions, place the code in a try block. A try block looks like the following:
try {
  // Code for the try block
}
A try block starts with the keyword try, followed by an opening brace and a closing brace.
The code for the try block is placed inside the opening and the closing braces.
A try block cannot be used just by itself.
It must be followed by one or more catch blocks, or one finally block, or a combination of both.
To handle an exception that might be thrown inside a try block, use a catch block.
One catch block can be used to handle multiple types of exceptions.
The syntax for a catch block is similar to the syntax for a method.
catch (ExceptionClassName parameterName) {
  // Exception handling code
}
A catch block's declaration is exactly like a method declaration.
It starts with the keyword catch, followed by a pair of parentheses.
Within the parentheses, it declares a parameter.
The parameter type is the name of the exception class that it is supposed to catch.
The parameterName is a user-given name. Parentheses are followed by an opening brace and a closing brace. The exception handling code is placed within the braces.
When an exception is thrown, the reference of the exception object is copied to the parameterName.
We can use the parameterName to get information from the exception object.
We can associate one or more catch blocks to a try block.
The general syntax for a try-catch block is as follows.
try {
  // Your code that may throw an exception
} catch (ExceptionClass1 e1) {
  // Handle exception of ExceptionClass1 type
} catch (ExceptionClass2 e2) {
  // Handle exception of ExceptionClass2 type
} catch (ExceptionClass3 e3) {
  // Handle exception of ExceptionClass3 type
}
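To make the skeleton above concrete, here is a small, self-contained sketch; the class name MultiCatchDemo and the classify method are invented for this example and are not part of the tutorial's code:

```java
public class MultiCatchDemo {
    static String classify(String input) {
        try {
            int[] values = new int[2];
            // This line can throw two different exception types:
            // NumberFormatException from parseInt, or
            // ArrayIndexOutOfBoundsException from the array access.
            values[Integer.parseInt(input)] = 1;
            return "ok";
        } catch (NumberFormatException e) {
            return "not a number";
        } catch (ArrayIndexOutOfBoundsException e) {
            return "out of range";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("1"));    // ok
        System.out.println(classify("abc"));  // not a number
        System.out.println(classify("9"));    // out of range
    }
}
```

Note that catch blocks are tried in order, so a more specific exception type must be listed before any of its superclasses.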
The following code shows how to handle divide by zero exception.
public class Main {
  public static void main(String[] args) {
    int x = 10, y = 0, z;
    try {
      z = x / y;
      System.out.println("z = " + z);
    } catch (ArithmeticException e) {
      String msg = e.getMessage();
      System.out.println("The error is: " + msg);
    }
    System.out.println("The end.");
  }
}
The code above generates the following result:

The error is: / by zero
The end.
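As noted earlier, a try block can also be paired with a finally block. The sketch below (class and method names are invented for illustration) shows that the finally block runs on both the normal and the exceptional path:

```java
public class FinallyDemo {
    static String run(int divisor) {
        StringBuilder log = new StringBuilder();
        try {
            int result = 10 / divisor;   // may throw ArithmeticException
            log.append("result=").append(result);
        } catch (ArithmeticException e) {
            log.append("caught");
        } finally {
            // Runs whether or not an exception was thrown
            log.append(";finally");
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(2)); // result=5;finally
        System.out.println(run(0)); // caught;finally
    }
}
```

Running main prints result=5;finally and then caught;finally, showing the finally block executing in both cases.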
Executing python modules from package
I have a project that is primarily written in python but uses sage for some specific operations. In order to facilitate this I have only .py files and use
from sage.all import * at the top of all modules that make use of sage functionalities. To execute I always used
sage -python path/to/file.py.
This approach worked fine until it became necessary to split my project into multiple sub-packages. I now want to execute my modules with
python -m package.subpackage.module for modules that do not use sage (which works) and correspondingly
sage -python -m package.subpackage.module for modules that do. Unfortunately the latter only returns an error message of the form
~/sage/local/bin/python: No module named package.subpackage
In order to be able to use package-relative imports I am kind of dependent on the -m syntax, so I would like to get it to work. Any ideas what I am doing wrong? Or is this simply not possible with the Python that is bundled with Sage, for internal reasons? Any help is appreciated!
Fine-grained network mediation
Bug Description
Binary package hint: apparmor
This is a wishlist item / feature request.
Increase the granularity of network restrictions to allow specification of which ports or ranges of ports can or can't be used by an application. This functionality is available in systrace if either the example or code would be of help:
http://
Two years ago something "should be coming" - is it correctly understood that this feature is indefinitely on hold?
It is safe to say it has been on hold, however, this work is still planned and will hopefully be implemented by 14.04.
No, it has been repeatedly delayed, but progress has been made on it. The new base network patch on which this functionality will be built is in testing. Further work is still needed to achieve better granularity, but work is being done.
Hi,
can comment a little more on that, like what progress and where to find it? Can we expect to have it in future? Does it make sense to use dev package that converges with future versions of ubuntu? Just anything. If i can find it somewhere else, a link would help me a lot.
like what progress and where to find it?
Its being developed as part of the upstream apparmor project. The socket labeling portion has landed in ubuntu saucy. This does not allow for control based on ports or addresses but is the basis for that work.
So what is done is base socket labeling, on which other functionality can be based. The next step would be basic address/port binding (a server setting up an address), and then send-address mediation. This may happen for ipv4 (not ipv6) within the next month as part of a dev preview to get feedback on the mediation approach. It is unlikely this will make it into saucy.
Can we expect to have it in future?
yes
Does it make sense to use dev package that converges with future versions of ubuntu?
yes. The apparmor project has a ppa that developments appear in once they reach a beta state.
https:/
Just anything. If i can find it somewhere else, a link would help me a lot.
the places to watch are the apparmor mailing list (its mostly a devel list but also takes general questions)
and of course you can always watch the ppa. I wouldn't recommend using the ppa on a production system, at least not upgrading every time it's updated. There are times it's stable and other times it's not
Thanks a lot!
FYI, quite a bit more work was done on IPC in AppArmor, including the groundwork for fine-grained network mediation. Fine-grained network mediation will not land for 14.10, but may land in 15.04-15.10.
FYI, this is a requirement for snapd, but it was deprioritized in favor of namespace stacking in support of LXD, upstreaming and other work in support of snappy (eg, gsettings mediation). A lot of work was done to support this, but the soonest it would be delivered given current priorities is 17.04.
Note, I'm only giving the current status, not setting the priority for this, but this feature is very high on the list and in the queue.
Yes, this ability should be coming in Oneiric, and we will hopefully have some test kernels out soon. | https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/796588 | CC-MAIN-2017-39 | refinedweb | 565 | 71.44 |
On Sun, May 21, 2006 at 05:18:50PM -0600, Eric W. Biederman wrote:
> Pavel Machek <pavel@ucw.cz> writes:
>
> > Well, if pid #1 virtualization is only needed for pstree, we may want
> > to fix pstree instead :-).

yes, actually this and init itself (which uses the pid to switch between
init and telinit behaviour) are the only two applications we found so far ...
and as far as I know, those work with non pid=1 values on other operating
systems (inside containers)

a fix there would definitely be appreciated and I think it would not hurt
normal behaviour ...

> One thing that is not clear is if isolation by permission checks is
> any easier to implement than isolation with a namespace.

for the pid space, I'm not really sure if isolation is really cheaper than
virtualization, but for the network space for example, a virtualization
solution which is as lightweight as the isolation is probably more
challenging, although not impossible ...

> Isolation at permission checks may actually be more expensive in terms
> of execution time, and maintenance.

again, for the pid space, maintenance is quite low ..

best,
Herbert

> Eric
sec_rgy_attr_test_and_update - Updates specified attribute instances for a specified object only if a set of control attribute instances match the object's existing attribute instances
#include <dce/sec_rgy_attr.h>

void sec_rgy_attr_test_and_update(
    sec_rgy_handle_t context,
    sec_rgy_domain_t name_domain,
    sec_rgy_name_t name,
    unsigned32 num_to_test,
    sec_attr_t test_attrs[],
    unsigned32 num_to_write,
    sec_attr_t update_attrs[],
    signed32 *failure_index,
    error_status_t *status);

- num_to_test

An unsigned 32-bit integer that specifies the number of elements in the test_attrs array. This integer must be greater than 0.
- test_attrs[]
An array of values of type sec_attr_t that specifies the control attributes. The update takes place only if the types and values of the control attributes exactly match those of the attribute instances on the named registry object. The size of the array is determined by num_to_test.
- num_to_write
A 32-bit integer that specifies the number of attribute instances returned in the update_attrs array.
- update_attrs
An array of values of type sec_attr_t that specifies the attribute instances to be updated. The size of the array is determined by num_to_write.
Output
- failure_index
In the event of an error, failure_index is a pointer to the element in the update_attrs array that caused the update to fail.

The sec_rgy_attr_test_and_update() routine updates an attribute only if the set of control attributes specified in test_attrs matches attributes that already exist for the object.
This update is an atomic operation: if any of the control attributes do not match existing attributes, none of the updates are performed; and if an update should be performed but the write cannot occur, for whatever reason, for any member of the update_attrs array, all updates are aborted. The attribute causing the update to fail is identified in failure_index. If the failure cannot be attributed to a given attribute, failure_index contains -1.
If you specify an attribute set for updating, the update applies to the set instance (the set itself), not to the members of the set. To update a member of an attribute set, supply the UUID of the set member.

The sec_rgy_attr_test_and_update() routine requires the test permission and the update permission set for each attribute type identified in the test_attrs and update_attrs arrays.

Related functions: sec_rgy_attr_update(), sec_rgy_attr_delete().
Marc Lehmann wrote:
> Hi!
>
> I ran into what I see as unsolvable problems that make epoll useless as a
> generic event mechanism.
>
> I recently switched to libevent as event loop, and found that my programs
> work fine when it is using select or poll, but work erratically or halt
> when using epoll.
>
> The reason as I found out is the peculiar behaviour of epoll over fork.
> It doesn't work as documented, and even if, it would make the use of
> third

I have no idea what exact problem you have. But if the child closes some
file descriptors that were 'cloned' at fork() time, this only decrements a
refcount, and definitely should not close it for the 'parent'. epoll in
this regard uses a generic kernel service (file descriptor sharing between
tasks).

I have some apps that are happily using epoll() and fork()/exec() and have
no problem at all. I usually use O_CLOEXEC so that all close() are done at
exec() time without having to do it in a loop. epoll continues to work as
expected in the parent process.

> fd sets. This would explain the behaviour above. Unfortunately (or
> fortunately?) this is not what happens: when the fds are being closed by
> exec or exit, the fds do not get removed from the epoll set.

at exec() (granted CLOEXEC is asserted) or exit() time, only the refcount
of each file is decremented. Only if their refcount becomes NULL, files
are then removed from the epoll set.

> This behaviour strikes me as extremely illogical. On the one hand, one
> cannot share the epoll fd between processes normally, but on fork,
> you can, even though it makes no sense (the child has a different fd
> "namespace" than the parent) and actually works on (then) unrelated fds in
> the other process.
>
> It also strikes as weird that the order of closing fds should make so much
> of a difference: if the epoll fd is closed first in the child, the other
> fds will survive in the parent, if it's closed last, they don't. Makes no
> sense to me.
>
> Now, the problem I see is not that it makes no sense to me - that's clearly
> my problem. The problem I see is that there is no way to avoid the
> associated problems except by patching all code that would ever use fork,
> even if it never has heard anything about epoll yet. This is extremely
> nonlocal action at a distance, as this affects a lot of code not even the
> author
> patterns? Shouldn't epoll do refcounting, as is commonly done under
> Unix? As the fd space is not shared between processes, why does epoll
> try? Shouldn't the epoll information be copied just like the fd table
> itself, memory, and other resources?

Too many questions here, showing lack of understanding.

> As it looks now, epoll looks useless except in the most controlled
> environments, as it doesn't duplicate state on fork as is done with the
> other fd-related resources (as opposed to the underlying files, which are
> properly shared).

epoll definitely is not useless. It is used on major and critical apps.
You certainly missed something.

Please provide some code to illustrate one exact problem you have.
Captures components, interfaces, interactions and constraints without regard to distribution
MapBuilder implements the MVC design pattern as a hierarchy of JavaScript Model, View and Controller objects. All objects have properties and methods. Each of the Model, View and Controller base classes provide functionality common across object types. Object properties are documented in the #ComponentRegister and the methods are documented in the #JSDocs pages.
Every object must have a unique ID attribute. Every object can be located using the global "config" objects array as in config.objects.objectId.
All model objects inherit from the ModelBase abstract class. This base class handles all document loading through GET and POST.
The ModelBase class implements the Listener methods for other objects to trigger. The Listener object provides setParam() and getParam() methods, and notifies Listeners when the parameter changes.
Instances of the abstract base class typically implement get and set methods specific for the model type's schema and namespace. A generic Model object type is provided for when no specialized get or set methods are required.
models array
Models also contain lists of child models, widgets and tools. The Model objects in those lists are as described in this section and widget and tool objects are described below. This means that a Model may have child models where it makes sense to do so, e.g. a WFS capabilities model may have child GetFeature, DescribeFeatureType and Transaction models.
The MapBuilder Config object is itself a Model object where the document is the config file. The Config model also implements some special bootstrapping methods for object initialization.
All widget classes inherit from the abstract WidgetBase class. This class handles common widget property handling and selecting the HTML node in the DOM, but provides no paint() method. It must be further derived by widget classes that implement a paint() method to handle the actual rendering of the view, which may vary widely (see Graphics Rendering).
WidgetBaseXSL widgets implement a paint() method where the widget output is the result of an XSL transformation on the model document. By default, the XSL filename corresponds to the object type name with a .xsl extension; for example, the XSL file that styles the <Legend> object is lib/widget/Legend.xsl. This default filename can be overridden by setting a <stylesheet> property on the widget object.
MapContainer widgets share a map container where 2D rendering of geo data is layered. The container provides an Extent tool that translates screen pixel/line coords into map projected X and Y.
Button widgets are a specialized widget type where the output acts as buttons.
Most tool classes inherit from the ToolBase class. Tools tend to be much more varied in their purpose, so there is less commonality among them. This class must be further derived by tool classes that implement the required controller methods.

Some of the tools available are listed here. All tools "control" the model in some way, either by sending events to the model for listeners to pick up on, or by manipulating the model's XML document.
An exception among tools is the Extent tool, which handles the affine transformation from screen pixel and line coordinates to map-projected X and Y values. This tool doesn't get included in the config file because it is always required for models that have a MapContainer widget.
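As a rough illustration of what such an affine transformation involves, here is a hypothetical sketch in JavaScript; the constructor shape and property names (bbox, width, height, res) are assumptions for this example, not MapBuilder's actual Extent API:

```javascript
// Hypothetical Extent sketch: maps container pixel/line coordinates
// to map-projected X/Y, given a bounding box and container size.
function Extent(bbox, width, height) {
  this.bbox = bbox;      // [minX, minY, maxX, maxY] in map units
  this.width = width;    // container width in pixels
  this.height = height;  // container height in pixels
  // Map units per pixel in each axis
  this.res = [(bbox[2] - bbox[0]) / width, (bbox[3] - bbox[1]) / height];
}

Extent.prototype.getXY = function (pl) {
  // Pixel origin is top-left; map origin is bottom-left, hence the Y flip
  return [this.bbox[0] + pl[0] * this.res[0],
          this.bbox[3] - pl[1] * this.res[1]];
};
```

The reverse direction (map X/Y back to pixel/line) applies the same resolution factors inversely.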
Models inherit from the Listener class. (add graphic)
This is why objRef references are passed in listener functions.
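A minimal sketch of this listener mechanism might look like the following; it is a simplified, hypothetical illustration in which the addListener signature and internals are assumptions, not MapBuilder's actual code (only the setParam/getParam method names come from the text):

```javascript
// Hypothetical Listener sketch: stores parameter values and notifies
// registered listener functions when a parameter changes.
function Listener() {
  this.values = {};     // current parameter values
  this.listeners = {};  // param name -> list of {fn, target} entries
}

Listener.prototype.addListener = function (param, listener, target) {
  (this.listeners[param] = this.listeners[param] || [])
    .push({ fn: listener, target: target });
};

Listener.prototype.setParam = function (param, value) {
  this.values[param] = value;
  var subs = this.listeners[param] || [];
  // Each callback receives an object reference (objRef) plus the new value,
  // mirroring how listener functions are passed objRef references.
  for (var i = 0; i < subs.length; i++) subs[i].fn(subs[i].target, value);
};

Listener.prototype.getParam = function (param) {
  return this.values[param];
};
```

In this scheme, a widget would typically register its paint method as a listener on a model parameter, so that a setParam call on the model triggers a repaint of the view.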
JavaDoc-style descriptions of MapBuilder object methods. This documentation is automatically derived from structured comments in the code, which is why it is very important that the code is properly documented.
(add outline of JSDoc comment structure)
All objects are (or should!) be documented in the configuration file schema which can be found in the source tree at mapbuilder/lib/schemas/config.xsd. This schema is used for the Component Register and describes all object properties.
Configuration documents should be able to validate against this schema, and all official demos in v1.0 do validate. However since custom XSL stylesheet parameters can be set in the configuration file and not necessarily be included in the schema, custom configuration files may not validate. The order of configuration properties also impacts validation. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=48828 | CC-MAIN-2014-41 | refinedweb | 709 | 56.76 |
Header and Footer in flash - CarolWasp, Jan 16, 2013 11:46 AM
Ok. This was supposed to be a question about a header and footer in Flash. I intended to attach my fla to make it possible to understand what I was saying. As it turns out I can't figure out how to attach files. Is it even possible? That's my first question.
Here's the actual question, although it will be very vague now: I have a basic header and footer in Flash that stretch to fill the screen horizontally. It works when I test the file inside Flash (Control>Test movie). But it does not work when I go "Publish Preview" and use the browser. Why is that and how can I sort it out? Any ideas?
1. Re: Header and Footer in flash - Ned Murphy, Jan 16, 2013 12:20 PM (in response to CarolWasp)
No, you cannot attach files in these forums. Also, few people will download files for a variety of good reasons.
If you are previewing in a browser, then it is likely that an html page is being generated for the preview, and that html page is probably sizing the flash content per the stage dimensions the pieces were designed with.
What you probably need to do is find a tutorial for creating full-screen content, though you could probably find a solution if you search this forum as well.
2. Re: Header and Footer in flash - CarolWasp, Jan 16, 2013 12:34 PM (in response to Ned Murphy)
In that case, is there a way to add simple code in the head of the html that will force the document to play by the rules of the swf? The same way you can alter say "overflow" and "outline".
3. Re: Header and Footer in flash - CarolWasp, Jan 16, 2013 1:45 PM (in response to Ned Murphy)
Actually I figured out the follow up question on my own. All I had to do was change a publish setting, namely dimensions to percent. The header and footer is scalable in the browser now. But my problem is how to center the content in between. I know how to do it with movieclips. But I can't do that this time. The content is diverse and will lose functions if I turn it into a movieclip. Is there some other way? Something I can do in Flash? Or something I can add in the html-file?
4. Re: Header and Footer in flash - moccamaximum, Jan 17, 2013 3:35 AM (in response to CarolWasp)
5. Re: Header and Footer in flash - CarolWasp, Jan 17, 2013 1:07 PM (in response to moccamaximum)
Thanks, I have followed that particular tutorial before, it tells me how to center a movieclip. Works fine as long as the mc contains a mere image or a textfield. In my case I've got a bunch of buttons (and some other stuff) that will obviously cease to work as buttons once I turn them into a movieclip.
6. Re: Header and Footer in flash - moccamaximum, Jan 17, 2013 3:09 PM (in response to CarolWasp)
In my case I've got a bunch of buttons (and some other stuff) that will obviously cease to work as buttons once I turn them into a movieclip.
Nobody said you had to turn your buttons into MovieClips.
Buttons will just work fine inside a MovieClip.
Nested inside the Blue MovieClip are two SimpleButtons that work as expected, they have not been converted to MovieClips, all their up, over and whatnot states are intact.
import flash.events.MouseEvent;
mc.mouseEnabled = false;
mc.button1.addEventListener(MouseEvent.CLICK, clickHandler);
mc.button2.addEventListener(MouseEvent.CLICK, clickHandler);
function clickHandler(e:MouseEvent):void{
trace(e.currentTarget.name);
}
7. Re: Header and Footer in flash - CarolWasp, Jan 18, 2013 10:46 AM (in response to moccamaximum)
Thank you. That worked for me. I "nested" my buttons inside a movieclip which made it possible to center them. I'm close to home but I've got a remaining obstacle. In the main timeline I'm loading an external swf:
var ExitLoader:Loader = new Loader();
addChild(ExitLoader);
ExitLoader.x=154;
ExitLoader.y=127;
var ExitURL:URLRequest = new URLRequest("Exit.swf");
ExitLoader.load(ExitURL);
I want this swf to be centered along with the buttons. But I can't figure out how to nest it.
It does not seem to work the same way. Besides I wonder if declaring position of x and y will mess it up.
8. Re: Header and Footer in flash - moccamaximum, Jan 19, 2013 3:48 AM (in response to CarolWasp)
Simply add the Loader to a newly created MovieClip:
var mc:MovieClip = new MovieClip();
addChild( mc);
var ExitLoader:Loader = new Loader();
//its important to wait for the swf to be fully loaded
ExitLoader.contentLoaderInfo.addEventListener(Event.COMPLETE , completeHandler );
ExitLoader.load(new URLRequest("Exit.swf"));
function completeHandler( event:Event ):void
{
mc.addChild( ExitLoader);
}
once inside the mc, you can manipulate the Exit.swf x/y-Positions with
mc.x=154;
mc.y=127;
9. Re: Header and Footer in flash - CarolWasp, Jan 19, 2013 7:21 AM (in response to moccamaximum)
I tried something similar earlier but I end up getting a faulty outcome. The swf is at first centered with the other stuff in the mc (buttons etc) but as soon as I drag and resize the browser window the mc disappears. Btw my mc is named "content". Probably a bad name since it tends to turn blue. I don't know.
var content:MovieClip;
addChild(content);
// further down
content.x = sw/2 - content.width/2;
}
var ExitLoader:Loader = new Loader();
ExitLoader.x=154;
ExitLoader.y=127;
ExitLoader.contentLoaderInfo.addEventListener(Event.COMPLETE,completeHandler);
ExitLoader.load(new URLRequest("Exit.swf"));
function completeHandler(event:Event):void
{
content.addChild(ExitLoader);
}
I could post the code in it's entirety but perhaps it would be boring to scan through.
10. Re: Header and Footer in flash - moccamaximum, Jan 19, 2013 7:55 AM (in response to CarolWasp)
This code should throw an error;
Enable debugging and look at the Output window;
MovieClip needs the new Constructor:
var content:MovieClip= new MovieClip();
addChild(content);
Btw my mc is named "content". Probably a bad name since it tends to turn blue. I don't know.
You are right, it's a bad name; it works in this context, but it makes the code less readable.
It's generally a good idea to avoid any reserved keywords in any programming language,
so it's definitely better to name that MovieClip _content or content_.
It's probably a good thing to make an empty fla and compile this code to see what really happens when you place an empty MovieClip on the stage:
var _content:MovieClip = new MovieClip();
addChild(_content);
trace(_content.x);
trace(_content.y);
trace(_content.width);
trace(_content.height);
11. Re: Header and Footer in flash - CarolWasp, Jan 20, 2013 10:15 AM (in response to moccamaximum)
It worked when I created a new mc for the swf, different from the one with the buttons. A bit inconvenient but I can live with it. I hoped that was it. But I stumbled on the next scene (I know scenes are frowned upon) when I tried to repeat the procedure.
Part of the problem is that I get tangled up in renaming instances with slight modifications. For instance "var my_content:MovieClip;" becomes "var my_content2:MovieClip;". And it gets worse. I'm sure there's a way around this. Can I wash out the functions and variables etc from the previous scene so I can start afresh? Ultimately I would like to use the same code again with the same instance names.
12. Re: Header and Footer in flash - moccamaximum, Jan 20, 2013 11:36 PM (in response to CarolWasp)
You could put all the content of your Scenes in one MovieClip.
So you would have a Scene1, Scene2 Movieclip etc. and then you would have to declare all the vars and functions in it private, so you could reuse your code.
Or you could split up your scenes in different flas and load them via a Main.as.
But I would strongly advise against such a thing.
Here is a good read that explains how to do navigation properly in AS3.
After that you will maybe get new Ideas how to restructure your work, to make it easier to maintain.
13. Re: Header and Footer in flashCarolWasp Jan 21, 2013 12:53 PM (in response to moccamaximum)
I'm convinced there are better ways to do navigation, but this close to the end I was hoping to just grind my way forward. Somehow I managed to get the second scene working (centered that is). But when I resize the window in debug mode an error is pointed out in the first scene. "Error #1009: Cannot access a property or method of a null object reference". The snippet goes like this:
stage.scaleMode = StageScaleMode.NO_SCALE;
stage.align = StageAlign.TOP_LEFT;
stage.addEventListener(Event.RESIZE, resizeStage);
var header:MovieClip;
resizeStage(null);
function resizeStage(e:Event):void
{
var sw:Number = stage.stageWidth;
header.width = sw;
And it's the last line here that's supposed to cause the error. Do you see anything apparently wrong? I tried "var header:MovieClip; = new MovieClip();" but that didn't work at all.
14. Re: Header and Footer in flash - moccamaximum, Jan 22, 2013 12:34 AM (in response to CarolWasp)
15. Re: Header and Footer in flash - CarolWasp, Jan 22, 2013 1:55 PM (in response to moccamaximum)
Actually the additional ";" was a typo in the post only. The header doesn't work typing it the right way either. It only works when I keep it "var header:MovieClip;".
If I keep it that short and ignore the error message I find that the fluid layout actually works in browsers (some problem with the footer put aside). But, and here's another thing, it looks quite awkward. When resizing the window the content is stretched out before it snaps into place. Pretty much like this demo I found:
Ugly I say. As compared to something like this:
What's the secret? Any keywords? What should I look for?
16. Re: Header and Footer in flash - moccamaximum, Jan 23, 2013 2:37 AM (in response to CarolWasp)
Define ugly.
Does the content look pixelated, distorted or blurry after you resize?
The demo you mention looks perfectly fine for me.
If you rightclick on the demo, check if the quality of your flashplayer is set to high.
The local Flash Player settings will always "override" your publishing settings. Even if you publish your content to 100% and deactivate compressed: if the user chooses low quality it will always look ugly.
As for the Reuters site you showed as a superior example: it uses no Flash to resize its Layout but purely css/javascript.
What's the secret? Any keywords? What should I look for?
You are probably looking for Pixel Perfect Fluid Layouts
Other tips: if you have any bitmaps on your stage that have bitmapSmoothing set to true, any scaling or rotation (as you might do during resizing) will require you to set the property again. The bitmap smoothing property gets "Lost in Transformation", so to speak.
17. Re: Header and Footer in flash - CarolWasp, Jan 24, 2013 11:17 AM (in response to moccamaximum)
Thanks for all the help. Based on the condition that scaling and resizing in Flash look distorted I decided to use CSS instead. Turned out to be much easier and faster to center the flash content that way as well as creating and stretching the header and footer. And it scales without distortion. | https://forums.adobe.com/thread/1136417 | CC-MAIN-2018-34 | refinedweb | 1,940 | 64.71 |
castalla Wrote:You can get a free 7 day trial for unblockus - there are easy to follow install guides. It's a so-called' smart dns service. Last time I tried it, it worked okay.
castalla Wrote:It's not free after 7 days - but not expensive either!
castalla Wrote:My advice - switch your atom to windows! Helluva lot easier to administer. I have xbmc on an atom PC in win 7 32.
Maybe there are linux instructions for unblockus ... just browse to that url above and it should detect your system and give you the setup instructions.
castalla Wrote:A strange issue has arisen this evening when Live streams are set as Favourites.
Suddenly, selecting a BBC live stream in the favourites list resulted in xbmc locking up.
After quite a bit of investigating (uninstalling various programs, and reverting to a pre-Eden release), I believe the issue lies with an iPlayer setting:
Display dialog when Play pressed - on/off
Switching this to 'off' allows Favourites to work normally
I have no idea why this problem arose out-of-the-blue as I have been using BBC favourites for months without any problems.
I experienced the same problem on 2 different PCs.
Anybody else observed this issue?
xdoktor Wrote:hi all!
Wanted to ask if there is build of iplayer xbmc plugin that also includes streaming of other channels eg. C4, ITV etc...like the iplayer website can?
torch1 Wrote:Hmm, even stranger. If I select a programme it plays for a split seconds then stops. If I go out of the menus and long click on 'iplayer' to bring up the menu that including settings, I have the option of 'now playing'. If I select that I can watch the programme I just kicked out of?
Quote:00:42:15 T:4540 ERROR: C:\Users\House\AppData\Roaming\XBMC\addons\plugin.video.iplayer\lib\iplayer2.py:8: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import md5
00:42:16 T:4540 ERROR: C:\Users\House\AppData\Roaming\XBMC\addons\plugin.video.iplayer\lib\iplayer_search.py:5: DeprecationWarning: the sets module is deprecated
from sets import Set | http://forum.kodi.tv/showthread.php?tid=51322&pid=1034950 | CC-MAIN-2016-40 | refinedweb | 360 | 67.25 |
Base types, Collections, Diagnostics, IO, RegEx…
We often get asked about the capabilities of the .NET compression classes in System.IO.Compression. I'd like to clarify what they currently support and mention some partial workarounds for formats that aren't supported.
The .NET compression libraries support at the core only one type of compression format, which is Deflate. The Deflate format is specified by the RFC 1951 specification and a straightforward implementation of that is in our DeflateStream class.
Other compression formats, such as zlib, gzip, and zip, use deflate as a possible compression method, but may also use other compression methods. In the case that they use deflate, you can think of these formats as a wrapper around deflate: they take bytes generated by deflate compression and tack on header info and checksums.
Our GZipStream class does exactly that – it uses DeflateStream and then adds header info and checksums specific to the gzip format. The gzip format is specified in RFC 1952.
So, out of the box, we support deflate and gzip formats.
Until we provide support for the other formats, which we plan to do soon, there are partial workarounds that may help you out in some situations, but they're definitely not a complete solution.
The zlib format is specified by RFC 1950. Zlib also uses deflate, plus 2 or 6 header bytes, and a 4 byte checksum at the end. The first 2 bytes indicate the compression method and flags. If the dictionary flag is set, then 4 additional bytes will follow (which explains why the header will be 2 or 6 bytes). Note that in the wild, preset dictionaries aren't very common (and our classes don't support them).
This diagram from RFC 1950 shows the zlib structure:
0 1
+---+---+
|CMF|FLG| (more-->)
+---+---+
(if FLG.FDICT set)
0 1 2 3
+---+---+---+---+
| DICTID | (more-->)
+---+---+---+---+
+=====================+---+---+---+---+
|...compressed data...| ADLER32 |
+=====================+---+---+---+---+
This means that to read a zlib file using only the .NET libraries, you can often just chop off the first two bytes and 4 end bytes and use DeflateStream on the rest of the stream as normal. (It would be better to check the dictionary bit and not attempt to read anything in that case).
Going in the opposite direction isn't as trivial, so I'm not really suggesting to generate zlib files this way. However, a couple people have asked in the past so I'll sketch an overview of that.
To start, you need to know which bytes to add at the beginning. With our deflate implementation, those bytes are 0x58 and 0x85. If you're curious about how this is derived from RFC 1950, see section 2.2 "Data format" and note that we use a window size of 8K and the value of FLEVEL should be 2 (default algorithm).
After that, you need to add the Adler-32 checksum at the end. The checksum will depend on the payload that you're compressing so you need to calculate it programmatically. Because of this, the easiest way to generate the checksum is to subclass DeflateStream and override the Write/BeginWrite methods to update the checksum. Steven Toub's NamedGZipStream article (mentioned at the end) shows an example of creating such a subclass for generating named gzip files.
The big format you're probably thinking about is zip. Currently the .NET libraries don't support zip but the J# class libraries do. The following article describes using these libraries with a C# app.
But if you don't want to rely on the J# class libraries, we'll need to provide a better solution.
Now that you're familiar with some compression specifications, let's focus on zip a little more. A zip specification is here:
Notice that zip also allows deflate. Again the same principle applies – there are deflate bytes packaged in a header and footer. This may tempt you into writing a zip reader/writer based on DeflateStream (as described above for zlib), but there are two key differences that make zip more complicated.
First, the zip header contains a lot more information than the zlib header. To read a zip file, you'd definitely have to parse the header to figure out how many bytes to skip over because the header contains variable length items such as a file name.
Second, zip tools actively use different compression methods. For example, use Windows compression tool on a very small text file (with just a few words in it) and then a bigger file, say around 20 KB. Chances are it used no compression (yes, that's an option) for the small file and deflate for the 20 KB file.
Because different compression methods are used, an extension of the zlib technique described above may not help you much if you want to use the .NET libraries to read zip files. You'd definitely have to read the compression method to determine how to proceed. If it's deflate, then chop off the header and proceed as above. If it's no compression, chop off the header and read the bytes as a normal stream of bytes. If it's something else, then the .NET libraries have no built-in support for it.
Steven Toub observed in an MSDN article that WinZip can't handle our GZipStream because it requires filename info. He's created a NamedGZipStream implementation that generates files readable by WinZip
We'd like to address the shortcomings of our compression library in future releases. The following items are our highest priority compression requests:
Are there any others you'd like us to address?
Support the new format ZIP files that allow >4GB (Both the new WinZip & PKWare formats) and AES Encryption.
Support GZIP files >4GB (This would be a simple bug fix). There should be no limit on how big a gzip file can be.
Other formats :
Bzip2 format - patent free, better compression than zip & gzip.
RAR Format
Please support LZMA which is the algorithm used in the 7z or 7-Zip format. Its faster than zip with higher compression ratios.
I second the 4GB limit problem. I bugged it at over a year ago...
Stronger GZip compression would be nice, too...
If you can do both of these, I won't have to use the open-source SharpZipLib anymore :)
BZip2 support would be welcome.
Can't believe you didn't mention the horrible 4GB limitation! This really needs fixing because at present solutions are getting developed in .NET that explode without warning in production when files get too big (I know from experience...)
I would love to see the RAR format implemented.
Fixing the limit and RAR support would be brilliant.
Are the team going to ensure that classes in this namespace can be used under partial trust ASP.NET?
Kev
Compressing a directory is an extremely common use case. I would love to be able to do this in a line or two of code. Even if it is not appropriate for all compression formats I believe the tradeoff is worth it.
What about the self-extracting feature?
Why point people to Java when they can use #ZipLib?
Thanks everyone, this is great feedback. Some notes, and first a very important clarification:
Adi - are you trying to get my blogging license revoked? :) I can see it on slashdot now: "CLR developer encourages users to switch to Java."
Very important to clarify -- I'm encouraging people to check out the technique shown in the MSDN article using the J# libraries, which are a Microsoft product :), and can be used from C# apps...also available courtesy of Microsoft (and Anders et al)...which hopefully they're editing in Visual Studio... You get the point.
About 4GB support: yes, this has definitely been in our plans too -- sorry I left it off the list. But it's great to know how high this is in relative priority!
Kev - just to back up a second, the compression classes are transparent; security issues are pushed to file open and creation. E.g. if you can open a file then you can compress or decompress, because the compression classes only deal with the file as a stream. But the file open/creation part is the part that could block an ASP.NET app. We haven't planned any changes to this security model so far, but if there are any particular scenarios you're interested in, let us know.
Brent - yes, we'd include the ability to compress directories along with zip support.
Formats - While we can't say for certain exactly which formats we'll include in the near timeframe, these replies have given us an excellent sense of which ones people want to work with.
Fastest compression on most types of data: LZO/NRV (oberhumer.com). QuickLZ is maybe even faster, but it's not as "proven" as LZO/NRV.
These are open source, but there are commercial licenses available, and I'm sure the authors wouldn't mind their compression algorithms being included in the .NET BCL. :)
The 4GB issue is definitely top priority.
I'd like the support for using Stream to read/seek over ISO9660/UDF (cd images), RAR. Writing wouldn't matter so much because:
Currently you need to install all kinds of drivers and stuff to deal with filesystem images and as seen from Month of Apple Bugs that's one area with a lot of potential for escalation exploits. With ability to easily deal with the images through .net and powershell in fully managed way you would have both security and ability to easily do processing over images in remote servers. | http://blogs.msdn.com/b/bclteam/archive/2007/05/16/system-io-compression-capabilities-kim-hamilton.aspx | CC-MAIN-2014-23 | refinedweb | 1,607 | 72.46 |
Testing (and more specifically, unit testing) is meant to be carried out by the developer as the project is being developed. In this article, we will see how to implement testing tools to perform proper unit testing for your application classes and components.
This tutorial is an excerpt taken from the book Learning Angular (Second Edition) written by Christoffer Noring, Pablo Deeleman.
When venturing into unit testing in Angular, it’s important to know what major parts it consists of. In Angular these are:
- Jasmine, the testing framework
- Angular testing utilities
- Karma, a test runner for running unit tests, among other things
- Protractor, Angular’s framework for E2E testing
Configuration and setting up of Angular CLI
In terms of configuration, when using the Angular CLI, you don’t have to do anything to make it work. You can, as soon as you scaffold a project, run your first test and it will work. The Angular CLI is using Karma as the test runner. What we need to know about Karma is that it uses a karma.conf.js file, a configuration file, in which a lot of things are specified, such as:
- The various plugins that enhance your test runner.
- Where to find the tests to run? It should be said that there is usually a files property in this file specifying where to find the application and the tests. For the Angular CLI, however, this specification is found in another file called src/tscconfig-spec.json.
- Setup of your selected coverage tool, a tool that measures to what degree your tests cover the production code.
- Reporters report every executed test in a console window, to a browser, or through some other means.
- Browsers run your tests in: for example, Chrome or PhantomJS.
Using the Angular CLI, you most likely won’t need to change or edit this file yourself. It is good to know that it exists and what it does for you.
Angular testing utilities
The Angular testing utilities help to create a testing environment that makes writing tests for your various constructs really easy. It consists of the TestBed class and various helper functions, found under the @angular/core/testing namespace. Let’s have a look at what these are and how they can help us to test various constructs. We will shortly introduce the most commonly used concepts so that you are familiar with them as we present them more deeply further on:
- The TestBed class is the most important concept and creates its own testing module. In reality, when you test out a construct to detach it from the module it resides in and reattach it to the testing module created by the TestBed. The TestBed class has a configureTestModule() helper method that we use to set up the test module as needed. The TestBed can also instantiate components.
- ComponentFixture is a class wrapping the component instance. This means that it has some functionality on it and it has a member that is the component instance itself.
- The DebugElement, much like the ComponentFixture, acts as a wrapper. It, however, wraps the DOM element and not the component instance. It’s a bit more than that though, as it has an injector on it that allows us to access the services that have been injected into a component.
This was a brief overview of our testing environment, the frameworks, and libraries used. Now let’s discuss component testing.
Introduction to component testing
A usual method of operation for doing anything Angular is to use the Angular CLI. Working with tests is no different. The Angular CLI lets us create tests, debug them, and run them; it also gives us an understanding of how well our tests cover the code and its many scenarios.
Component testing with dependencies
We have learned a lot already, but let’s face it, no component that we build will be as simple as the one we wrote in the preceding section. There will almost certainly be at least one dependency, looking like this:
@Component({}) export class ExampleComponent { constructor(dependency:Dependency) {} }
We have different ways of dealing with testing such a situation. One thing is clear though: if we are testing the component, then we should not test the service as well. This means that when we set up such a test, the dependency should not be the real thing. There are different ways of dealing with that when it comes to unit testing; no solution is strictly better than the other:
- Using a stub means that we tell the dependency injector to inject a stub that we provide, instead of the real thing.
- Injecting the real thing, but attaching a spy, to the method that we call in our component.
Regardless of the approach, we ensure that the test is not performing a side effect such as talking to a filesystem or attempting to communicate via HTTP; we are, using this approach, isolated.
Using a stub to replace the dependency
Using a stub means that we completely replace what was there before. It is as simple to do as instructing the TestBed in the following way:
TestBed.configureTestingModule({ declarations: [ExampleComponent] providers: [{ provide: DependencyService, useClass: DependencyServiceStub }] });
We define a providers array like we do with the NgModule, and we give it a list item that points out the definition we intend to replace and we give it the replacement instead; that is our stub.
Let’s now build our DependencyStub to look like this:
class DependencyServiceStub { getData() { return 'stub'; } }
Just like with an @NgModule, we are able to override the definition of our dependency with our own stub. Imagine our component looks like the following:
import { Component } from '@angular/core'; import { DependencyService } from "./dependency.service"; @Component({ selector: 'example', template: `{{ title }}` }) export class ExampleComponent { title: string; constructor(private dependency: DependencyService) { this.title = this.dependency.getData(); } }
Here we pass an instance of the dependency in the constructor. With our testing module correctly set up, with our stub, we can now write a test that looks like this:
it(`should have as title 'stub'`, async(() => { const fixture = TestBed.createComponent(AppComponent); const app = fixture.debugElement.componentInstance; expect(app.title).toEqual('stub'); }));
The test looks normal, but at the point when the dependency would be called in the component code, our stub takes its place and responds instead. Our dependency should be overridden, and as you can see, the expect(app.title).toEqual('stub') assumes the stub will answer, which it does.
Spying on the dependency method
The previously-mentioned approach, using a stub, is not the only way to isolate ourselves in a unit test. We don’t have to replace the entire dependency, only the parts that our component is using. Replacing certain parts means that we point out specific methods on the dependency and assign a spy to them. A spy is an interesting construct; it has the ability to answer what you want it to answer, but you can also see how many times it is being called and with what argument/s, so a spy gives you a lot more information about what is going on. Let’s have a look at how we would set a spy up:
beforeEach(() => { TestBed.configureTestingModule({ declarations: [ExampleComponent], providers: [DependencyService] });
dependency = TestBed.get(DependencyService); spy = spyOn( dependency,'getData'); fixture = TestBed.createComponent(ExampleComponent); })
Now as you can see, the actual dependency is injected into the component. After that, we grab a reference to the component, our fixture variable. This is followed by us using the TestBed.get('Dependency') to get hold of the dependency inside of the component. At this point, we attach a spy to its getData() method through the spyOn( dependency,'getData') call.
This is not enough, however; we have yet to instruct the spy what to respond with when being called. Let us do just that:
spyOn(dependency,'getData').and.returnValue('spy value');
We can now write our test as usual:
it('test our spy dependency', () => { var component = fixture.debugElement.componentInstance; expect(component.title).toBe('spy value'); });
This works as expected, and our spy responds as it should. Remember how we said that spies were capable of more than just responding with a value, that you could also check whether they were invoked and with what? To showcase this, we need to improve our tests a little bit and check for this extended functionality, like so:
it('test our spy dependency', () => { var component = fixture.debugElement.componentInstance; expect(spy.calls.any()).toBeTruthy(); })
You can also check for the number of times it was called, with spy.callCount, or whether it was called with some specific arguments: spy.mostRecentCalls.args or spy.toHaveBeenCalledWith('arg1', 'arg2'). Remember if you use a spy, make sure it pays for itself by you needing to do checks like these; otherwise, you might as well use a stub.
Async services
Very few services are nice and well-behaved, in the sense that they are synchronous. A lot of the time, your service will be asynchronous and the return from it is most likely an observable or a promise. If you are using RxJS with the Http service or HttpClient, it will be observable, but if using the fetch API, it will be a promise. These are two good options for dealing with HTTP, but the Angular team added the RxJS library to Angular to make your life as a developer easier. Ultimately it’s up to you, but we recommend going with RxJS.
Angular has two constructs ready to tackle the asynchronous scenario when testing:
- async() and whenStable(): This code ensures that any promises are immediately resolved; it can look more synchronous though
- fakeAsync() and tick(): This code does what the async does but it looks more synchronous when used
Let’s describe the async() and whenStable() approaches. Our service has now grown up and is doing something asynchronous when we call it like a timeout or an HTTP call. Regardless of which, the answer doesn’t reach us straightaway. By using async() in combination with whenStable(), we can, however, ensure that any promises are immediately resolved. Imagine our service now looks like this:
export class AsyncDependencyService { getData(): Promise
{ return new Promise((resolve, reject) => { setTimeout(() => { resolve('data') }, 3000); }) } }
We need to change our spy setup to return a promise instead of returning a static string, like so:
spy = spyOn(dependency,'getData') .and.returnValue(Promise.resolve('spy data'));
We do need to change inside of our component, like so:
import { Component, OnInit } from '@angular/core'; import { AsyncDependencyService } from "./async.dependency.service";
@Component({ selector: 'async-example', template: `{{ title }}` }) export class AsyncExampleComponent { title: string; constructor(private service: AsyncDependencyService) { this.service.getData().then(data => this.title = data); } }
At this point, it’s time to update our tests. We need to do two more things. We need to tell our test method to use the async() function, like so:
it('async test', async() => { // the test body })
We also need to call fixture.whenStable() to make sure that the promise will have had ample time to resolve, like so:
import { TestBed } from "@angular/core/testing"; import { AsyncExampleComponent } from "./async.example.component"; import { AsyncDependencyService } from "./async.dependency.service"; describe('test an component with an async service', () => { let fixture; beforeEach(() => { TestBed.configureTestingModule({ declarations: [AsyncExampleComponent], providers: [AsyncDependencyService] });
fixture = TestBed.createComponent(AsyncExampleComponent); }); it('should contain async data', async () => { const component = fixture.componentInstance; fixture.whenStable.then(() => { fixture.detectChanges(); expect(component.title).toBe('async data'); }); }); });
This version of doing it works as it should, but feels a bit clunky. There is another approach using fakeAsync() and tick(). Essentially, fakeAsync() replaces the async() call and we get rid of whenStable(). The big benefit, however, is that we no longer need to place our assertion statements inside of the promise’s then() callback. This gives us synchronous-looking code. Back to fakeAsync(), we need to make a call to tick(), which can only be called within a fakeAsync() call, like so:
it('async test', fakeAsync() => { let component = fixture.componentInstance; fixture.detectChanges(); fixture.tick(); expect(component.title).toBe('spy data'); });
As you can see, this looks a lot cleaner; which version you want to use for async testing is up to you.
Testing pipes
A pipe is basically a class that implements the PipeTransform interface, thus exposing a transform() method that is usually synchronous. Pipes are therefore very easy to test. We will begin by testing a simple pipe, creating, as we mentioned, a test spec right next to its code unit file. The code is as follows:
import { Pipe, PipeTransform } from '@angular/core'; @Pipe({ name: 'formattedpipe' }) export class FormattedPipe implements PipeTransform { transform(value: any, ...args: any[]): any { return "banana" + value; } }
Our code is very simple; we take a value and add banana to it. Writing a test for it is equally simple. The only thing we need to do is to import the pipe and verify two things:
- That it has a transform method
- That it produces the expected results
The following code writes a test for each of the bullet points listed earlier:
import FormattedTimePipe from './formatted-time.pipe'; import { TestBed } from '@angular/core/testing'; describe('A formatted time pipe' , () => { let fixture;
beforeEach(() => { fixture = new FormattedTimePipe(); })
// Specs with assertions it('should expose a transform() method', () => { expect(typeof formattedTimePipe.transform).toEqual('function'); });
it('should produce expected result', () => { expect(fixture.transform( 'val' )).toBe('bananaval'); }) });
In our beforeEach() method, we set up the fixture by instantiating the pipe class. In the first test, we ensure that the transform() method exists. This is followed by our second test that asserts that the transform() method produces the expected result.
We saw how to code powerful tests for our components and pipes. If you found this post useful, be sure to check out the book Learning Angular (Second Edition) to learn about mocking HTTP responses and unit testing for routes, input, and output, directives, etc.
Read Next
Getting started with Angular CLI and build your first Angular Component
Building Components Using Angular
Why switch to Angular for web development – Interview with Minko Gechev | https://hub.packtpub.com/unit-testing-angular-components-and-classes/ | CC-MAIN-2019-26 | refinedweb | 2,323 | 52.6 |
- A Minimal Snapper Example
- File Format
- Nodes
- SymGraph
- Handles
- Components
- Transforms
- Bounds
- Geometry
- Rendering and Appearance
- Picking
- Visibility
- Connectors
- Grouping
- Props
- Files and External References
- Caching
- Updating Graphics
- Parameters and Programs
- Iteration
- Filtering
- Dump
- Debug
- Vocabulary
CmSym is a scene graph API and file format that allows you to create, edit, read and write graphics entities. Most often it's simply referred to as "sym".
A Minimal Snapper Example
This is a minimal example of getting CmSym graphics from a snapper:
private class ExSnapper extends Snapper {
    public SymNode buildSym() {
        SymNode root();
        root << reps3D(#medium);
        root << SymGMaterial(plainColorGMaterial3D(255, 255, 0));
        root << SymBox();
        return root;
    }
}
File Format
The CmSym file format (.cmsym), built on top of Dex, supports streaming of parametric graphics (2D and 3D) and should be considered the successor to Cm3D.
Meshes are compressed using OpenCTM, which can reduce the file size by a factor of 2-5x compared to Cm3D. Other data is compressed using lzma (equivalent to the 7-zip default). Always remember that the achievable compression depends on the type of data included in the sym.
CmSym files are versioned using semantic versioning which will give a clear meaning for when to change each number in the major.minor.patch versioning scheme:
major: Backwards incompatible API changes
minor: New backwards compatible features
patch: Backwards compatible bug fixes
This will allow readers to easily check if it can load a newer file version or not by simply comparing the major part. However, no such guarantee can be made before CmSym file version 2.0.0.
Major changes will only be made during the major CET releases.
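A reader can therefore guard on the major part before attempting to load a file. Here is a minimal sketch of such a check; the constant and function names are illustrative assumptions, not part of the cmsym API:

supportedMajor = 1;

public bool canLoadVersion(int major, int minor, int patch) {
    // Minor and patch bumps are backwards compatible,
    // so only the major part needs to be checked.
    return major <= supportedMajor;
}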
See the CmSym file structure reference
Nodes
SymNode is the main class of the cmsym package. Nodes are used to create cmsym-based scene graphs, also referred to as sym graphs. The graphs are DAGs (directed acyclic graphs), which cannot contain cycles.
Here's an example of how to construct a simple sym graph:
SymNode root("root");
SymNode a("a");
root.addChild(a);
SymNode b("b");
root.addChild(b);
This creates a node "root" with two children, "a" and "b". Each node has an identifier which must be unique among its siblings, so you cannot, for example, have two children named "a". Ids allow you to access different parts of a sym graph easily:
SymNode a = root.child("a");
If you want a unique id to reference the node you can use the "rid" field, which is unique during runtime. Note that you shouldn't store SymNode references on your snapper (or anywhere else, really) because they're not guaranteed to survive edits.
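For example (a sketch; the printed value depends on the runtime):

SymNode a("a");
// "a" is the id, which is unique only among siblings.
// rid is unique during runtime and can be stored as a reference key
// instead of the node itself.
pln(#a.rid);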
Nodes can exist several times in a sym graph; they're then referred to as shared. For example:
SymNode x("x");
SymNode root("root");
SymNode a("a");
a.addChild(x);
root.addChild(a);
SymNode b("b");
b.addChild(x);
root.addChild(b);
To see the structure we can use the dump method:
root.dump();
Which shows that "x" is shared by the "a" and "b" nodes:
SymNode(root)
  SymNode(a)
    SymNode(x), shared=2
  SymNode(b)
    SymNode(x), shared=2
You can access nodes deeper in the structure with a path, where each node below the current one is separated by a dot:
SymNode x = root.node("a.x");
For convenience the sym graph can be constructed with a special syntax. Here's the same example with shared nodes, but with a syntax:
sym SymNode root("root") {
    SymNode x("x");
    symadd SymNode("a") {
        symadd x;
    };
    symadd SymNode("b") {
        symadd x;
    };
}
Here the "sym" prefix begins the sym syntax, and child nodes and components are added with the "symadd" prefix. You're not required to use the syntax, but it can make the hierarchy of some sym graphs easier to understand. In particular, the overall structure is more visible with the syntax.
SymGraph
SymGraph is the connection between the owning object, such as a Snapper, and the SymNode.
It gets automatically created and you should not store it as a field. You can always access it with the .sym method:
Snapper z = anySnapper(ExSnapper, place=true);
SymGraph sym = z.sym();
pln(sym.owner);
sym.owner=ExSnapper((id=3, orig))
If you need to create it yourself you can override the buildSymGraph method:
public SymGraph buildSymGraph() {
    SymGraph graph(buildSym(), this);
    graph.disableCacheAndOptimizedGfx();
    return graph;
}
Handles
A SymHandle specifies a specific node using a starting node as an anchor and a path to the target node.
Sometimes it might be easier to use a handle than a node. Consider this example, where a shared node is offset by a transform:
SymNode root("root");
SymNode x("x");
SymNode a("a");
a << SymTransform((1, 0, 0));
a << x;
SymNode b("b");
b << SymTransform((2, 0, 0));
b << x;
root << a << b;
or with syntax:
sym SymNode root("root") {
    SymNode x("x");
    symadd SymNode("a") {
        symadd SymTransform((1, 0, 0));
        symadd x;
    };
    symadd SymNode("b") {
        symadd SymTransform((2, 0, 0));
        symadd x;
    };
}
You can then use a handle to get the actual position with:
SymHandle ax = root.handle("a.x");
// AC stands for anchor coordinates, in this case same as root coordinates.
pln(#ax.posAC);
SymHandle bx = root.handle("b.x");
pln(#bx.posAC);
ax.posAC=(1, 0, 0)
bx.posAC=(2, 0, 0)
Even though handles are often more convenient to use than nodes, there are pitfalls to be aware of:
- Don't store a handle because the nodes might become invalid. In some cases, you can get away with calling "invalidate()" on the handle, but that comes with performance trade-offs.
- Handles might be slower than using nodes directly. Handle construction introduces some overhead, so constructing handles during iteration should be avoided unless necessary. Searching upwards in handles during iteration, for example calculating the anchor coordinates for all leaves, is especially slow.
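Given these pitfalls, a common pattern is to look up the handle fresh from the root whenever it's needed, rather than keeping it as a field. A sketch based on the API shown above:

// Re-create the handle on demand instead of storing it:
SymHandle ax = root.handle("a.x");
pln(#ax.posAC);

// If a handle must survive an edit, invalidate it first,
// accepting the performance trade-off mentioned above.
ax.invalidate();
pln(#ax.posAC);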
Components
SymComponent(s) are used to assign attributes to nodes.
Here's how to add components to a SymNode:
SymNode root("root");
// A convenient way to define a SymReps component.
root.setComponent(reps3D(#medium));
// Append concatenates the transforms.
root.appendComponent(SymTransform((1, 0, 0)));
root.appendComponent(SymTransform((1, 1, 0)));
// Or use the append operator.
root << SymGMaterial(coloredGM(255, 0, 125));
You can see the components with the dump method:
root.dump(":c");
SymNode(root) symReps 3D(medium) symTransform (2, 1, 0) symGMaterial colorF(255, 0, 125)
Note that components are unique to each node and cannot be shared. If you want to reuse a component, you need to share nodes instead.
Components have their own id which you can use, but it's recommended to use access functions instead:
pln(#root.symReps); pln(#root.pos);
root.symReps=SymReps(3D(medium)) root.pos=(2, 1, 0)
You can also use the sym syntax to add components:
sym SymNode root("root") { symadd reps3D(#medium); symadd SymTransform((1, 0, 0)); symadd SymTransform((1, 1, 0)); symadd SymGMaterial(coloredGM(255, 0, 125)); };
The component-based design that sym uses does not allow multiple components of the same type per node. This decision keeps the implementation simple and efficient, but it can be inconvenient. To get around the limitation, some components, like SymText2D or SymLines2D, support multiple geometry instances. If that's still insufficient, you need to create more nodes.
Transforms
SymTransform is used to position, scale and rotate nodes. Transforms of parent nodes affect their children, for example:
SymNode root("root"); root << SymTransform((1, 0, 0)); SymNode a("a"); a << SymTransform((1, 0, 0), (90deg, 0deg, 0deg)); a << SymBox(); root << a;
Or with the sym syntax:
sym SymNode root("root") { symadd SymTransform((1, 0, 0)); symadd SymNode("a") { symadd SymTransform((1, 0, 0), (90 deg, 0 deg, 0 deg)); symadd SymBox(); }; };
The box in "a" is placed at position (2, 0, 0) with a yaw of 90°.
Coordinate Systems
If you want the absolute transform of your geometry it's easiest to work with handles:
SymHandle a = root.handle("a"); pln(a.transformAC); // AC is short for "Anchor Coordinates"
Transform(pos=(2, 0, 0), rot=(yaw=90°, pitch=0°, roll=0°), scale=1)
Anchor coordinates apply all transforms from the handle anchor down to the node the handle points to (sometimes referred to as root coordinates when the anchor is the root, which is the common case). Keep in mind that calculating these coordinates may be expensive, because we need to traverse the handle and append the transforms of all nodes.
When working with transforms it's important to remember that the transforms are applied from the bottom up, not top-down. This is relevant when calculating pivot points during rotation and scaling. See this example:
SymNode root("root"); root << SymTransform((0, 0, 2)); SymNode a("a"); a << SymTransform((0deg, 45deg, 0deg)); SymNode b("b"); b << SymTransform((1, 0, 0)); b << SymBox(); a << b; root << a;
Or with the sym syntax:
sym SymNode root("root") { symadd SymTransform((0, 0, 2)); symadd SymNode("a") { symadd SymTransform((0 deg, 45 deg, 0 deg)); symadd SymNode("b") { symadd SymTransform((1, 0, 0)); symadd SymBox(); }; }; };
The box is first moved 1 unit along x, then rotated with a 45-degree pitch, and finally moved 2 units along z. During the rotation, the box rotates around (1, 0, 0) instead of (0, 0, 2), which would be the case if the transforms were applied top-down.
Local coordinates are the coordinate system of a node before its transform has been applied. They are used when transforming between the coordinate systems of two nodes (for example, when you want to position geometry relative to another node):
sym SymNode root("root") { symadd SymNode("a", (1, 0, 0)); symadd SymNode("b", (0, 2, 0)); }; SymHandle a = root.handle("a"); SymHandle b = root.handle("b"); // LC is short for "Local Coordinates" pln("LCToLC ", a.transformLCToLC(b).pos);
LCToLC (1, -2, 0)
In the local coordinates of the "a" node no transform has been applied, so the position starts at (0, 0, 0). We then apply the "a" transform (1, 0, 0) and move up towards "root". From there we go down towards "b", applying the inverse of (0, 2, 0), which is (0, -2, 0), and end up with the resulting transform (1, -2, 0).
Pivots
An important difference between SymTransform and Transform is that you can also modify the pivot of the transform. For example rotation without a pivot:
SymTransform x(90 deg); pln((1, 0, 0).transformed(x.transform));
(~0, 1, 0)
Rotation with a pivot:
SymTransform x(90 deg, pivot=(1, 0, 0)); pln((1, 0, 0).transformed(x.transform));
(1, 0, 0)
During rotation, the pivot defines which point the rotation should rotate around. And during scaling, it defines the point that should remain at the same position.
Note that changing the pivot will change the position of the transform, but not the resulting transformation. So for example if you've positioned everything where you want it and realize that you want to rotate or scale around a different pivot, you don't have to reposition all graphics, just set the pivot. For example:
SymTransform x(90 deg); x.setPivot((1, 0, 0)); pln((1, 0, 0).transformed(x.transform));
(~0, 1, 0)
Note that initializing the pivot in the constructor is not the same as changing it afterwards.
Bounds
By default, sym calculates and caches all bounds on graphics. It's still recommended to override localBound() on Snapper to avoid the initial calculation cost. It's also possible to manually override the bounds on a SymNode:
public void setLocalBound(SymNode this, box bound) { ... } public void setBound(SymNode this, box bound) { ... }
Or if using a parametric sym you can create a program to update localBound on the root:
sym.addProg(symprog(double width, double depth, double height) { setLocalBound(width, depth, height); });
If you don't want a node to contribute to bounds calculation you can use a gfx setting to turn it off:
node.putGfx(symIgnoreBound, true);
Geometry
There are different types of geometry components. They can be rendered but it's not required. See Rendering and Appearance and Visibility for how to customize them.
Meshes
ATriMeshF defines the meshes in CmSym. It is a size-optimized, float-based triangle mesh. Meshes are generally used in 3D, but they can be shown in 2D as well.
The SymMesh component holds a raw ATriMeshF:
ATriMeshF mesh(); mesh << boxMesh(box((0, 0, 0), (1, 1, 1))); mesh << boxMesh(box((1, 0, 0), (1.5, 0.2, 0.2))); SymNode node(); node << SymMesh(mesh); pln(node.mesh);
ATriMeshF(triangles=24, vertices=48, normals)
Other components may generate a mesh parametrically:
SymNode node(); node << SymCylinder(1, 0.5); pln(node.mesh);
ATriMeshF(triangles=92, vertices=96, normals, verticalTextureFlip=true)
Shapes
AShape2D defines the shapes in CmSym. It's a general description of shapes. They are generally used in 2D but they can be shown in 3D as well.
The SymShape component holds a raw AShape2D:
APolyline2D shape([point2D: (0, 0), (3, 0), (3, 3), (0, 3)], close=true); SymNode node(); node << SymShape(shape); pln(node.shape);
APolyline2D((0, 0), (3, 0), (3, 3), (0, 3), (0, 0), closed)
Other components may generate a shape parametrically:
SymNode node(); node << SymRect(3); pln(node.shape);
APolyline2D((0, 0), (3, 0), (3, 3), (0, 3), (0, 0), closed)
Points
Points are stored in the SymPoints component, even if you only want to store a single point.
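As a sketch, adding points could look like the following. Note that the SymPoints constructor shown here is an assumption; check the actual class for its signature.

```
SymNode node();
// Assumed constructor taking a point sequence; verify against SymPoints.
node << SymPoints([point: (0, 0, 0), (1, 0, 0), (1, 1, 0)]);
```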
Planes
SymPlane defines a plane in CmSym. It is however never rendered.
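Attaching one could look like this sketch. The constructor arguments are assumptions; see SymPlane for the actual interface.

```
SymNode node();
// Hypothetical arguments: a point on the plane and its normal.
node << SymPlane((0, 0, 0), (0, 0, 1));
```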
Lines
Lines are stored pre-transformed in a compact format with vertex and index sequences:
public class SymLines2D extends SymPrimitive2D { /** * Vertices. */ private point2D[] vertices : public readable; /** * Lines (as integer/index pairs) * A single line is defined by two integers/indices referring to * two vertices from the 'vertices' sequence. */ private int[] lines : public readable; }
public class SymLines3D extends SymPrimitive3D { /** * Vertices. */ private point[] vertices : public readable; /** * Lines (as integer/index pairs) * A single line is defined by two integers/indices referring to * two vertices from the 'vertices' sequence. */ private int[] lines : public readable; }
This is similar to how ATriMeshF stores its data and is a very efficient way to store a large number of lines. For performance reasons, prefer letting SymLines2D and SymLines3D hold as much data as possible, rather than using SymShape or creating more nodes.
Text
Text can either be meshed (the outline of the text is traced and extruded into a mesh) or rendered as a textured shape.
For example, this is the meshed text:
SymNode node(); node << reps3D(#medium); node << SymText3D("Text", d=0.1, h=0.4); node << SymGMaterial(plainColorGMaterial3D(240, 0, 120));
And this is text with a textured shape (displayed in 3D):
SymNode node(); node << reps3D(#medium); node << SymText2D("Text");
Note that SymText2D is displayed lying down by default as it's most commonly used in 2D.
Primitives
The primitives define common parametric geometry. They will either generate a mesh or a shape.
3D primitives (generates meshes):
- SymBox
A box defined by width (x-dir), depth (y-dir) and height (z-dir). The box has its lower-left corner at the origin and its upper-right corner at (width, depth, height).
- SymCylinder
A cylinder defined by a radius (x- and y-dir) and a length (z-dir); it may be closed or open-ended. The cylinder stands upright with the center of one end at the origin and the center of the other end at (0, 0, length).
- SymCone
A cone defined by two radii (x- and y-dir) and a height (z-dir); it may be closed or open-ended. The cone stands upright with the center of the r1 end at the origin and the center of the r2 end at (0, 0, length).
- SymSphere
A sphere defined by a radius. The sphere has its center at the origin.
2D primitives (generates shapes):
- SymRect
A rectangle defined by width (x-dir) and depth (y-dir). The rectangle has its lower-left corner at the origin and its upper-right corner at (w, d, 0).
- SymCircle
A circle defined by a radius (x- and y-dir), with the center of the circle placed at the origin.
- SymEllipse
An ellipse defined by two radii (x- and y-dir), with the center of the ellipse placed at the origin.
The flat primitives should be preferred in 2D, but they work in 3D as well. You can control double-sidedness by setting the "symRenderMeshAsDoubleSided" property on SymGfx.
The meshes for curved primitives are generated with different refinement depending on the LOD for speed reasons. For example:
SymNode node(); node << reps3D(#medium, #high); node << SymSphere(1); pln(node.mesh(rep3D(#medium))); pln(node.mesh(rep3D(#high)));
ATriMeshF(triangles=676, vertices=340, normals, uv=340) ATriMeshF(triangles=1560, vertices=782, normals, uv=782)
Notice how the medium mesh contains fewer triangles than the high mesh. The generated meshes are cached on the sym for speed reasons.
Rendering and Appearance
These components control how the sym is rendered. The properties are moved out from all individual components and placed on specialized components on the node.
Material
SymGMaterial holds a GMaterial3D which will affect all nodes below unless overridden. The material will be applied to all meshes. Unless specified via a gfx property, the average color will be used as the shape fill color.
For example to set a nice yellow material on a node:
node << SymGMaterial(plainColorGMaterial3D(255, 255, 0));
If you want to control material per visibility you can create separate nodes with different visibility, but you can also use the SymGMaterialSelector class (similar to MaterialSelector3D). For example:
node << SymGMaterialSelector([symOptionGM(color(0, 170, 100), layer(#architectural)), symOptionGM(color(255, 255, 0))]);
This provides a green material in the architectural view and a yellow material otherwise. (Make sure the snapper contains the #architectural category for this to be visible.)
UV Mapping
SymUVMapper defines the texture-mapping technique intended for the meshes on the node. It is not propagated and only affects the node holding the component. If not set, each component decides which technique to use by default.
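A hedged sketch of assigning a mapper follows; the mapper construction here is a placeholder, so refer to SymUVMapper and its subclasses for the real alternatives.

```
SymNode node();
node << SymBox(1, 1, 1);
// Hypothetical mapper instance; the actual construction may differ.
node.setComponent(SymUVMapper());
```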
Gfx Properties
By setting properties on SymGfx it's possible to control the appearance of the sym. Properties will be propagated down and affect everything below unless overridden. For example:
SymNode node(); node.putGfx(symShapeFillColor, color(255, 123, 123));
The complete list is documented above the definition of the SymGfx class. Default values can be found in the function "defaultSymGfx" in the same file.
You can also add properties inside the sym syntax:
sym SymNode node() { symadd gfx(symShapeFillColor, color(255, 123, 123)); };
To dump all properties you can use:
node.dump(":props");
SymNode(0364edbe-78a0-4c5b-88b1-11d1b339069e) symGfx 1 props shapeFillColor color(255, 123, 123)
Picking
When selecting objects in the 2D/3D view, the mouse position is translated to a ray that we follow to find the intersection between the ray and objects in the scene. This is referred to as picking. By default, all geometry (SymMesh, SymShape, etc.) found under a SymNode supports picking, but sometimes it may be useful to disable picking for specific surfaces. For instance, you might want to be able to select the interior of an object by making the exterior transparent.
Here is a simple example:
SymNode node("unpickable"); node << gfx(symEnablePicksurface, false);
Note: As with all other gfx properties this property is inherited but may be overridden by descendants.
If you want information about which nodes you pick you can add the SymPickInfo component:
SymNode node(); node << SymPickInfo("button");
Which you can access for example in the clickAnimation method:
public Animation clickAnimation(StartClickAnimationEnv env) { if (SymPickInfo info = env.pickSymInfo()) { pln(info.key); } }
SymPickInfo will be valid both for the node the component lives on and for nodes below. If you want to pick against the actual node you can instead use the SymPickSurface class in clickAnimation:
public Animation clickAnimation(StartClickAnimationEnv env) { if (SymPickSurface pick = env.pickSym()) { pln(pick.node); } }
If you only want the actual node you pick on, you don't need to add a SymPickInfo, but in general it's good practice to use SymPickInfo for your pick implementation.
Visibility
Each node has visibility controls specifying when it is visible. There are several kinds of controls: 2D and 3D visibility, LOD visibility, category visibility, and rendering can also be disabled completely.
Reps and LODs
SymReps (reps means representation) controls 2D and 3D visibility as well as LOD visibility.
For example, a node that is only visible in 3D during render (the super LOD):
SymNode node(); node << SymReps(symGfxMode.x3D, detailLevel.super);
Because it's quite verbose to use symGfxMode and detailLevel directly there are shorthand forms for creating SymReps. For example:
reps3D(#super); reps2D(#super); reps3D(#high, #medium, #low);
Several different LODs can exist on one node, and the node can be visible in 2D and 3D at the same time:
SymReps(reps3D(#super), reps2D(#super)); reps(#super); // Same as above // Different LODs for 3D and 2D SymReps(reps3D(#super, #high), reps2D(#super));
CmSym supports five LOD-levels for both 2D and 3D (the other ones found in the detailLevel enum should not be used):
- low
- medium
- high
- super
- base
The base LOD-level is generally not used outside of Model Lab and it refers to the original, non-reduced, model.
Note that while CmSym supports different LOD-levels for 2D, CET always uses the super 2D-LOD. If a model doesn't contain a specific LOD then the closest matching one will be chosen. Therefore it's highly recommended to only specify the LODs you need. If you only need a single LOD-level for the whole sym, consider using medium for 3D and super for 2D.
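Following that recommendation, a single-LOD sym visible in both 2D and 3D could be set up with the shorthands shown earlier:

```
SymNode node();
// medium for 3D and super for 2D, as recommended above.
node << SymReps(reps3D(#medium), reps2D(#super));
```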
SymReps only affects the node the component lives on. For example:
sym SymNode root() { symadd SymNode("mid", reps3D(#low)) { symadd SymNode("leaf", reps3D(#medium)); }; };
While it's a strange way to model, the low LOD will only contain geometry from "mid" and the medium LOD will only contain geometry from "leaf".
Look at Model Lab for more information about creating LODs from existing models.
Category Visibility
SymVisibility controls the visibility of the node. By setting a LayerExpr on a node it's possible to control when a part of the sym is visible. For example, two nodes should be visible only in #normal or #architectural:
sym SymNode root() { symadd SymNode("normal") { symadd SymVisibility(layer(#normal)); }; symadd SymNode("architectural") { symadd SymVisibility(layer(#architectural)); }; }
By default all nodes inherit the visibility of the parents so if a node is invisible so will all the children. To override this set "includeParent" to false:
sym SymNode root() { symadd SymNode("normal") { symadd SymVisibility(layer(#normal)); symadd SymNode("architectural") { symadd SymVisibility(layer(#architectural), includeParent=false); }; }; }
In both examples, the node "normal" will only be visible in the normal view mode and the node "architectural" in the architectural view mode.
If used in a Snapper remember to enable any custom categories as well:
/** * Add categories. */ public void addSystemCategoriesTo(symbol{} categories) { super(..); categories << #architectural; }
Disable Render
SymGfx has a setting that can disable the render of a node, and it's subgraph, completely:
SymNode node(); node.putGfx(symRenderDisabled, true);
It serves as a complement to SymVisibility allowing you to turn on and off a subgraph that contains categories.
Connectors
SymConnector means that the node and its descendants represent a connector. Keep in mind that this is not the same as a snapper connector; SymConnector is mainly a data carrier that can be used to produce "real connectors".
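As a sketch, marking a subgraph as a connector could look like the following. The constructor usage is an assumption; see SymConnector for the actual interface.

```
SymNode node("connector");
// Assumed no-argument constructor; the real component may carry more data.
node << SymConnector();
```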
Grouping
There are currently two ways to group nodes.
LODGroups
When a node has a SymLODGroup component, its descendants represent the same object, for instance an armrest of a chair. CmSym does not require LODs representing the same object to be arranged under a SymLODGroup, but it is considered best practice. It makes it possible to perform changes, such as setting a material, in a simple manner without the need for programs/parameters.
For example in this case:
sym SymNode() { symadd SymLODGroup(); symadd SymNode("super") { symadd reps3D(#super); }; symadd SymNode("high2") { symadd reps3D(#high); }; symadd SymNode("high3") { symadd reps3D(#high); }; };
Because the nodes are grouped by a LOD group, we know that changing the material of node "super" means we should also change the material of nodes "high2" and "high3". Otherwise, we might not know that the nodes in fact represent the same object.
If the sym is generated by code this problem is easy to avoid, but LOD groups are useful when generic tools should manipulate the sym. They're used extensively in Model Lab for example.
Tags
SymTags provides a way to define groups of nodes using a LayerSet. For example:
SymTags(layerSet(#special, #incredible)); SymTags(layerSet(#a, #b, #c), main=#b);
Props
Props on syms have native support for PropObj with the SymProps component.
You can add prop defs that control how data is copied and streamed. Remember not to store any snapper-instance-specific data on the sym, since it can and will be shared across snappers!
root."a" = 2; root.put("b", 7, {#copy_null}); root.put(PropDef("c", {#copy_reference}));
By default, streaming is only supported for some basic types. To add your own to/from conversions register functions with:
registerSymPropToDex(MyClass, function toDex); registerSymPropFromDex("MyClass", function fromDex);
Where "MyClass" is the dex type you're returning. It is recommended to use the class name as the type:
private DexObj toDex(Object v, CmSymExportEnv env) { DexObj dex(v.class.name); ... return dex; }
See the SymProps implementation for more details.
Files and External References
Syms can be loaded from files or from streams, but cannot be streamed as fields.
Use loadSym to load from a .cmsym file or from a stream:
Url url = cmFindUrl("custom/modelLab/models/default.cmsym"); SymNode sym = loadSym(url);
By default the result is cached and loaded lazily. Lazy loading means that nodes and heavy geometry, such as meshes, won't be loaded until they're requested. For example:
Url url = cmFindUrl("custom/modelLab/models/default.cmsym"); SymNode sym = loadSym(url); sym.node("1.2"); // Loads the node "1" and its child "2". sym.dump("+symMesh :lazy"); // Without ":lazy" dump will load everything.
SymNode(0), cached SymNode(1) SymNode(2) symMesh not loaded children 4 children 1
Here we can see that the root node "0" has one loaded child and one which hasn't been loaded yet. The node "2" has a SymMesh, but the actual mesh hasn't been requested yet.
It's possible to change the behavior if needed:
loadSym(url, lazy=false, cache=false)
External References
A sym can contain external references to other sym files which can be automatically loaded on demand. For example:
sym SymNode leg("leg") { symadd SymNXRef(cmFindUrl("custom/developer/examples/cmsym/training/leg.cmsym")); };
Here "leg" will contain a child node taken from the file. It's forbidden for "leg" to contain any other children when using an xref component.
With SymNXRef it's possible to combine multiple sym files into a single sym graph. It's also possible to replace parts of a sym with other parts, for example exchanging an armrest for another type of armrest. See SymChair in custom.developer.examples.cmsym as an example.
Caching
It's a very good idea to use caching when creating syms. Note that a cache key, for both static and parametric caches, must always describe everything within the cache block. A change to a parameter that isn't part of the cache key will not be seen when the cached sym is reused.
Static Caching
Static caching is the same as the cache3D syntax. Inside a class you can use:
symStaticCache(cw, cd, "other stuff") { ... } symStaticCache(w=1.0, d=2.0) { ... } symStaticCache(props{w=1.0, d=2.0}) { ... }
This namespaces the cache within the class. Outside a class you can use:
symNoOwnerCache(cw, cd, "my very unique key") { ... }
But please make sure to namespace your key properly if you use the no-owner cache.
There is also a simpler unique cache for when the cache is only accessed from a single location, for example when wrapping small graphics-generating functions:
symUniqueCache { ... }
This cache is reset during reload.
If you want to edit a statically cached sym you must always break it loose from the cache. This happens automatically during beginEdit or symEdit. There is no automatic way to recache the sym again after editing, but you can do it manually:
node.staticRecache("new static cache key");
A cache might be used like this in a Snapper's buildSym():
public SymNode buildSym() { return symStaticCache("d", 2) { SymNode node(); node << reps3D(#medium); node << SymGMaterial(plainColorGMaterial3D(255, 255, 0)); node << SymBox(2, 2, 2); result node; }; }
Parametric Caching
If you want to edit the sym inside a cache then parameters should be used:
symCache("w", "d", "h") { ... }
Where the values are given by extending:
public str->Object symParams() { ... }
If you want to cache on all parameters you can leave it empty, and it will automatically cache on all parameters:
symCache() { ... }
Here's how to use it in a Snapper:
private class ExSnapper extends Snapper { private double w = 2; private double d = 1.5; private double h = 1; public str->Object symParams() { return props{w=w, d=d, h=h}; } public SymNode buildSym() { return symCache() { // same as symCache("w", "d", "h") SymNode node(); node << reps3D(#medium); node << SymGMaterial(plainColorGMaterial3D(255, 255, 0)); node << SymBox(w, d, h); node << symprog(double w, double d, double h) { set(symBox(), w, d, h); }; result node; }; } }
User Materials
To support user materials you need to explicitly include a user materials key in your static cache routine:
symStaticCache("someKeyHere", symUserMaterialsCacheKey()) { ... }
Parametric caching handles user materials implicitly:
symCache() { ... }
And to support user materials you should override:
public bool supportsMaterialChange() { return true; }
Updating Graphics
Building and Editing a Sym
The sym system is built around the idea of creating the sym once and then editing only the parts you want, instead of throwing away all the graphics just because something small changed. This makes the sym API seem more complex, and it is, but the complexity is what enables telling the system to update only the parts that changed.
There are of course trade-offs to this approach. If you want to replace the whole graphics with something that already exists somewhere else, the "invalidate-rebuild" approach might fit you better, and you can circumvent the edit system via "forceSymRebuild". Please note that this is generally not a good idea, although it might be for some use cases.
Changing Parameters
If all you want to do is update parameters of a sym, it's quite simple:
sym.editParams("w", 2, "d", 3);
This takes the sym prepared for edit (or replaces the sym with a cached result if possible) and updates the graphics. If you don't want to cache the node (for example during stretch) you can use:
sym.editParamsNoCache("w", 2, "d", 3);
But it's also possible to change parameters explicitly:
sym.beginEdit(); sym.setParam("w", 2); sym.setParam("d", 3); sym.endEdit();
Note that you'll have to be careful to break loose cached child nodes if the programs will change them.
Preparing a Sym for Edit
If you're using a static cache and you want to manually edit your sym, you need to break it loose from the cache. You also need to notify the backend so it can reflect graphics edits, but more on that later.
All edits should be wrapped inside beginEdit/endEdit like so:
sym.beginEdit(); ... sym.endEdit();
Or in symEdit/symUndoableEdit:
symEdit(sym) { ... }
But note that we only break loose the root node here. If you only use caching on the root, then the above is fine.
But if you want to edit a child node that is cached then you must explicitly target that node to be broken loose:
sym.beginEdit("a"); sym.child("a").xsetMaterial(..); sym.endEdit();
or
SymNode a = sym.beginEdit("a"); a.xsetMaterial(..); sym.endEdit();
or
SymHandle a = sym.beginEditHandle("a"); a.xsetMaterial(..); a.endEdit(); // or sym.endEdit()
Note that beginEdit on a node makes sure all nodes on the path are also broken loose. Furthermore, a non-cached child will always be copied if a parent is cached.
If you want you can specify several paths or force everything to be broken loose:
sym.beginEdit("a", "b.c.d"); sym.beginEdit(editBelow=true); // Breaks loose everything from the cache
The symEdit syntax behaves the same:
symEdit(sym, "a", "b.c.d") { ... } symEdit(sym, editBelow=true) { ... }
symEdit on a handle only prepares that node for edit:
SymHandle h = sym.handle("a"); symEdit(h) { h.xsetMaterial(..); }
For more examples see SymCachedEditSnapper.cm in cm.core.cmsym.test.
Reflecting Graphics Changes
When manually editing a sym there are two separate systems to be aware of:
- The sym structure itself (nodes and components)
- The backend graphics (comprised of REDShapes)
While you usually don't interact with REDShapes directly, you do need to notify the backend so it knows what to update. Functions that update the graphics are prefixed with an 'x' and should be wrapped in beginEdit()/endEdit() or the symEdit syntax (which simply wraps a block with beginEdit/endEdit).
For example, if you want to change a mesh via a handle you can do:
symEdit(myHandle) { myHandle.xsetMesh(newMesh); }
or
SymHandle h = sym.beginEditHandle("path.to.node"); h.xsetMaterial(greenGM); h.endEdit();
or
SymGraph sym = sym(); sym.beginEdit("path.to.node"); sym.handle("path.to.node").xsetMesh(newMesh); sym.endEdit();
Please be aware that any handles created before beginEdit may become invalid if the sym is shared between snappers (which it may become implicitly after copy for example). A possible workaround is to call invalidate() on the handle before use, but please note that it might be slow.
You can also make changes to nodes directly:
SymNode node = sym.beginEdit("path.to.node"); node.xsetMesh(newMesh); sym.endEdit();
If you're missing an "x" function to do what you want, you can still explicitly request graphics updates via symRt. For example:
sym.beginEdit(); // Initiates symRt GMaterial3D gm = sym.symGMaterial.material; // ... Change GMaterial3D in some way // Explicitly update the graphics after material has changed. symRt.updateMaterial(sym); sym.endEdit(); // Flushes symRt
For more info see cm/format/cmsym/edit.cm and refer to the implementation of mentioned functions.
Parameters and Programs
Sym allows you to define parameters and have programs reacting to changes of those parameters. They are stored on nodes and will be included in the files you save, making the files contain parametric graphics.
For example:
sym SymNode root("root") { symadd SymGMaterial(plainColorGMaterial3D(255, 255, 0)); symadd param("w", 2); symadd SymNode("x", reps3D(#medium)) { symadd SymBox(); symadd symprog(double w) { set(symBox(), w, w, w); }; }; }; root.dump(":code");
SymNode(root) w = 2 SymNode(x) RProg(RSymNode this, rdouble w) { insns=14})
Changes to the parameter "w" will run the program on the child node, changing the size of the box. Note that the symprog parameter name needs to match the sym parameter name for the symprog to understand that it should react when the sym parameter changes.
The "symprog" directive generates a Z-script program that checks for missing functions and types at compile time. Available functions and types can be found in SymRt.cm in cm.format.cmsym.z.
Here's a full example of a Snapper using parameters and programs:
private class ExSnapper extends Snapper { private double w = 2; public str->Object symParams() { return props{w=w}; } public SymNode buildSym() { return symCache() { result sym SymNode("root") { symadd SymGMaterial(plainColorGMaterial3D(255, 255, 0)); symadd param("w", 2); symadd SymNode("x", reps3D(#medium)) { symadd SymBox(); symadd symprog(double w) { set(symBox(), w, w, w); }; }; }; }; } final public void change() { if (w == 2) { w = 1; } else { w = 2; } sym.editParam("w", w); } } { ?ExSnapper z = anySnapper(ExSnapper); if (z) { z.change(); } else { placeSnapper(ExSnapper()); } }
Iteration
You often want to iterate over the sym graph and visit all, or some, of the nodes. There are efficient and convenient ways of doing this.
Visit Nodes
If you want to visit all nodes in a sym graph, a simple for loop works as you might expect:
sym SymNode root("root") { symadd SymNode("a") { symadd SymNode("b"); }; symadd SymNode("x") { symadd SymNode("y"); }; } for (x in root) pln(x);
SymNode(root) SymNode(a) SymNode(b) SymNode(x) SymNode(y)
Which visits the nodes in a depth-first manner.
Note that continue skips the whole subgraph, not just the node being visited. So this might not do what you expect:
for (x in root) { if (x.id == "a") continue; pln(x); }
SymNode(root) SymNode(x) SymNode(y)
Continue skipped both the node "a" and its child "b".
Iteration accepts filters to control which nodes to visit. For example:
sym SymNode root("root") { symadd SymNode("a") { symadd SymNode("b") { symadd reps3D(#high); }; }; symadd SymNode("x") { symadd SymNode("y") { symadd reps3D(#medium); }; }; } for (x in root, filter=SymNodeRepFilter(rep3D(#medium))) { pln(x); }
SymNode(y)
Because information about reps is propagated upwards, the iteration is efficient and aborts as soon as possible. In this case, when checking "a" it notices there are no medium reps there or below and it can abort.
Iteration exists for handles as well:
for (x in root.iter, filter=SymNodeRepFilter(rep3D(#medium))) { pln(x); }
SymHandle(root.x.y)
But please be aware that they're less efficient than iterating over pure nodes.
Track Data
You often need to traverse the whole sym graph and calculate data on the fly. For example, during export you want to know the combined transform from the root to a node. It's possible to do this using handles, but recalculating the transform from the root for every node is very inefficient.
Instead, there's an iteration env that does this for you:
for (nodeEnv in SymNodeItEnv(root)) { ... }

for (handleEnv in SymHandleItEnv(root)) { ... }
It combines the transform and gfx props and keeps track of the latest material, visibility, tags, and other data efficiently. The distinction between node and handle iteration exists because handle iteration is slightly slower. If you want to combine and store other information, simply subclass the env you want.
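As a rough picture of why carrying this data along beats recomputing it per node, here is a small Python sketch (not CM, with simple 2D offsets standing in for transforms): the combined position and the nearest material above are both updated once per node during a single depth-first pass.

```python
# Python sketch (not CM): compose local offsets and inherit the latest
# material on the way down, in a single pass over the tree.
class Node:
    def __init__(self, id, offset=(0, 0), material=None, children=()):
        self.id = id
        self.offset = offset        # this node's local transform (a 2D shift)
        self.material = material    # material set on this node, if any
        self.children = list(children)

def walk(node, pos=(0, 0), material=None):
    # Yield (node, combined position, effective material) for every node.
    pos = (pos[0] + node.offset[0], pos[1] + node.offset[1])
    material = node.material or material    # nearest material above wins
    yield node, pos, material
    for child in node.children:
        yield from walk(child, pos, material)

root = Node("root", (1, 0), "orange", [
    Node("a", (0, 2), None, [Node("b", (3, 0))]),
])

results = [(n.id, pos, mat) for n, pos, mat in walk(root)]
print(results)
# [('root', (1, 0), 'orange'), ('a', (1, 2), 'orange'), ('b', (4, 2), 'orange')]
```

Recomputing the root-to-node state per node would cost work proportional to each node's depth; the single-pass approach does it once per edge.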
With the iteration env it is, for example, straightforward to implement conversions from sym to another data structure using this skeleton:
sym SymNode root("root") {
    symadd SymGMaterial(plainColorGMaterial3D(255, 120, 0));
    symadd SymNode("a") {
        symadd SymBox();
    };
};

for (env in SymNodeItEnv(root)) {
    pln(env.node);
    if (ATriMeshF mesh = env.mesh) {
        pln(" ", env.material);
    }
}
SymNode(root)
SymNode(a)
  GMaterial3D(106, Diffuse3D, Ambient3D)
Instead of having to search upwards for the material from node "a", the env keeps track of it during the iteration.
See the CmSym to Graph generation in cm.core.cmsym (symToGraph.cm) for a real-life example.
Filtering
If you want to collect nodes from a sym graph you can use a filter:
sym SymNode root("root") {
    symadd SymNode("a", reps3D(#high));
    symadd SymNode("b", reps3D(#high));
    symadd SymNode("c", reps3D(#medium));
}

SymNode[] nodes = root.filter(SymNodeRepFilter(rep3D(#high)));
pln(nodes);
[SymNode, count=2: SymNode(b), SymNode(a)]
Or if you want to collect handles:
SymHandle[] handles = root.filterHandles(SymNodeRepFilter(rep3D(#high)));
pln(handles);
[SymHandle, count=2: SymHandle(root.b), SymHandle(root.a)]
Keep in mind that this will create a collection; if you only want to visit the nodes, it's more efficient to iterate over them directly.
There are different kinds of filters you can use; refer to the base class SymNodeFilter in cm.format.cmsym.filter.
Dump
It can be useful to look at how the sym graph is structured. You can do this with the dump() method:
sym SymNode root("root") {
    symadd prop("customProp", 2);
    symadd SymTags({#a, #b, #c}, main=#a);
    symadd SymNode("3D") {
        symadd reps3D(#medium);
        symadd SymGMaterial(coloredGM(255, 0, 0));
        symadd SymBox();
        symadd gfx(symRenderMeshAsWireframe, true);
    };
    symadd SymNode("2D") {
        symadd reps2D(#super);
        symadd SymRect();
        symadd gfx(symShapeFillColor, color(255, 0, 0));
    };
};

root.dump();
SymNode(root)
  SymNode(2D)
  SymNode(3D)
The method accepts an optional string to customize its output. A "+" prefix specifies which components to dump, a "-" prefix which components to ignore, and ":" specifies general settings. These settings can be combined as you please. You can get a help text for the options using:
dumpSymDumpOptions();
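The option-string format itself is simple enough to sketch. The following Python snippet (illustration only, not how dump actually parses its argument) splits a spec into the three prefixed groups plus plain node id patterns:

```python
def parse_dump_options(spec):
    # Illustration only: "+" selects components, "-" ignores components,
    # ":" carries general settings; anything else is a node id pattern.
    include, exclude, settings, patterns = [], [], [], []
    for token in spec.split():
        if token.startswith("+"):
            include.append(token[1:])
        elif token.startswith("-"):
            exclude.append(token[1:])
        elif token.startswith(":"):
            settings.append(token[1:])
        else:
            patterns.append(token)
    return include, exclude, settings, patterns

print(parse_dump_options(":c -symProps -symReps"))
# ([], ['symProps', 'symReps'], ['c'], [])
print(parse_dump_options("+symTags 3D"))
# (['symTags'], [], [], ['3D'])
```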
For example, if you only want to dump the nodes needed to reach "3D":
root.dump("3D");
SymNode(root)
  SymNode(3D)
To dump all components:
root.dump(":c");
SymNode(root)
  symReps 3D(medium) 2D(super)
  symTags #a, #b, #c
  symProps 1 props
  SymNode(2D)
    symReps 2D(super)
    symProps 1 props
    symGfx 1 props
    symRect SymRect(rect(p0=(0, 0), p1=(1, 1)))
  SymNode(3D)
    symReps 3D(medium)
    symProps 1 props
    symGfx 1 props
    symGMaterial colorF(255, 0, 0)
    symBox SymBox(box(p0=(0, 0, 0), w=1, d=1, h=1))
To ignore some components:
root.dump(":c -symProps -symReps");
SymNode(root)
  symTags #a, #b, #c
  SymNode(2D)
    symGfx 1 props
    symRect SymRect(rect(p0=(0, 0), p1=(1, 1)))
  SymNode(3D)
    symGfx 1 props
    symGMaterial colorF(255, 0, 0)
    symBox SymBox(box(p0=(0, 0, 0), w=1, d=1, h=1))
To ignore children:
root.dump(":c -children");
SymNode(root)
  symReps 3D(medium) 2D(super)
  symTags #a, #b, #c
  symProps 1 props
The props (on SymGfx and SymProps) can be dumped; note that there might be some props used internally as well:
root.dump(":props");
SymNode(root)
  symProps 1 props
    customProp 2
  SymNode(2D)
    symProps 1 props
      _features {"shape"->"symRect"} Object {#stream_null}
    symGfx 1 props
      shapeFillColor color(255, 0, 0)
  SymNode(3D)
    symProps 1 props
      _features {"solid"->"symBox", "mesh"->"symBox"} Object {#stream_null}
    symGfx 1 props
      renderMeshAsWireFrame true
To dump only specific properties you're interested in:
root.dump(":props(customProp shapeFillColor)");
SymNode(root)
  symProps 1 props
    customProp 2
  SymNode(2D)
    symGfx 1 props
      shapeFillColor color(255, 0, 0)
  SymNode(3D)
There are a couple of other special commands dump can take:
root.dump(":bound");
root.dump(":shape");
root.dump(":mesh");
SymNode(root) bound box(p0=(0, 0, 0), w=1, d=1, h=1)
  SymNode(2D) bound box(p0=(0, 0, 0), w=1, d=1, h=0)
  SymNode(3D) bound box(p0=(0, 0, 0), w=1, d=1, h=1)

SymNode(root)
  SymNode(2D)
    symRect SymRect(rect(p0=(0, 0), p1=(1, 1)))
  SymNode(3D)

SymNode(root)
  SymNode(2D)
  SymNode(3D)
    symBox SymBox(box(p0=(0, 0, 0), w=1, d=1, h=1))
If the output is too verbose you can specify only the components you're interested in:
root.dump("+symTags");
SymNode(root)
  symTags #a, #b, #c
  SymNode(2D)
  SymNode(3D)
Some components may include color-coded information. For example, SymReps codes propagated reps (from the children) as gray while the reps on the node are colored black:
root.dump("+symReps");
SymNode(root)
  symReps 3D(medium) 2D(super)
  SymNode(2D)
    symReps 2D(super)
  SymNode(3D)
    symReps 3D(medium)
Sym nodes are loaded on demand. Dump normally force-loads everything, but you can tell it not to:
SymNode sym = loadSym(cmFindUrl("custom/modelLab/models/default.cmsym"));
sym.node("1.2"); // Loads the node "1" and its child "2".
sym.dump("+symMesh :lazy"); // Without ":lazy" dump will load everything.
SymNode(0), cached
  SymNode(1)
    SymNode(2)
      symMesh not loaded
      children 4
    children 1
There are more advanced dumps available for some components. For example, parameters and programs:
sym SymNode root("root") {
    symadd param("w", 2);
    symadd SymNode("a") {
        symadd param("d", 3);
        symadd symprog(double w, double d) {
            pln(#w; #d);
        };
    };
};

root.dump("+symProgs +symParams");
SymNode(root)
  symParams (1 param)
    w 2
  SymNode(a)
    symProgs (1 prog)
      w, d invalid
    symParams (1 param)
      d 3
Their internals can be dumped (only really relevant if you're debugging sym itself):
root.dump("+symProgs(_propagated) +symParams(_propagated internal)");
SymNode(root)
  symProgs (0 progs)
    "a" 1 prog
  symParams (1 param)
    w 2 deps{"a"} invalidProgs{"a"}
    "a" {d->3}
  SymNode(a)
    symProgs (1 prog)
      w, d invalid
    symParams (1 param)
      d 3 deps{this} invalidProgs{""}
The dump method also accepts a SymNodeFilter for more granular control. See SymDump.cm in cm.format.cmsym.debug or the dump methods of individual components for more info.
Debug
You can identify CmSym Snappers via the release debug dialog (Ctrl + Alt + F12).
There is a CmSym debug dialog that can be used to get a deeper understanding of the CmSym usage in a drawing. The tool is started from symDebugDialog.cm found under cm.core.cmsym.debug.
Vocabulary
Here you can find the CmSym vocabulary.
Javascript `this` 101
Igor Irianto
・5 min read
this is one of the most common JS keywords. You see them everywhere, but it can be hard to tell what
this is.
I will cover 3 scenarios where
this may be used: globally, inside a regular function, and inside arrow function. This should cover most usage.
- Global
this
thisinside a regular function
thisinside arrow function
Let's start by looking at some examples!
Btw, I will be doing this inside browser (chrome) console, not node. I will also assume strict mode is not used.
Global
this
If we just type
this in our browser console, it will refer to window/ global object.
this // Window {...} var helloVar = 'helloVar' this.helloVar // helloVar window.helloWindow = 'helloWindow' this.helloWindow // 'helloWindow' const helloConst = 'helloConst' this.helloConst // undefined let helloLet = 'helloLet' this.helloLet // undefined
You see that
let and
const can't be called through
this. They are not stored inside "object environment record", but inside "declarative environment records". Explaining this will be outside of this article's scope. Here's a link if you're interested.
this inside a regular function
Let's start by an example:
const obj = { breakfast: 'donut', wutBreakfast: function() {console.log(`I had ${this.breakfast} this morning!`)} } window.breakfast = 'waffles'; obj.wutBreakfast() // I had donut this morning!
Here we observe that
this inside
this.breakfast refers to the object itself. Look at where the function call is when calling
obj.wutBreakfast(). Ask yourself: "Is there an object to the left of my function call?" That object is where your
this refers to.
What if there is no object to the left of function call? If you are calling a function without an object to the left of function call, you can assume it is the global object. In this case, the
Window object.
Let's look at the next example:
function sayBrunch(){ console.log(`I had ${this.brunch} for brunch!`) } sayBrunch() // I had undefined for brunch
We haven't defined anything for brunch yet, so it returns undefined. Let's define it inside window object
window.brunch = 'oatmeal' function sayBrunch(){ console.log(`I had ${this.brunch} for brunch!`) } sayBrunch() // I had oatmeal for brunch!
Let's do few more examples to build your intuition:
window.dinner = 'pizza' const foodObj = { dinner: 'spaghetti', sayDinner: function(){ console.log(`I had ${this.dinner} for dinner!`) } } foodObj.sayDinner() // what does it return?
Another one, with a little twist. We defined a window appetizer string and a mealObj.appetizer string. We call sayAppetizers from two different objects. What do you think each will return?
window.appetizer = 'chocolate'; function sayAppetizer(){ console.log(`I had ${this.appetizer} for appetizer!`) } const mealObj = { appetizer: 'ice cream', sayAppetizer: sayAppetizer } mealObj.sayAppetizer() // what does it return? sayAppetizer() // what does it return?
Just remember,
this inside regular JS function refers to the object immediately to the left where the function is called. If there is no object, assume it is a window object.
With this in mind, even if we have
obj1.obj2.obj3.someFunc(), we know that
this inside
someFunc() will refer to
obj3 because it is the closest object to where function is called.
this inside arrow function
This behaves differently inside an arrow function. There are three things you need to keep in mind the whole time:
- Only regular function and global function can have
this.
- Arrow function does not have
thison its own
- When
thisis referred to inside an arrow function, it will look up the scope to find this value. It behaves like lexical scope.
Let's look at first example:
let myObj = { breakfast: 'taco', sayBreakfast: () => { console.log(`I had ${this.breakfast} for breakfast`) } } window.breakfast = 'pizza' myObj.sayBreakfast() // pizza
Let's see if this makes sense while keeping the 3 rules above in mind:
when we call myObj.sayBreakfast(), it looks up to myObj, but since myObj does not have
this (rule #2), it will look one more up, the global/ window object (rule #1). It saw that global/window has
this.breakfast = 'pizza', so it prints pizza.
Now add a regular function to object:
let myObj = { breakfast: 'taco', sayBreakfast: () => { console.log(`I had ${this.breakfast} for breakfast`) }, sayRegBreakfast: function() { console.log(`I had ${this.breakfast} and it was yummy`) } } window.breakfast = 'pizza' myObj.sayBreakfast() // pizza myObj.sayRegBreakfast() // taco
You'll see that using regular function gives "taco" and arrow gives "pizza".
Let's call an arrow function from global object scope. We should expect it to have
this from global scope. Is it true?
window.secondBreakfast = 'eggs'; const saySecondBreakfast = () => { console.log(`I had ${this.secondBreakfast} for second breakfast!`) } saySecondBreakfast() // eggs
I was in disbelief when I see this either, so let's prove it further. The example below is from getify archive:
function foo() { return function() { return function() { return function() { console.log("Id: ", this.id); } } } } foo.call( { id: 42} )()()() // undefined
vs
function foo2() { return () => { return () => { return () => { console.log("id:", this.id); }; }; }; } foo2.call( { id: 42 } )()()() // 42
(Btw, call assigns
this to function we are calling - foo/ foo2 itself - with the argument object we pass)
Remember that only arrow function looks up lexically; the first example looks for
this inside the third nested function and found nothing, so it returns undefined.
While foo2, finding no
this inside third nested function, lexically looks up for next available reg/ global function's
this. It found foo2's
this (from
foo2.call({id: 42})) first (remember rule #1), so it prints 42.
If there had been a regular function on the second example earlier, it wouldn't have found it, like:
function foo3() { return () => { return function() { // this is regular function now return () => { console.log("id:", this.id); }; }; }; } foo3.call({id:101})()()() // undefined
But if we gave
this to where the
return function() {...}) is, it would have found it. Because when arrow function lexically looks up and found the first regular function, that function is given
this value of 101.
function foo3() { return () => { return function() { return () => { console.log("id:", this.id); }; }; }; } foo3()().call({id: 101})() // 101
So that's it folks! This is definitely only the tip of iceberg, but
this should be enough to get you started - pun intended 😁.
Let me know if you have questions/ found mistakes - thanks for reading and happy codin'!!
I personally never had issues with 'this'. Pun intended.
But i do understand why 'this' can be tricky to some.
Good article.
This article sums it up quite well. Good job! Nice!
Thanks! Glad you found it helpful 👍 | https://practicaldev-herokuapp-com.global.ssl.fastly.net/iggredible/javascript-this-101-13j5 | CC-MAIN-2019-51 | refinedweb | 1,070 | 61.83 |
* Ingo Molnar <mingo@elte.hu> wrote:> thanks for the detailed report, i think i know what's going on. Could > you try the patch below, does it fix your problem?find below the fix with a more complete changelog and with no debugging printouts. Ingo-------------->Subject: x86: fix boot crash on HIGHMEM4G && SPARSEMEMFrom: Ingo Molnar <mingo@elte.hu>Denys Fedoryshchenko reported a bootup crash when he upgradedhis system from 3GB to 4GB RAM: bug is due to HIGHMEM4G && SPARSEMEM kernels making pfn_to_page() to return an invalid pointer when the pfn is in a memory hole. The 256 MB PCI aperture at the end of RAM was not mapped by sparsemem, and hence the pfn was not valid. But set_highmem_pages_init() iterated this range without checking the pfn's validity first - crashing the bootup.this bug was probably present in the sparsemem code ever since sparsemem has been introduced in v2.6.13. It was masked due to HIGHMEM64G using larger memory regions in sparsemem_32.h: #ifdef CONFIG_X86_PAE #define SECTION_SIZE_BITS 30 #define MAX_PHYSADDR_BITS 36 #define MAX_PHYSMEM_BITS 36 #else #define SECTION_SIZE_BITS 26 #define MAX_PHYSADDR_BITS 32 #define MAX_PHYSMEM_BITS 32 #endifwhich creates 1GB sparsemem regions instead of 64MB sparsemem regions. So in practice we only ever created true sparsemem holes on x86 with HIGHMEM4G - but that was rarely used by distros.( btw., we could probably save 2MB of mem_map[]s on X86_PAE if we reduced the sparsemem region size to 256 MB. 
)Signed-off-by: Ingo Molnar <mingo@elte.hu>--- arch/x86/mm/init_32.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)Index: linux/arch/x86/mm/init_32.c===================================================================--- linux.orig/arch/x86/mm/init_32.c+++ linux/arch/x86/mm/init_32.c@@ -321,8 +321,13 @@);+ for (pfn = highstart_pfn; pfn < highend_pfn; pfn++) {+ /*+ * Holes under sparsemem might not have no mem_map[]:+ */+ if (pfn_valid(pfn))+ add_one_highpage_init(pfn_to_page(pfn), pfn, bad_ppro);+ } totalram_pages += totalhigh_pages; } #endif /* CONFIG_FLATMEM */ | http://lkml.org/lkml/2008/1/15/144 | CC-MAIN-2017-04 | refinedweb | 308 | 54.73 |
I like Angular 2 design, but am concerned it is not stable enough and I cannot easily or efficiently use a lot of pre-built pieces out there. Any thoughts.
Thanks!
Greg
I like Angular 2 design, but am concerned it is not stable enough and I cannot easily or efficiently use a lot of pre-built pieces out there. Any thoughts.
Thanks!
Greg
Given the choice of the two and the limited time, I think I would ask first…
Which one are you skilled at? Blaze or Angular2?
Have worked only a little on each. Comfortable with Typescript and MVC/P.
So about the same really.
For two months, and if you’re familiar with both, I think Blaze would be better. Angular2 support is getting there, but there’s still a few issues that people (like me) raise. So it’s better you stick to a more stable target.
Angular 1.x on the other hand is pretty stable so far. But you’d lose the typescriptiness. At least you’d have a whole slew of angular modules at your disposal.
If you do not have past Angular experience, I believe Blaze is a lot faster to get everything you need done. It’s 90%+ just HTML so as long as you know HTML you can use Blaze, and the learning curve is very light (only a few commands you actually need to use, that are all similar usage to the very basics of programming), with the problems you have to learn to resolve being few and far between once you grasp the basics.
Basically if you learn how helpers work (make a few basic ones for help with comparison/whatever your project needs - for example an “equals” helper that checks if 2 arguments are equivalent - it helps make functionality similar to a “switch” statement, another helper to reformat dates or price values to $x.xx format, etc), how to use loops, #with, and #if, you pretty much know everything you will likely need to know.
Oh, and for DOM manipulation (JQuery, etc) put the code you’d normally throw in document.ready in to the onRendered function.
Complete that, and congratulations, you know Blaze =)
(Edit: For an example of that “equals” helper I mentioned, add this in to a client js file:
Template.registerHelper("equals", function (a, b) { return (a == b); });
Now you can use it any time in any template as:
{{#if equals myBoolean true}} <p>success!</p> {{/if}}
if you need functionality similar to a switch statement in a Blaze template, you can use multiple of these!)
If you stay at Blaze, use ViewModel.
Why?
ViewModel is indeed awesome.
But if you are on a short deadline, lower learning curve might be a priority. Try ViewModel out and decide for yourself! | https://forums.meteor.com/t/new-project-blaze-or-angular-2-deliver-in-2-months/23485 | CC-MAIN-2018-51 | refinedweb | 466 | 70.94 |
Working with Controls Dynamically in Code
Each control in the Toolbox is a member of the Control class in the System.Windows.Forms namespace. Because each control in the Toolbox is a class, similar to a Window Forms class, you can dynamically create controls in code at runtime. Earlier you looked at the InitializeComponent method, which created the controls on the form. You can do the same type of dynamic code creation when writing applications. Doing so gives you flexibility in the user interface and enables you to create complete forms based on user settings that might be stored in a database or configuration file.
To find out how to create controls dynamically at runtime, add the code in Listing 3.4 to the Form_Load event of firstForm.
Listing 3.4 Creating Controls Dynamically at Runtime
VB.NET
Private Sub firstForm_Load(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load ' Declare new instances of the RadioButton control class Dim Rd1 As RadioButton = New RadioButton() Dim Rd2 As RadioButton = New RadioButton() Dim Rd3 As RadioButton = New RadioButton() ' Position the controls Rd1.Location = New System.Drawing.Point(15, 90) Rd2.Location = New System.Drawing.Point(15, 120) Rd3.Location = New System.Drawing.Point(15, 150) ' Assign a text value for these controls Rd1.Text = "Red" Rd2.Text = "White" Rd3.Text = "Blue" ' Add to the forms controls collection Me.Controls.AddRange(New Control() {Rd1, Rd2, Rd3}) ' Add event handlers for the controls AddHandler Rd1.Click, AddressOf GenericClick AddHandler Rd2.Click, AddressOf GenericClick AddHandler Rd3.Click, AddressOf GenericClick End Sub Public Sub GenericClick(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Select Case sender.text Case "Red" Me.BackColor = Color.Red Case "White" Me.BackColor = Color.White Case "Blue" Me.BackColor = Color.Blue End Select End Sub
C#
private void firstForm_Load(object sender, System.EventArgs e) { // Declare new instances of the RadioButton control class RadioButton rd1 = new RadioButton(); RadioButton rd2 = new RadioButton(); RadioButton rd3 = new RadioButton(); // Position the controls rd1.Location = new System.Drawing.Point(15, 90); rd2.Location = new System.Drawing.Point(15, 120); rd2.Location = new System.Drawing.Point(15, 150); // Assign a text value for these controls rd1.Text = "Red"; rd2.Text = "White"; rd3.Text = "Blue"; // Add to the forms controls collection this.Controls.AddRange(new Control[] {rd1, rd2, rd3}); // Add the generic event handler rd1.Click += new System.EventHandler(genericClick); rd2.Click += new System.EventHandler(genericClick); rd3.Click += new System.EventHandler(genericClick); } private void genericClick(object sender, System.EventArgs e) { RadioButton rdb; rdb = (RadioButton)sender; this.BackColor = Color.FromName(rdb.Text); }
After the code is in, press F5 to run the application. You'll see that the three RadioButton controls appear on your form. Just like any other object, declaring a new instance of a control gives you the properties, methods, and events for that control. In Visual Basic 6, you could dynamically create controls with the New keyword, and then set the Right, Left, and Top properties to position them, but you needed to set the Visible property to True for them to show up on the form. In .NET, you set the X and Y screen coordinates of the newly created controls, and then add the controls to the form's Controls collection with the AddRange method.
The AddRange method takes an array of controls and adds them to the Controls collection. After the controls are added to the Controls collection, you add an event handler delegate to tell the control what event should fire and what method should handle the event. Inside the genericClick event, you accept the System.Object and System.EventArgs parameters. By accepting the System.Object parameter, the object that's associated with the event handler is passed to the event handler. So, you can convert the System.Object to the correct type (in this case, the type is RadioButton), and have available its properties, methods, and events. This is an immensely efficient way of handling multiple events with a single event handler. In the genericClick event, the Text property of the RadioButton is converted to a Color type, which has a FromName method to convert a common color name to the actual System.Drawing.Color type.
In the Visual Basic .NET code, I used the Select Case statement as an example of another great feature in .NET. When you typed in Select Case sender.Text and pressed the Enter key, the End Select was automatically added. This same behavior occurs in If...Then statements and With...End With statements, to name a few. On Day 8, "Core Language Concepts in Visual Basic .NET and C#," you learn about the language features of both Visual Basic .NET and C#, and you'll see the advantages of the Code Editor in more detail when you're using all the language features. | https://www.informit.com/articles/article.aspx?p=31560&seqNum=3 | CC-MAIN-2021-21 | refinedweb | 804 | 61.43 |
appends one or more datasets together into a single unstructured grid More...
#include <vtkAppendFilter.h>
appends one or more datasets together into a single unstructured grid
vtkAppendFilter is a filter that appends one of more datasets into a single unstructured grid. All geometry is extracted and appended, but point attributes (i.e., scalars, vectors, normals, field data, etc.) are extracted and appended only if all datasets have the point attributes available. (For example, if one dataset has scalars but another does not, scalars will not be appended.)
You can decide to merge points that are coincident by setting
MergePoints. If this flag is set, points are merged if they are within
Tolerance radius. If a point global id array is available (point data named "GlobalPointIds"), then two points are merged if they share the same point global id, without checking for coincident point.
Definition at line 45 of file vtkAppendFilter.h.
Definition at line 49 of file vtkAppendFilterGridAlgorithm.
Reimplemented from vtkUnstructuredGridAlgorithm.
Methods invoked by print to print information about the object including superclasses.
Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from vtkUnstructuredGridAlgorithm.
Get any input of this filter.
Definition at line 57 of file vtkAppendFilter.h.
Get/Set if the filter should merge coincidental points Note: The filter will only merge points if the ghost cell array doesn't exist Defaults to Off.
Get/Set the tolerance to use to find coincident points when
MergePoints is
true.
Default is 0.0.
This is simply passed on to the internal vtkLocator used to merge points.
vtkLocator::SetTolerance.
Get/Set whether Tolerance is treated as an absolute or relative tolerance.
The default is to treat it as an absolute tolerance. When off, the tolerance is multiplied by the diagonal of the bounding box of the input.
Remove a dataset from the list of data to append.
Returns a copy of the input array.
Modifications to this list will not be reflected in the actual inputs.
Set/get the desired precision for the output types.
See the documentation for the vtkAlgorithm::Precision enum for an explanation of the available precision settings.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkUnstructuredGridAlgorithm.
This is called by the superclass.
This is the method you should override.
Reimplemented from vt 126 of file vtkAppendFilter.h.
Definition at line 130 of file vtkAppendFilter.h.
Definition at line 132 of file vtkAppendFilter.h.
Definition at line 133 of file vtkAppendFilter.h.
Definition at line 137 of file vtkAppendFilter.h. | https://vtk.org/doc/nightly/html/classvtkAppendFilter.html | CC-MAIN-2021-17 | refinedweb | 434 | 52.15 |
alarm - schedule an alarm signal
#include <unistd.h> unsigned int alarm(unsigned int seconds);
The alarm() function causes the system to generate a SIGALRM signal for the process after the number of real-time seconds specified by seconds have elapsed. Processor scheduling delays may prevent the process from handling the signal as soon as it is generated.
If seconds is 0, a pending alarm request, if any, is cancelled.
Alarm requests are not stacked; only one SIGALRM generation can be scheduled in this manner; if the SIGALRM signal has not yet been generated, the call will result in rescheduling the time at which the SIGALRM signal will be generated.
Interactions between alarm() and any of setitimer(), ualarm() or usleep() are unspecified.
If there is a previous alarm() request with time remaining, alarm() returns a non-zero value that is the number of seconds until the previous request would have generated a SIGALRM signal. Otherwise, alarm() returns 0.
The alarm() function is always successful, and no return value is reserved to indicate an error.
None.
The fork() function clears pending alarms in the child process. A new process image created by one of the exec functions inherits the time left to an alarm signal in the old process' image.
None.
exec, fork(), getitimer(), pause(), sigaction(), ualarm(), usleep(), <signal.h>, <unistd.h>.
Derived from Issue 1 of the SVID. | http://pubs.opengroup.org/onlinepubs/7990989775/xsh/alarm.html | CC-MAIN-2014-10 | refinedweb | 227 | 62.68 |
--- David Abrahams <dave at boost-consulting.com> wrote: > We're using the builtin facilities provided by the GCC runtime to > demangle those strings. The only expanation I can imagine is that I > got the meaning of __GNUC_MINOR__ wrong: > > # ifndef BOOST_PYTHON_HAVE_GCC_CP_DEMANGLE > # if defined(__GNUC__) \ > && ((__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))) \ > && !defined(__EDG_VERSION__) > # define BOOST_PYTHON_HAVE_GCC_CP_DEMANGLE > # endif > # endif > > Isn't that the part of the version number after the first decimal > point? I think your interpretation of the macros is correct, e.g.: __GNUC__ = 3 __GNUC_MINOR__ = 4 __GNUC_PATCHLEVEL__ = 0 Some observations: Unmangling works correctly under Redhat 8.0 gcc3.2, Redhat WS3 gcc 3.2.3, Mac OS 10.3 gcc 3.3. User-defined types (incl. std::string etc.) are also unmangled correctly with gcc 3.4.0 (under Redhat 8.0) but the builtin types are not unmangled. With regards to name unmangling, is there a distinction anywhere in Boost.Python between user-defined types and builtin types? If not the strange behavior must be a feature of gcc 3.4.0 in which case I should probably file a bug report. Ralf __________________________________ Do you Yahoo!? Protect your identity with Yahoo! Mail AddressGuard | http://mail.python.org/pipermail/cplusplus-sig/2004-May/007080.html | CC-MAIN-2013-20 | refinedweb | 193 | 62.34 |
Talk:Systemd-nspawn
Contents
user namespaces
systemd-nspawn never uses user namespaces, as you can see from the source. User namespaces do not appear to work with a chroot at all right now, because you can't enter one while in a chroot and you can't use chroot while in a user namespace. - thestinger (talk) 19:35, 23 April 2014 (UTC)
- The report is still open, I'm restoring the link here just in case: FS#36969. The removed content is [1]. -- Kynikos (talk) 01:45, 25 April 2014 (UTC)
- In conclusion from above mentioning user namespaces was not relevant on this page (pity really). So the only way to restrict the nspawn-container appears to be limiting its capabilities on start-up (as per man systemd-nspawn). Regarding FS#36969: it was originally opened for lxc-containers anyway and those appear to support user namespaces now. Hence, the only question remaining for this article at this point would be, if there are any remaining issues arising for systemd in general when activating CONFIG_USER_NS for lxc (opinion on that?).
- I have added the bug and a couple links with background info to talk:linux containers so the reference does not get lost.
- --Indigo (talk) 08:56, 3 May 2014 (UTC)
systemd-nspawn as a build environment
I've been struggling trying to set this up and i assume others will as well. Would be nice to have an example of a build workflow using this tool on this or on a seperate page. Captaincurrie (talk) 18:32, 19 January 2015 (UTC)
- The devtools package implements this for Arch packaging, and is used for building everything in the repositories. It's as simple as replacing
makepkgwith
extra-i686-build+
extra-x86_64-build. -- thestinger 18:41, 19 January 2015 (UTC)
- Cool, i'll give that a try. Thanks :) Captaincurrie (talk) 10:05, 20 January 2015 (UTC)
systemd-nspawn usage examples
This page needs lots of awesome usage examples because its such an awesome tool. Please give suggestions. Captaincurrie (talk) 10:08, 20 January 2015 (UTC))
Wayland desktop environment inside nspawn
It would be great if someone with expertise wrote a section regarding starting graphical environments inside nspawn containers. It looks like there is some info on Github. This example shows how to run desktop environments in nspawn containers win kwin_wayland compositor. It should be possible to achieve this with mutter too, as it even supports nested mode with something like mutter --wayland --nested. Also we should be able to open new dbus session with something like eval $(dbus-launch --sh-syntax). Also it would be great if someone explained which packages could be omitted inside the container (like we don't need xorg org wayland installed if I get it right) on some popular distros.
—This unsigned comment is by Unb0rn (talk) 20:07, 23 June 2018. Please sign your posts with ~~~~!
linux-firmware causing issues with systemd-tmpfiles-setup.service - still relevant?
The systemd bug report connected with the issue was closed 27 Apr 2018: Do issues remain or is the fix good enough to remove the note?
—This unsigned comment is by Buovjaga (talk) 09:12, 25 October 2018. Please sign your posts with ~~~~! | https://wiki.archlinux.org/index.php?title=Talk:Systemd-nspawn&direction=prev&oldid=561573 | CC-MAIN-2020-16 | refinedweb | 535 | 62.58 |
The last component of this quickstart is retrieving your call logs using the Twilio REST API. You can find your logs by going to the the Logs page of your account on the Twilio web site, but getting access to this data programmatically is useful for automated reporting, or building other fully-featured systems like a voicemail application.
To get a list of recent calls, perform an HTTP GET request to the Calls resource URI.
GET{AccountSid}/Calls/{CallSid}
Here is all the code you need to retrieve your list of calls.
This tutorial assumes you have a Python development environment with
pip and
the
twilio-python helper library. Please see our post on [setting up your
environment][devenvironment] if you need help installing those programs.
from twilio.rest import Client # To find these visit account_sid = "ACXXXXXXXXXXXXXXXXX" auth_token = "YYYYYYYYYYYYYYYYYY" client = Client(account_sid, auth_token) for call in client.api.account.calls.list(): print("From: " + call.from_formatted + " To: " + call.to_formatted)
Then open the Terminal and run the above script to retrieve your list of calls:
$ python get_logs.py
For more information, check out the Twilio REST API documentation. If you have questions, comments, or suggestions about this quickstart, then give us a howl in the forums.
Next: Turn your browser into a phone with Twilio Client »
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the Twilio tag on Stack Overflow. | https://www.twilio.com/docs/quickstart/python/rest/call-log | CC-MAIN-2017-47 | refinedweb | 243 | 63.49 |
I tried this and found out that calling addConnector() post-start won't do anything. But I
can make a tiny sub-class that will:
public class BrokerService2
extends BrokerService
{
@Override
public TransportConnector addConnector(final TransportConnector connector) throws
Exception {
TransportConnector c = super.addConnector(connector);
if (isStarted()) {
c.setBrokerService(this);
c = startTransportConnector(c);
}
return c;
}
}
It looks like the setBrokerService() + startTransportConnector() are the calls that BrokerService.start()
would end up calling anyways.
The above code also looks like it hooks up to the bits on shutdown, so BrokerService.stop()
will actually stop this connector added post-start.
Does this look correct?
Is there any risk to adding a connector post-start like this?
--jason
On Dec 8, 2011, at 3:50 PM, Jason Dillon wrote:
> Is it possible to add a new transport connector to a broker post starting the BrokerService?
>
> I'd like to start up a broker w/o any transports for vm:// use only, but may need to
configure a transport (tcp or ssl) after the application is already up and using the broker,
so it would be a pita to stop the broker add the transport connector and then start it again.
>
> Is this possible? If so any thing to watch out for here?
>
> --jason | http://mail-archives.apache.org/mod_mbox/activemq-users/201112.mbox/%3C5F86DC59-0053-47B8-B95F-3F0FCE74FA6B@planet57.com%3E | CC-MAIN-2015-18 | refinedweb | 208 | 56.45 |
I have IntelliJ IDEA 9.0.2 build 95.66. I am trying to debug an AIR app that loads other Flex module swf using flash.net.URLLoader.
I have the AIR application and the Flex modules loaded within IDEA as modules. When I debug the AIR application by launching it from within IDEA, I can see the break points within the code of AIR application to take effect.
However, I am unable to debug the modules that get loaded within the AIR application.
When I connect IDEA to externally running adl, I can see the trace statements within IDEA console but am still unable to debug the loaded swf modules.
Has anybody had success debugging module swfs? If you have, can you share your debugger configuration and possibly walk me through the configuration porocess? Thanks.
I have IntelliJ IDEA 9.0.2 build 95.66. I am trying to debug an AIR app that loads other Flex module swf using flash.net.URLLoader.
How do breakpoints in modules that you are unable to debug look like? I mean is there a (x) symbol drawn over a red circle?
If breakpoints just remain red circle without anything drawn on top of it then you need to make sure that they are compiled with debug information. If you are sure that they are then please provide some more information about you project configuration. The best thing to investigate is a sample project where the issue is reproducible. If you can't provide one then following information could help us:
- Flex SDK version
- how do you compile main AIR application and modules
- code snippet that loads module
- logs from Flex debugger. To get logs please do following:
-- remove old logs from <idea.system.path>/log/*.* (idea.system.path property is set in the file <IDEA installation>/bin/idea.properties)
-- edit file <IDEA installation>/bin/log.xml and add following category there:
<log4j:configuration>
<category name="com.intellij.lang.javascript.flex.debug">
<priority value="DEBUG"/>
</category>
...
-- launch IDEA, reproduce the problem
After that please attach
<idea.system.path>/log/idea.log file here.
Hi Alexander
Thanks for answering. Here are the details you requested
* Break points for the AIR app appear red without a cross in them.
* Break points for the loaded modules are red and then change to have a black cross in them.
Attached is a screenshot of my IDEA project showing the break point with cross in it.
Flex version: 3.3.0.4852.
I have this version sdk configured in IDEA and use it to build both the AIR app and the module swfs.
I launched debugger from within IDEA for the AIR app project module.
Here is the code snippet for loading module swf:
var loader : URLLoader = new URLLoader();
loader.dataFormat = URLLoaderDataFormat.BINARY;
loader.addEventListener(Event.COMPLETE, handleDataLoadComplete, false, 0, true);
loader.addEventListener(IOErrorEvent.IO_ERROR, handleIOError, false, 0, true);
loader.load(new URLRequest(""));
I changed the debug log settings in IDEA and deleted my old idea.log file. After restarting IDEA, I dont see the log file anymore.
I won't be able to package a sample application for you, but I can provide all other information you may need.
Thanks for your help.
Attachment(s):
Picture 9.png
Alexander,
I was wondering if you got the chance to look at Nirupama's question. I work in the same company as her and both of us (and other IDEA users) can do everything but debug our AIR app in the IDE of our choice.
Hi Nirupama and Manish!
I'm sorry for delay, but I'm really interested in finding the cause of the problem.
Absence of logs is rather strange. Please make sure that there are no syntax errors in log.xml file and that you are looking for logs in a correct place.
But I'm not sure that logs would give me a clue. The best thing would be reproducing the problem. I created a sample project myself but I have breakpoints working in loaded module. Could you please take a look at attached project if it works for you and modify it in order to reproduce non-breakpointable code?
My workflow:
- open project
- launch 'appContext' run configuration
- launch 'air-app.xml' run confuguration (Debug mode)
- click 'Load module' button in started AIR application
- breakpoint in FlexModule.mxml file is reached (by some reason I sometimes need to click 'Load Module' button twice).
Attachment(s):
DebugLoadedModule.zip
Hi Alexander
I was able to debug the sample application/module you have attached. I then modified it to use the exact loading mechanism that we have in our code and I was able to debug that as well.
The problem in our application could be because of the layers that we have. Here are the details:
adchemy-lib1.swc
- declares a namespace with a specified hand-written manifest.xml
adchemy-lib2.swc depends on adchemy-lib1.swc
- declares same namespace as adchemy-lib1.swc but has its own manifest.xml
When I import both of these modules in IDEA, it automatically creates the module dependency - adchemy-lib2.swc depends on adchemy-lib1.swc.
With no further changes, I can compile IDEA module adchemy-lib1.swc without any issue. However, I cannot compile adchemy-lib2.swc as there is a compilation error that says it cannot find classes defined in namespace for classes defined in adchemy-lib1.swc's manifest.
These 2 modules compile fine with flex-mojos on command line. All other application modules rely on these 2 libraries in our product.
Do you have any idea why I will not be able to compile the second library? I can provide any other info you need.
Thanks
Nirupama
Please make sure that custom flex compiler configuration files (target/...-config-report.xml) are used for compilation. Details are here. If it is true then please attach -config-report.xml files for both swc modules.
Hi Alexander
I am attaching the config report files for the 2 modules in our codebase. adchemy-flex-shell module depends on adchemy-flex-common module.
To experiment further, I combined the 2 modules into one with a single manifest and then imported this single module into IDEA. With this configuration, I was able to debug the loaded flex modules. Yay!
Before I go down this path and change our code base, I'd like to know if there is some configuration I am missing to add support for multiple manifest files under same namespace. Please let me know asap.
Thanks for all your help on this!
Nirupama
Attachment(s):
adchemy-flex-shell-1.8-SNAPSHOT-config-report.xml
adchemy-flex-common-1.8-SNAPSHOT-config-report.xml
To resolve compilation issues without changing module structure please try to change line 43 in adchemy-flex-shell-1.8-SNAPSHOT-config-report.xml from
<path-element>/Users/nirupama/.m2/repository/com/adchemy/adchemy-flex-common/1.8-SNAPSHOT/adchemy-flex-common-1.8-SNAPSHOT.swc</path-element>
to
<path-element>/Users/nirupama/trunk/adchemy-flex/adchemy-flex-common/target/adchemy-flex-common-1.8-SNAPSHOT.swc</path-element>
As for debugging issues - they are caused by Adobe's debugger bug:
But I still would like to get a sample project to reproduce - may be we could find a workaround in IDEA. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206948885-Debugging-Flex-module-swf-in-IDEA | CC-MAIN-2019-18 | refinedweb | 1,216 | 59.5 |
Wouter Verhelst wrote: > > On Fri, 20 Apr 2001 idalton@ferret.phonewave.net wrote: > > > (I've just subscribed to the list, so am replying via the list archive) > > > > > > Hamish Moffatt <[8]hamish@debian.org> so spoke: > > > > On Tue, Mar 06, 2001 at 09:56:07AM -0300, Daniel Macedo Lorenzini wrote: > > > inittab with the inittab.real the system starts, > > > but when i try to > > > install some packages with apt-get or dpkg i get > > > some error messages > > > like Segmentation fault of Bus error, which sounds > > > very bad to me. > > > i have tryied installing it again, but the problem > > > repeats. i tryied to > > > install the kernel-souce deb package to compile the > > > kernel, but the dpkg > > > said that the end of file was strange and couldn t > > > install. > > > > > > anyone has a clue of what is going on? > > > >). > > That is almost correct (an "real" 68040 does not have an FPU either, it's > only got a MMU). > Quite wrong! A 68040 has both an built in MMU and FPU. The 68030 has a built in MMU but requires an external FPU, usually a 68882. > > To date: > > > > dpkg-reconfigure fails to run. > > apt-get fails to run. > > > > dpkg -i <foo.deb> runs. > > I have experienced the same symptoms on a Centris 610 -- yes, with LC040. > > The system works for the most part, but when it comes to dealing with > floating point operations, things fail. > > Never mind that ;-) > These problems are caused by LC040 processors with a bad mask revision. Apple seems to have used a lot of them. > -- > wouter dot verhelst at advalvas in belgium > #ifdef NOT_A_GODDAMN_YANK > { 0x10, "Minimise Delay" }, > #else > { 0x10, "Minimize Delay" }, > #endif > /* ipchains.c */ > > -- > To UNSUBSCRIBE, email to debian-68k-request@lists.debian.org > with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org -- Ray Knight audilvr@speakeasy.org 1983 XJ900 Seca | https://lists.debian.org/debian-68k/2001/04/msg00105.html | CC-MAIN-2016-30 | refinedweb | 301 | 75.71 |
Stronghold.
pip install django-stronghold
Add stronghold to your INSTALLED_APPS in your Django settings file
INSTALLED_APPS = ( #... 'stronghold', )
Then add the stronghold middleware to your MIDDLEWARE_CLASSES in your Django settings file
from stronghold.decorators import public @public def someview(request): # do some work #...
for class based views (decorator))
for class based views (mixin)
from stronghold.views import StrongholdPublicMixin class SomeView(StrongholdPublicMixin, View): pass
Configuration (optional)
STRONGHOLD_DEFAULTS
Use Strongholds defaults in addition to your own settings.
Default:
STRONGHOLD_DEFAULTS = True
You can add a tuple of url regexes in your settings file with the
STRONGHOLD_PUBLIC_URLS setting. Any url that matches against these patterns
will be made public without using the
@public decorator.
STRONGHOLD_PUBLIC_URLS
Default:
STRONGHOLD_PUBLIC_URLS = ()
If STRONGHOLD_DEFAULTS is True STRONGHOLD_PUBLIC_URLS contains:
(:
STRONGHOLD_PUBLIC_NAMED_URLS = ()
If STRONGHOLD_DEFAULTS is True additionally we search for
django.contrib.auth
if it exists, we add the login and logout view names to
STRONGHOLD_PUBLIC_NAMED_URLS
STRONGHOLD_USER_TEST_FUNC
Optionally, set STRONGHOLD_USER_TEST_FUNC to a callable to limit access to users
that pass a custom test. The callback receives a
User object and should
return
True if the user is authorized. This is equivalent to decorating a
view with
user_passes_test.
Example:
STRONGHOLD_USER_TEST_FUNC = lambda user: user.is_staff
Default:
STRONGHOLD_USER_TEST_FUNC = lambda user: user.is_authenticated()
Compatiblity
Tested with:
- Django 1.4.x
- Django 1.5.x
- Django 1.6.x
- Django 1.7.x
- Django 1.8.x
- Django 1.9.x
- Django 1.10.x
Contribute
See CONTRIBUTING.md | https://devhub.io/repos/mgrouchy-django-stronghold | CC-MAIN-2020-05 | refinedweb | 231 | 52.97 |
import tensorflow as tf import numpy as np
Tensors are multi-dimensional arrays with a uniform type (called a
dtype). You can see all supported
dtypes at
tf.dtypes.DType.
If you're familiar with NumPy, tensors are (kind of) like
np.arrays.
All tensors are immutable like Python numbers and strings: you can never update the contents of a tensor, only create a new one.
Basics
Let's create some basic tensors.
Here is a "scalar" or "rank-0" tensor . A scalar contains a single value, and no "axes".
# This will be an int32 tensor by default; see "dtypes" below. rank_0_tensor = tf.constant(4) print(rank_0_tensor)
tf.Tensor(4, shape=(), dtype=int32)
A "vector" or "rank-1" tensor is like a list of values. A vector has one axis:
# Let's make this a float tensor. rank_1_tensor = tf.constant([2.0, 3.0, 4.0]) print(rank_1_tensor)
tf.Tensor([2. 3. 4.], shape=(3,), dtype=float32)
A "matrix" or "rank-2" tensor has two axes:
# If you want to be specific, you can set the dtype (see below) at creation time rank_2_tensor = tf.constant([[1, 2], [3, 4], [5, 6]], dtype=tf.float16) print(rank_2_tensor)
tf.Tensor( [[1. 2.] [3. 4.] [5. 6.]], shape=(3, 2), dtype=float16)
Tensors may have more axes; here is a tensor with three axes:
# There can be an arbitrary number of # axes (sometimes called "dimensions") rank_3_tensor = tf.constant([ [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], [[10, 11, 12, 13, 14], [15, 16, 17, 18, 19]], [[20, 21, 22, 23, 24], [25, 26, 27, 28, 29]],]) print(rank_3_tensor)
tf.Tensor( [[[ 0 1 2 3 4] [ 5 6 7 8 9]] [[10 11 12 13 14] [15 16 17 18 19]] [[20 21 22 23 24] [25 26 27 28 29]]], shape=(3, 2, 5), dtype=int32)
There are many ways you might visualize a tensor with more than two axes.
You can convert a tensor to a NumPy array either using
np.array or the
tensor.numpy method:
np.array(rank_2_tensor)
array([[1., 2.], [3., 4.], [5., 6.]], dtype=float16)
rank_2_tensor.numpy()
array([[1., 2.], [3., 4.], [5., 6.]], dtype=float16)
Tensors often contain floats and ints, but have many other types, including:
- complex numbers
- strings
The base
tf.Tensor class requires tensors to be "rectangular"---that is, along each axis, every element is the same size. However, there are specialized types of tensors that can handle different shapes:
- Ragged tensors (see RaggedTensor below)
- Sparse tensors (see SparseTensor below)
You can do basic math on tensors, including addition, element-wise multiplication, and matrix multiplication.
a = tf.constant([[1, 2], [3, 4]]) b = tf.constant([[1, 1], [1, 1]]) # Could have also said `tf.ones([2,2])` print(tf.add(a, b), "\n") print(tf.multiply(a, b), "\n") print(tf.matmul(a, b), "\n")
tf.Tensor( [[2 3] [4 5]], shape=(2, 2), dtype=int32) tf.Tensor( [[1 2] [3 4]], shape=(2, 2), dtype=int32) tf.Tensor( [[3 3] [7 7]], shape=(2, 2), dtype=int32)
print(a + b, "\n") # element-wise addition print(a * b, "\n") # element-wise multiplication print(a @ b, "\n") # matrix multiplication
tf.Tensor( [[2 3] [4 5]], shape=(2, 2), dtype=int32) tf.Tensor( [[1 2] [3 4]], shape=(2, 2), dtype=int32) tf.Tensor( [[3 3] [7 7]], shape=(2, 2), dtype=int32)
Tensors are used in all kinds of operations (ops).
c = tf.constant([[4.0, 5.0], [10.0, 1.0]]) # Find the largest value print(tf.reduce_max(c)) # Find the index of the largest value print(tf.math.argmax(c)) # Compute the softmax print(tf.nn.softmax(c))
tf.Tensor(10.0, shape=(), dtype=float32) tf.Tensor([1 0], shape=(2,), dtype=int64) tf.Tensor( [[2.6894143e-01 7.3105854e-01] [9.9987662e-01 1.2339458e-04]], shape=(2, 2), dtype=float32)
About shapes
Tensors have shapes. Some vocabulary:
- Shape: The length (number of elements) of each of the axes of a tensor.
- Rank: Number of tensor axes. A scalar has rank 0, a vector has rank 1, a matrix is rank 2.
- Axis or Dimension: A particular dimension of a tensor.
- Size: The total number of items in the tensor, the product of the shape vector's elements.
Tensors and
tf.TensorShape objects have convenient properties for accessing these:
rank_4_tensor = tf.zeros([3, 2, 4, 5])
print("Type of every element:", rank_4_tensor.dtype) print("Number of axes:", rank_4_tensor.ndim) print("Shape of tensor:", rank_4_tensor.shape) print("Elements along axis 0 of tensor:", rank_4_tensor.shape[0]) print("Elements along the last axis of tensor:", rank_4_tensor.shape[-1]) print("Total number of elements (3*2*4*5): ", tf.size(rank_4_tensor).numpy())
Type of every element: <dtype: 'float32'> Number of axes: 4 Shape of tensor: (3, 2, 4, 5) Elements along axis 0 of tensor: 3 Elements along the last axis of tensor: 5 Total number of elements (3*2*4*5): 120
While axes are often referred to by their indices, you should always keep track of the meaning of each. Often axes are ordered from global to local: The batch axis first, followed by spatial dimensions, and features for each location last. This way feature vectors are contiguous regions of memory.
Indexing
Single-axis indexing
TensorFlow follows standard Python indexing rules, similar to indexing a list or a string in Python, and the basic rules for NumPy indexing.
- indexes start at
0
- negative indices count backwards from the end
- colons,
start:stop:step
rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34]) print(rank_1_tensor.numpy())
[ 0 1 1 2 3 5 8 13 21 34]
Indexing with a scalar removes the axis:
print("First:", rank_1_tensor[0].numpy()) print("Second:", rank_1_tensor[1].numpy()) print("Last:", rank_1_tensor[-1].numpy())
First: 0 Second: 1 Last: 34
Indexing with a
:slice keeps the axis:
print("Everything:", rank_1_tensor[:].numpy()) print("Before 4:", rank_1_tensor[:4].numpy()) print("From 4 to the end:", rank_1_tensor[4:].numpy()) print("From 2, before 7:", rank_1_tensor[2:7].numpy()) print("Every other item:", rank_1_tensor[::2].numpy()) print("Reversed:", rank_1_tensor[::-1].numpy())
Everything: [ 0 1 1 2 3 5 8 13 21 34] Before 4: [0 1 1 2] From 4 to the end: [ 3 5 8 13 21 34] From 2, before 7: [1 2 3 5 8] Every other item: [ 0 1 3 8 21] Reversed: [34 21 13 8 5 3 2 1 1 0]
Multi-axis indexing
Higher rank tensors are indexed by passing multiple indices.
The exact same rules as in the single-axis case apply to each axis independently.
print(rank_2_tensor.numpy())
[[1. 2.] [3. 4.] [5. 6.]]
Passing an integer for each index, the result is a scalar.
# Pull out a single value from a 2-rank tensor print(rank_2_tensor[1, 1].numpy())
4.0
You can index using any combination of integers and slices:
# Get row and column tensors print("Second row:", rank_2_tensor[1, :].numpy()) print("Second column:", rank_2_tensor[:, 1].numpy()) print("Last row:", rank_2_tensor[-1, :].numpy()) print("First item in last column:", rank_2_tensor[0, -1].numpy()) print("Skip the first row:") print(rank_2_tensor[1:, :].numpy(), "\n")
Second row: [3. 4.] Second column: [2. 4. 6.] Last row: [5. 6.] First item in last column: 2.0 Skip the first row: [[3. 4.] [5. 6.]]
Here is an example with a 3-axis tensor:
print(rank_3_tensor[:, :, 4])
tf.Tensor( [[ 4 9] [14 19] [24 29]], shape=(3, 2), dtype=int32)
Read the tensor slicing guide to learn how you can apply indexing to manipulate individual elements in your tensors.
Manipulating Shapes
Reshaping a tensor is of great utility.
# Shape returns a `TensorShape` object that shows the size along each axis x = tf.constant([[1], [2], [3]]) print(x.shape)
(3, 1)
# You can convert this object into a Python list, too print(x.shape.as_list())
[3, 1]
You can reshape a tensor into a new shape. The
tf.reshape operation is fast and cheap as the underlying data does not need to be duplicated.
# You can reshape a tensor to a new shape. # Note that you're passing in a list reshaped = tf.reshape(x, [1, 3])
print(x.shape) print(reshaped.shape)
(3, 1) (1, 3)
The data maintains its layout in memory and a new tensor is created, with the requested shape, pointing to the same data. TensorFlow uses C-style "row-major" memory ordering, where incrementing the rightmost index corresponds to a single step in memory.
print(rank_3_tensor)
tf.Tensor( [[[ 0 1 2 3 4] [ 5 6 7 8 9]] [[10 11 12 13 14] [15 16 17 18 19]] [[20 21 22 23 24] [25 26 27 28 29]]], shape=(3, 2, 5), dtype=int32)
If you flatten a tensor you can see what order it is laid out in memory.
# A `-1` passed in the `shape` argument says "Whatever fits". print(tf.reshape(rank_3_tensor, [-1]))
tf.Tensor( [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29], shape=(30,), dtype=int32)
Typically the only reasonable use of
tf.reshape is to combine or split adjacent axes (or add/remove
1s).
For this 3x2x5 tensor, reshaping to (3x2)x5 or 3x(2x5) are both reasonable things to do, as the slices do not mix:
print(tf.reshape(rank_3_tensor, [3*2, 5]), "\n") print(tf.reshape(rank_3_tensor, [3, -1]))
tf.Tensor( [[ 0 1 2 3 4] [ 5 6 7 8 9] [10 11 12 13 14] [15 16 17 18 19] [20 21 22 23 24] [25 26 27 28 29]], shape=(6, 5), dtype=int32) tf.Tensor( [[ 0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19] [20 21 22 23 24 25 26 27 28 29]], shape=(3, 10), dtype=int32)
Reshaping will "work" for any new shape with the same total number of elements, but it will not do anything useful if you do not respect the order of the axes.
Swapping axes in
tf.reshape does not work; you need
tf.transpose for that.
# Bad examples: don't do this # You can't reorder axes with reshape. print(tf.reshape(rank_3_tensor, [2, 3, 5]), "\n") # This is a mess print(tf.reshape(rank_3_tensor, [5, 6]), "\n") # This doesn't work at all try: tf.reshape(rank_3_tensor, [7, -1]) except Exception as e: print(f"{type(e).__name__}: {e}")
tf.Tensor( [[[ 0 1 2 3 4] [ 5 6 7 8 9] [10 11 12 13 14]] [[15 16 17 18 19] [20 21 22 23 24] [25 26 27 28 29]]], shape=(2, 3, 5), dtype=int32) tf.Tensor( [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23] [24 25 26 27 28 29]], shape=(5, 6), dtype=int32) InvalidArgumentError: Input to reshape is a tensor with 30 values, but the requested shape requires a multiple of 7 [Op:Reshape]
You may run across not-fully-specified shapes. Either the shape contains a
None (an axis-length is unknown) or the whole shape is
None (the rank of the tensor is unknown).
Except for tf.RaggedTensor, such shapes will only occur in the context of TensorFlow's symbolic, graph-building APIs:
More on
DTypes.
You can cast from type to type.
the_f64_tensor = tf.constant([2.2, 3.3, 4.4], dtype=tf.float64) the_f16_tensor = tf.cast(the_f64_tensor, dtype=tf.float16) # Now, cast to an uint8 and lose the decimal precision the_u8_tensor = tf.cast(the_f16_tensor, dtype=tf.uint8) print(the_u8_tensor)
tf.Tensor([2 3 4], shape=(3,), dtype=uint8)
Broadcasting
Broadcasting is a concept borrowed from the equivalent feature in NumPy. In short, under certain conditions, smaller tensors are "stretched" automatically to fit larger tensors when running combined operations on them.
The simplest and most common case is when you attempt to multiply or add a tensor to a scalar. In that case, the scalar is broadcast to be the same shape as the other argument.
x = tf.constant([1, 2, 3]) y = tf.constant(2) z = tf.constant([2, 2, 2]) # All of these are the same computation print(tf.multiply(x, 2)) print(x * y) print(x * z)
tf.Tensor([2 4 6], shape=(3,), dtype=int32) tf.Tensor([2 4 6], shape=(3,), dtype=int32) tf.Tensor([2 4 6], shape=(3,), dtype=int32)
Likewise, axes with length 1 can be stretched out to match the other arguments. Both arguments can be stretched in the same computation.
In this case a 3x1 matrix is element-wise multiplied by a 1x4 matrix to produce a 3x4 matrix. Note how the leading 1 is optional: The shape of y is
[4].
# These are the same computations x = tf.reshape(x,[3,1]) y = tf.range(1, 5) print(x, "\n") print(y, "\n") print(tf.multiply(x, y))
tf.Tensor( [[1] [2] [3]], shape=(3, 1), dtype=int32) tf.Tensor([1 2 3 4], shape=(4,), dtype=int32) tf.Tensor( [[ 1 2 3 4] [ 2 4 6 8] [ 3 6 9 12]], shape=(3, 4), dtype=int32)
Here is the same operation without broadcasting:
x_stretch = tf.constant([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]) y_stretch = tf.constant([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) print(x_stretch * y_stretch) # Again, operator overloading
tf.Tensor( [[ 1 2 3 4] [ 2 4 6 8] [ 3 6 9 12]], shape=(3, 4), dtype=int32)
Most of the time, broadcasting is both time and space efficient, as the broadcast operation never materializes the expanded tensors in memory.
You see what broadcasting looks like using
tf.broadcast_to.
print(tf.broadcast_to(tf.constant([1, 2, 3]), [3, 3]))
tf.Tensor( [[1 2 3] [1 2 3] [1 2 3]], shape=(3, 3), dtype=int32)
Unlike a mathematical op, for example,
broadcast_to does nothing special to save memory. Here, you are materializing the tensor.
It can get even more complicated. This section of Jake VanderPlas's book Python Data Science Handbook shows more broadcasting tricks (again in NumPy).
tf.convert_to_tensor
Most ops, like
tf.matmul and
tf.reshape take arguments of class
tf.Tensor. However, you'll notice in the above case, Python objects shaped like tensors are accepted.
Most, but not all, ops call
convert_to_tensor on non-tensor arguments. There is a registry of conversions, and most object classes like NumPy's
ndarray,
TensorShape, Python lists, and
tf.Variable will all convert automatically.
See
tf.register_tensor_conversion_function for more details, and if you have your own type you'd like to automatically convert to a tensor.
Ragged Tensors
A tensor with variable numbers of elements along some axis is called "ragged". Use
tf.ragged.RaggedTensor for ragged data.
For example, This cannot be represented as a regular tensor:
ragged_list = [ [0, 1, 2, 3], [4, 5], [6, 7, 8], [9]]
try: tensor = tf.constant(ragged_list) except Exception as e: print(f"{type(e).__name__}: {e}")
ValueError: Can't convert non-rectangular Python sequence to Tensor.
Instead create a
tf.RaggedTensor using
tf.ragged.constant:
ragged_tensor = tf.ragged.constant(ragged_list) print(ragged_tensor)
<tf.RaggedTensor [[0, 1, 2, 3], [4, 5], [6, 7, 8], [9]]>
The shape of a
tf.RaggedTensor will contain some axes with unknown lengths:
print(ragged_tensor.shape)
(4, None)
String tensors
tf.string is a
dtype, which is to say you can represent data as strings (variable-length byte arrays) in tensors.
The strings are atomic and cannot be indexed the way Python strings are. The length of the string is not one of the axes of the tensor. See
tf.strings for functions to manipulate them.
Here is a scalar string tensor:
# Tensors can be strings, too here is a scalar string. scalar_string_tensor = tf.constant("Gray wolf") print(scalar_string_tensor)
tf.Tensor(b'Gray wolf', shape=(), dtype=string)
And a vector of strings:
# If you have three string tensors of different lengths, this is OK. tensor_of_strings = tf.constant(["Gray wolf", "Quick brown fox", "Lazy dog"]) # Note that the shape is (3,). The string length is not included. print(tensor_of_strings)
tf.Tensor([b'Gray wolf' b'Quick brown fox' b'Lazy dog'], shape=(3,), dtype=string)
In the above printout the
b prefix indicates that
tf.string dtype is not a unicode string, but a byte-string. See the Unicode Tutorial for more about working with unicode text in TensorFlow.
If you pass unicode characters they are utf-8 encoded.
tf.constant("🥳👍")
<tf.Tensor: shape=(), dtype=string, numpy=b'\xf0\x9f\xa5\xb3\xf0\x9f\x91\x8d'>
Some basic functions with strings can be found in
tf.strings, including
tf.strings.split.
# You can use split to split a string into a set of tensors print(tf.strings.split(scalar_string_tensor, sep=" "))
tf.Tensor([b'Gray' b'wolf'], shape=(2,), dtype=string)
# ...but it turns into a `RaggedTensor` if you split up a tensor of strings, # as each string might be split into a different number of parts. print(tf.strings.split(tensor_of_strings))
<tf.RaggedTensor [[b'Gray', b'wolf'], [b'Quick', b'brown', b'fox'], [b'Lazy', b'dog']]>
And
tf.string.to_number:
text = tf.constant("1 10 100") print(tf.strings.to_number(tf.strings.split(text, " ")))
tf.Tensor([ 1. 10. 100.], shape=(3,), dtype=float32)
Although you can't use
tf.cast to turn a string tensor into numbers, you can convert it into bytes, and then into numbers.
byte_strings = tf.strings.bytes_split(tf.constant("Duck")) byte_ints = tf.io.decode_raw(tf.constant("Duck"), tf.uint8) print("Byte strings:", byte_strings) print("Bytes:", byte_ints)
Byte strings: tf.Tensor([b'D' b'u' b'c' b'k'], shape=(4,), dtype=string) Bytes: tf.Tensor([ 68 117 99 107], shape=(4,), dtype=uint8)
# Or split it up as unicode and then decode it unicode_bytes = tf.constant("アヒル 🦆") unicode_char_bytes = tf.strings.unicode_split(unicode_bytes, "UTF-8") unicode_values = tf.strings.unicode_decode(unicode_bytes, "UTF-8") print("\nUnicode bytes:", unicode_bytes) print("\nUnicode chars:", unicode_char_bytes) print("\nUnicode values:", unicode_values)
Unicode bytes: tf.Tensor(b'\xe3\x82\xa2\xe3\x83\x92\xe3\x83\xab \xf0\x9f\xa6\x86', shape=(), dtype=string) Unicode chars: tf.Tensor([b'\xe3\x82\xa2' b'\xe3\x83\x92' b'\xe3\x83\xab' b' ' b'\xf0\x9f\xa6\x86'], shape=(5,), dtype=string) Unicode values: tf.Tensor([ 12450 12498 12523 32 129414], shape=(5,), dtype=int32)
The
tf.string dtype is used for all raw bytes data in TensorFlow. The
tf.io module contains functions for converting data to and from bytes, including decoding images and parsing csv.
Sparse tensors
Sometimes, your data is sparse, like a very wide embedding space. TensorFlow supports
tf.sparse.SparseTensor and related operations to store sparse data efficiently.
# Sparse tensors store values by index in a memory-efficient manner sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]) print(sparse_tensor, "\n") # You can convert sparse tensors to dense print(tf.sparse.to_dense(sparse_tensor))
SparseTensor(indices=tf.Tensor( [[0 0] [1 2]], shape=(2, 2), dtype=int64), values=tf.Tensor([1 2], shape=(2,), dtype=int32), dense_shape=tf.Tensor([3 4], shape=(2,), dtype=int64)) tf.Tensor( [[1 0 0 0] [0 0 2 0] [0 0 0 0]], shape=(3, 4), dtype=int32) | https://www.tensorflow.org/guide/tensor?hl=ar | CC-MAIN-2022-21 | refinedweb | 3,209 | 67.76 |
Details
Description
It might be useful to have command-line tools that can read & write arbitrary XML data from & to Avro data files.
Issue Links
Activity
- All
- Work Log
- History
- Activity
- Transitions
Hi,
I created the following proposal for this project:
Thanks,
Mike
This would be a great addition to Avro.
A few quick comments on the proposal:
- you might use
AVRO-1402to support XML Schema decimal types
- when mapping from an XML schema to an Avro schema we might add attributes to the Avro schema indicating the XML schema. E.g., XML's unsignedShort type might map to an Avro schema like
{"type":"int", "xml-schema":"unsignedInt"}.
Then conversion back to an XML schema might be done losslessly.
Thanks for the insight! I have modified the proposal accordingly. If we have the URL to the XML Schema, we can encode that in the Avro schema. If we don't, your recommendation makes a lot of sense. It is a bit more complicated for complex XML types, as all, choice, and sequence groups may contain more groups internally.
I propose to store group metadata as JSON objects, each of which with a "type" field containing the child type: “all,” “choice,” “sequence,” or “element.” Other fields define the minimum and maximum number of occurrences, and a value field. For groups, the “value” field is an array of the members of that group. For elements, the “value” field is the element’s fully-qualified XML name. Here is an example:
{ "type": "sequence", "minOccurs": 0, "maxOccurs": "unbounded", "value": [ { "type": "element", "minOccurs": 1, "maxOccurs": 1, "value": { "namespace": "", "localPart": "complexType" } } ] }
This isn't a perfect solution - attributes, elements, groups, and types can be abstracted to separate sections of an XML Schema for reusability across the document. In addition, multiple schemas can be referenced when describing an XML document. I think the only true way to support lossless Avro Schema -> XML Schema conversion would be to encode the entire XML Schema in JSON in the Avro schema. That said, the updated proposal will allow us to create an XML Schema that validates the same documents that the original schema would, so I think it is a reasonable compromise.
Uploading AVRO-457.patch.
This change is quite large, so I thought I'd stop here and submit, even if it doesn't completely follow the spec I proposed five weeks ago.
Notable Differences:
- I did not add support to automatically generate an XML Schema from an Avro Schema. XmlDatumWriter encodes the XML Schema locations, and XmlDatumReader uses that to reconstruct the XML document.
- Avro maps are automatically created if an element has exactly one non-optional ID attribute. (I did not make it an optional feature.)
- Enums are automatically created if all of the enumeration values are valid Avro enum symbols. (This was not in the spec.)
- I unfortunately did not have much success with JRegex for evaluating XML regular expressions. As a result, regular-expression validation is not part of this release.
- I added XMLUnit to the dependencies for validating generated XML documents.
I saw there is a "Submit Patch" button ... is that what I should use instead? I tried it but I did not see a way to upload the patch file.
I'm backporting the XML-Schema-specific code to
XMLSCHEMA-36.
Regards,
Mike
Apache XML Schema v2.2.0 is released, so I removed all of the XML-Schema-specific code that went with it. This patch also includes support for
AVRO-739, and does a better job of creating valid Java package names. Likewise the Avro Schema Compiler can now generate Java code for generated schemas.
Feedback is appreciated!
Thanks,
Mike
I am unfamiliar with XML-Schema and probably not qualified to review this. Can someone who uses XML-Schema please have a look at this? It would be best to know that others think this would be useful before adding it to Avro. It might also be useful to add command-line tools that convert an Avro data file to an XML file and vice-versa, so that folks can easily see by example how it works.
Sorry for the long delay. I added two tools: FromXmlTool.java and ToXmlTool.java. Unfortunately I cannot seem to test this locally; I cannot run the command
mvn exec:java -Dexec:mainClass="org.apache.avro.tool.Main"
without getting the error
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.4.0:java (default-cli) on project avro-toplevel: The parameters 'mainClass' for goal org.codehaus.mojo:exec-maven-plugin:1.4.0:java are missing or invalid
Any help is appreciated.
Michael Pigott, I think you want to use . instead of : for the main class argument: -Dexec.mainClass="..."
Thanks Ryan Blue! That was one part, and I also needed to run mvn install in order to make all of the dependencies available.
I attached the corrected AVRO-457.patch
.
Please allow me to comment on this after having used Michael's project (from) on the official (and fairly complex) ebucore.xsd schema version 1.6 (see and)
To me, from a developer point of view, the need for the tool Michael has written is very high; nearly all official ontologies release their versions using XML schema (XSD) files. Just like the XJC (and by extent the JAXB) project, it's important to have de-facto standard projects to convert them to working memory models. Having a reliable XSD->AVSC converter would be awesome.
I've played around with Michael's code and got it to successfully generate an avro schema from the ebucore.xsd file. However, I had to make a lot of modifications to the original file because not all standards are implemented in xml-to-avro (for one, elements with default, empty types crash the converter).
After having tried four solutions:
1)
2)
3)
4)
I conclude that solution 1 is the best for now, because it works out of the box without modifications and generates a more type-safe schema (than Michael's converter), although for complex schemas like ebucore, double types are introduced (eg; Double1, Double2, ...).
All this to make a point: I, together with a lot of other developers, truly see the need for an official XSD->AVSC converter, so please consider it. I can help with testing, but I'm no XSD expert.
You might want to contact to folks at
bram
Bram,
Thank you for taking a look at this! I'm sorry you've had trouble with my implementation - I was unaware of the EBU family of XML documents when I was writing the code. My testing primarily focused on XBRL documents[1], which I'm sure is why you've had trouble. Unfortunately, I'm sure you noticed that this project has not gained a lot of traction since I proposed to work on it a year and a half ago, primarily due to a lack of XML / XML Schema experience among the project's contributors.
That said, I'm happy to look at the schema you provided and see what I can do to correct the bugs you found in the coming weeks. (Unfortunately I have not looked at this code in a long time, and I do not even program in Java regularly anymore.) If you have more information you can provide to help me get started, feel free to file an issue on GitHub[2]!
Regards,
Mike
[1]
[2]
Bram Biesbrouck, thanks for posting your summary of the current state of this. I agree with Michael's assessment that it isn't a lack of interest in having something like this, it is that we're not XSD experts either. That said, if we can get the right people together to collaborate around this, like Michael Pigott and the Stealth.ly team that put together option #1, then I can take care of the commit part. I don't think we all have to be experts if there's a portion of the community that is interested in looking at this, updating Michael's latest work, and helping us review to get it in.
The result of a first try to convert ebucore.xsd to a json schema
Hi Ryan Blue and Michael Pigott,
I think I might have found a better approach to this...
To parse XSD schemas, 99% of Java users use XJC to convert an XSD to POJOs. The results of this tool are very good, since it's a mature tool.
Because it makes sense to reuse a common POJO codebase to (de)serialize to JSON/XML/AVRO, this might be a better start to investigate a robust XSD->AVRO parser. Also because raw XSD parsing/understanding is quite error prone.
Fortunately, a lot of work has been done already. Take a look at this project.
It generates a JSON Schema from a POJO class (and recursively all it's members). The result is a JSON schema.
Now the best part: the same developers also wrote this project that converts a JSON schema to an AVRO schema. However, the json->avro converter is not production ready yet. But it has a very nice codebase to start with. This class is a good entry point to its inner workings.
I'm currently trying to find some time to work on it, but it's slow. I successfully managed to convert the EBUCore XSD schema to a JSON schema though. The next step (JSON->AVRO) is more difficult I'm afraid. Hence: do the AVRO developers have any experience with converting JSON schemas into (the more narrow) AVRO schema structure? Would be interesting to investigate in general because JSON validation is becoming more and more relevant these days.
b.
Bram Biesbrouck, it may be easier than going through a JSON schema. Avro's reflect support will analyze Java classes to produce Avro schemas and can also serialize instances of those classes. You may not need to use JSON at all, just try ReflectData.get().getSchema(MyXMLObject.class).
I wish it was that easy. When I run this code
Schema avroSchema = ReflectData.get().getSchema(EbuCoreMainType.class);
I get this exception:
org.apache.avro.AvroTypeException: Unknown type: T
because JAXBElement uses generic types and one of the members in the tree is:
protected List<JAXBElement<Object>> audioContentIDRef;
and generics don't seem to work well.
Any suggestions to get around this?
b.
Is that generic allowed in your original XSD or is that introduced when you convert to JAXB objects? If it is the latter, then I think we would have to get around that with a direct conversion to avoid losing what the type contained in that list is.
Hi Ryan,
(sorry for the delay)
The relevant pieces in the XSD schema are the following:
<complexType name="audioProgrammeType"> <sequence> <element name="audioContentIDRef" type="IDREF" minOccurs="0" maxOccurs="unbounded"> <annotation> <documentation>A list of reference to audioContents, each defining one component of an audioProgramme (e.g. background music), its association with an audioPack (e.g. a 2.0 audioPack of audioChannels for stereo reproduction), its association with a audioStream, and its set of loudness parameters. Notice that loudness values of a program are dependent of the associated audioPack mixReproductionFormat. </documentation> </annotation> </element> <element name="loudnessMetadata" type="ebucore:loudnessMetadataType" minOccurs="0"> <annotation> <documentation>A set of loudness parameters proper to the audio content of the whole programme. </documentation> </annotation> </element> ... </sequence> </complexType>
I'm no JAXB (nor XSD) expert, but I assume JAXB doesn't really know what to do with the IDREF type, and just defaults to using an Object type. What do you think?
This is similar to
AVRO-456. | https://issues.apache.org/jira/browse/AVRO-457 | CC-MAIN-2016-40 | refinedweb | 1,947 | 63.9 |
Isomap for Dimensionality Reduction in Python
Isomap (Isometric Feature Mapping), unlike Principle Component Analysis, is a non-linear feature reduction method.
We will explore the data set used by the original authors of isomap to demonstrate the use of isomap to reduce feature dimensions.
The image below, taken from the original paper by Tenenbaum et al., demonstrates how Isomap operates.
In A we see that two points that are close together in Euclidean Space in this “Swiss roll” dataset may not reflect the intrinsic similarity between these two points.
In B a graph is constructed with each point as n nearest neighbours (K=7 here). The shortest geodesic distance is then calculated by a path finding algorithm such as Djikstra’s Shortest Path.
In C, this is the 2D graph is recovered from applying classical MDS (Multidimensional scaling) to the matrix of graph distances. A straight line has been applied to represent a simpler and cleaner approximation to the true geodesic path shown in A.
Isomap should be used when there is a non-linear mapping between your higher-dimensional data and your lower-dimensional manifold (e.g. data on a sphere).
Isomap is better than linear methods when dealing with almost all types of real image and motion tracking and we will now look at the example that was used in the Tenenbaum et al. of images of faces in different poses and light conditions.
The images are 4096 dimensions (64 pixel x 64 pixel).
We will reduce this down to just 2 dimensions
Scikit-learn isomap
We start by loading our face data.
import math import pandas as pd import scipy.io pd.options.display.max_columns = 7 mat = scipy.io.loadmat('data/face_data.mat') df = pd.DataFrame(mat['images']).T num_images, num_pixels = df.shape pixels_per_dimension = int(math.sqrt(num_pixels)) # Rotate the pictures for idx in df.index: df.loc[idx] = df.loc[idx].values.reshape(pixels_per_dimension, pixels_per_dimension).T.reshape(-1) # Show first 5 rows print(df.head())
0 1 2 ... 4093 4094 4095 0 0.016176 0.000000 0.000000 ... 0.0 0.0 0.0 1 0.016176 0.000000 0.000000 ... 0.0 0.0 0.0 2 0.016176 0.000000 0.000000 ... 0.0 0.0 0.0 3 0.016176 0.110754 0.384988 ... 0.0 0.0 0.0 4 0.016176 0.000000 0.000000 ... 0.0 0.0 0.0 [5 rows x 4096 columns]
Now we fit our isomap to our data. Remember that if your data is not on the same scale, it may require scaling before this step.
We will fit a manifold using 6 nearest neighbours and our aim is to reduce down to 2 components.
from sklearn import manifold iso = manifold.Isomap(n_neighbors=6, n_components=2) iso.fit(df) manifold_2Da = iso.transform(df) manifold_2D = pd.DataFrame(manifold_2Da, columns=['Component 1', 'Component 2']) # Left with 2 dimensions manifold_2D.head()
import matplotlib.pyplot as plt import numpy as np %matplotlib inline fig = plt.figure() fig.set_size_inches(10, 10) ax = fig.add_subplot(111) ax.set_title('2D Components from Isomap of Facial Images') ax.set_xlabel('Component: 1') ax.set_ylabel('Component: 2') # Show 40 of the images ont the plot x_size = (max(manifold_2D['Component 1']) - min(manifold_2D['Component 1'])) * 0.08 y_size = (max(manifold_2D['Component 2']) - min(manifold_2D['Component 2'])) * 0.08 for i in range(40): img_num = np.random.randint(0, num_images) x0 = manifold_2D.loc[img_num, 'Component 1'] - (x_size / 2.) y0 = manifold_2D.loc[img_num, 'Component 2'] - (y_size / 2.) x1 = manifold_2D.loc[img_num, 'Component 1'] + (x_size / 2.) y1 = manifold_2D.loc[img_num, 'Component 2'] + (y_size / 2.) img = df.iloc[img_num,:].values.reshape(pixels_per_dimension, pixels_per_dimension) ax.imshow(img, aspect='auto', cmap=plt.cm.gray, interpolation='nearest', zorder=100000, extent=(x0, x1, y0, y1)) # Show 2D components plot ax.scatter(manifold_2D['Component 1'], manifold_2D['Component 2'], marker='.',alpha=0.7) ax.set_ylabel('Up-Down Pose') ax.set_xlabel('Right-Left Pose') plt.show()
We have reduced the dimensions from 4096 dimensions (pixels) to just 2 dimensions.
These 2 dimensions represent the different points of view of the face, from left to right and from bottom to top. | https://benalexkeen.com/isomap-for-dimensionality-reduction-in-python/ | CC-MAIN-2021-21 | refinedweb | 683 | 53.37 |
Polymorphic Methods
Kaz Yosh
Ranch Hand
Joined: May 22, 2003
Posts: 63
posted
May 22, 2003 10:26:00
0
Could anyone explain following code to me?
public class poly{ public static void main(String [] kaz){ A ref1 = new C(); B ref2 = (B)ref1; System.out.println(ref2.g()); } } class A{ private int f(){ System.out.println("inside f() at A"); return 0;} public int g(){ System.out.println("inside g() at A"); return 3;} } class B extends A{ private int f(){ System.out.println("inside f() at B"); return 1;} public int g(){ System.out.println("inside g() at B"); return f();} } class C extends B{ public int f(){ System.out.println("inside f() at C"); return 2;} }
This is from "A programmer's guide to
Java
certification" Chapter6 review question 6.25.
The point I dont understand is that since class C extends B, C should have all the methods B has. but method f() in B is set to private that means no way C can override f() in B.
anyway, C surely has method g() inherited from B.
and g() returns method f(), which I think is f() in class C because the method g() here is the one in the C inherited from B.
Please help me out of this confusion.
"If anything can go wrong, it will"
Corey McGlone
Ranch Hand
Joined: Dec 20, 2001
Posts: 3271
posted
May 22, 2003 11:04:00
0
The key here is that private methods are not inherited. Therefore, the method f() in C
does not
override the method f() in B - it only shadows it. Therefore, as we're treating the object as type B (because of the cast), we invoke the method f() of class B. Had this method been truly overridden, we would have used dynamic binding and invoked the method in class C. But, as private methods are not inherited, there is no overriding occurring and, hence, no dynamic binding.
I hope that helps,
Corey
SCJP Tipline, etc.
Kaz Yosh
Ranch Hand
Joined: May 22, 2003
Posts: 63
posted
May 22, 2003 11:10:00
0
If the method g() is inherited to C from B,
does dynamic method look up try to use the g() in C?
then inside g() method, f() is called. How the dynamic method lookup determines which version of f() to use?
Giselle Dazzi
Ranch Hand
Joined: Apr 20, 2003
Posts: 168
posted
May 22, 2003 11:17:00
0
Now I got confused.
My doubt is about method g() and not method f().
Class C inherits a method g() so when we call ref2.g() arent we going to run the g() in Class C ? And therefore run the f() in class C also ?
Corey, I compiled it and it runs your way, Im just trying to understand...
Giselle Dazzi<br />SCJP 1.4
Corey McGlone
Ranch Hand
Joined: Dec 20, 2001
Posts: 3271
posted
May 22, 2003 11:30:00
0
Notice that class C does not override the method g(). Therefore, it inherits the method from class B. When you invoke method G() on an object of type C, you're really invoking code defined within class B - this is the essence of inheritance.
However, f()
is not
inherited by class C. Class C defines its own f() method, but this does not cause dynamic binding as no method has been overridden (a method must first be inherited to be overridden). Therefore, when we invoke method f() on the object, it is invoked on class B because that is the compile-time type of the variable ref2.
§8.4.6 Inheritance, Overriding, and Hiding
for more info.
Just for kicks, try changing the method f() from private to public in class B and see how the behavior changes.
I hope that helps,
Corey
Barkat Mardhani
Ranch Hand
Joined: Aug 05, 2002
Posts: 787
posted
May 22, 2003 13:11:00
0
I understand it this way:
Whenever a non-static method is inherited, any reference to static or private methods/fields in it will be looked up in the class where this method is defined (i.e super class). In this case, method is defined in class B and inherited in class C. Therefore any calls to private method (f() in this case) will be looked up in super class (class B).
In other words all non-static inherited methods bring with them the implementations of all static and private methods and fields.
Hope this make sense.
Barkat
Jose Botella
Ranch Hand
Joined: Jul 03, 2001
Posts: 2120
posted
May 22, 2003 16:08:00
0
hey, method inheritance does not mean that the code appears in the subclass as if it had been written there as well. It means that the code can be called, from a subclass from where it was declared.
However field inheritance does imply having in the subclass a distinct variable from those in the base class.
SCJP2. Please Indent your code using UBB Code
I agree. Here's the link:
subject: Polymorphic Methods
Similar Threads
Overridding private methods
Polimorphism x Casting
Inheritance doubt
Polymorphism
Dynamic Method Lookup
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/241927/java-programmer-SCJP/certification/Polymorphic-Methods | CC-MAIN-2015-48 | refinedweb | 875 | 71.04 |
Creating interactive dashboards ¶
import numpy as np import pandas as pd import holoviews as hv from bokeh.sampledata import stocks from holoviews.operation.timeseries import rolling, rolling_outlier_std from holoviews.streams import Stream hv.notebook_extension('bokeh')
In the
Data Processing Pipelines section
we discovered how to declare a
DynamicMap
and control multiple processing steps with the use of custom streams as described in the
Responding to Events
guide. Here we will use the same example exploring a dataset of stock timeseries and build a small dashboard using the
paramNB
library, which allows us to declare easily declare custom widgets and link them to our streams. We will begin by once again declaring our function that loads the stock data:
def load_symbol(symbol, variable='adj_close', **kwargs): df = pd.DataFrame(getattr(stocks, symbol)) df['date'] = df.date.astype('datetime64[ns]') return hv.Curve(df, kdims=[('date', 'Date')], vdims=[variable]) dmap = hv.DynamicMap(load_symbol, kdims=['Symbol']).redim.values(Symbol=stocks.stocks) dmap
Building dashboards ¶
Controlling stream events manually from the Python prompt can be a bit cumbersome. However since you can now trigger events from Python we can easily bind any Python based widget framework to the stream. HoloViews itself is based on param and param has various UI toolkits that accompany it and allow you to quickly generate a set of widgets. Here we will use
paramnb
, which is based on
ipywidgets
to control our stream values.
To do so we will declare a
StockExplorer
class which inherits from
Stream
and defines two parameters, the
rolling_window
as an integer and the
symbol
as an ObjectSelector. Additionally we define a view method, which defines the DynamicMap and applies the two operations we have already played with, returning an overlay of the smoothed
Curve
and outlier
Scatter
.
import param import paramnb class StockExplorer(hv.streams.Stream): rolling_window = param.Integer(default=10, bounds=(1, 365)) symbol = param.ObjectSelector(default='AAPL', objects=stocks.stocks) def view(self): stocks = hv.DynamicMap(load_symbol, kdims=[], streams=[self]) # Apply rolling mean smoothed = rolling(stocks, streams=[self]) # Find outliers outliers = rolling_outlier_std(stocks, streams=[self]) return smoothed * outliers
Now that we have defined this
Parameterized
class we can instantiate it and pass it to the paramnb.Widgets function, which will display the widgets. Additionally we call the
StockExplorer.view
method to display the DynamicMap.
%opts Curve [width=600] {+framewise} Scatter (color='red' marker='triangle') explorer = StockExplorer() paramnb.Widgets(explorer, continuous_update=True, callback=explorer.event, on_init=True) explorer.view()
In HoloViews you have to declare the type of data that you want to display using Elements. Changing the types returned by a DynamicMap is generally not supported. Therefore
paramNB
provides the ability to completely redraw the output by replacing the object that is being displayed. Here we will extend the class we created above to draw directly to an
view
parameter, which we can assign to.
view
parameters like
HTML
support a
renderer
argument, which converts whatever you assign to the object to the correct representation. Therefore we will define a quick function that returns the HTML representation of our HoloViews object and also computes the size:
def render(obj): renderer = hv.renderer('bokeh') plot = renderer.get_plot(obj) size = renderer.get_size(plot) return renderer.figure_data(plot), size
Now we can extend the
StockExplorer
class from above with a custom
event
method. By default
event
is called on
Stream
instances to notify any subscribers. We can intercept this call when we want to redraw, here we will instead assign to our
output
parameter whenever the
variable
changes, which will trigger a full redraw:
class AdvancedStockExplorer(StockExplorer): output = paramnb.view.HTML(renderer=render) variable = param.ObjectSelector(default='adj_close', objects=[c for c in stocks.AAPL.keys() if c!= 'date']) def event(self, **kwargs): if self.output is None or 'variable' in kwargs: self.output = self.view() else: super(AdvancedStockExplorer, self).event(**kwargs)
explorer = AdvancedStockExplorer() paramnb.Widgets(explorer, continuous_update=True, callback=explorer.event, on_init=True)
As you can see using streams we have bound the widgets to the streams letting us easily control the stream values and making it trivial to define complex dashboards.
paramNB
is only one widget framework we could use, we could also use
paramBokeh
to use bokeh widgets and deploy the dashboard on bokeh server or manually linked
ipywidgets
to our streams. For more information on how to deploy bokeh apps from HoloViews and build dashboards see the
Deploying Bokeh Apps
. | http://holoviews.org/user_guide/Dashboards.html | CC-MAIN-2017-30 | refinedweb | 728 | 50.12 |
Writing assertions for tests seems simple: all we need do is compare results with expectations. This is usually done using the assertion methods – e.g. assertTrue() or assertEquals() – provided by testing frameworks. However, in the case of more complicated test scenarios, it can be rather awkward to verify the outcome of a test using such basic assertions.
The main issue is that by using them we obscure our tests with low-level details. This is undesirable. In my view we should rather strive for our tests to speak in the language of business.
In this article I will show how we could use so-called "matcher libraries" and implement our own custom assertions to make our tests more readable and maintainable.
For the purposes of demonstration we will consider the following task: let us imagine that we need to develop a class for the reporting module of our application that, when given two dates ("begin" and "end"), provides all one-hour intervals between those dates. The intervals are then used to fetch the required data from the database and present it to the end user in the form of beautiful charts.
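The article does not show the production code itself. For readers who want to run the examples, a minimal sketch of what such classes might look like is given below; only the names Range, HourRange and getRanges() come from the text, while the constructor, fields and the exact method signature are assumptions of mine:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Hypothetical stand-in for the tested types; implementation details are assumed.
class Range {
    private final long start;
    private final long end;

    Range(long start, long end) {
        this.start = start;
        this.end = end;
    }

    public long getStart() { return start; }
    public long getEnd() { return end; }
}

class HourRange {
    private static final long HOUR_MILLIS = 60L * 60L * 1000L;

    // Returns consecutive one-hour ranges covering [begin, end).
    public List<Range> getRanges(Date begin, Date end) {
        List<Range> ranges = new ArrayList<>();
        for (long start = begin.getTime(); start + HOUR_MILLIS <= end.getTime(); start += HOUR_MILLIS) {
            ranges.add(new Range(start, start + HOUR_MILLIS));
        }
        return ranges;
    }
}
```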
Standard Approach
Let us begin with a "standard" way of writing assertions. We're using JUnit for this example, though we could equally use, say, TestNG. We will use assertion methods like assertTrue(), assertNotNull() or assertSame().
Below, one of several tests belonging to the HourRangeTest class is presented. It is quite simple. First it asks the getRanges() method to return all one-hour ranges between two dates on the same day. Then it verifies whether the returned ranges are exactly as they should be.
private final static SimpleDateFormat SDF = new SimpleDateFormat("yyyy-MM-dd HH:mm");

@Test
public void shouldReturnHourlyRanges() throws ParseException {
    // given
    HourRange hourRange = new HourRange();

    // when
    List<Range> ranges = hourRange.getRanges(
            SDF.parse("2012-07-23 12:00"), SDF.parse("2012-07-23 15:00"));

    // then
    assertEquals(3, ranges.size());

    assertEquals(SDF.parse("2012-07-23 12:00").getTime(), ranges.get(0).getStart());
    assertEquals(SDF.parse("2012-07-23 13:00").getTime(), ranges.get(0).getEnd());

    assertEquals(SDF.parse("2012-07-23 13:00").getTime(), ranges.get(1).getStart());
    assertEquals(SDF.parse("2012-07-23 14:00").getTime(), ranges.get(1).getEnd());

    assertEquals(SDF.parse("2012-07-23 14:00").getTime(), ranges.get(2).getStart());
    assertEquals(SDF.parse("2012-07-23 15:00").getTime(), ranges.get(2).getEnd());
}
This is definitely a valid test; however, it has a serious drawback. There are a lot of repeated fragments in the //then part. Obviously they were created using copy & paste, which – so experience has taught me – inevitably leads to errors. Moreover, if we were to write more tests like this (and we surely should write more tests to verify the HourRange class!), the same asserting statements would be repeated over and over again in each of them.
The readability of the current test is weakened by the excessive number of assertions, but also by the complicated nature of each assertion. There is a lot of low-level noise, which does not help to grasp the core scenario of the tests. As we all know, code is read much more often than it is written (I think this also holds for test code), so readability is something we should definitely seek to improve.
Before we rewrite the test, I also want to highlight another weakness, this time related to the error message we get when something goes wrong. For example, if one of the ranges returned by the getRanges() method were to have a different time than expected, all we would learn would be the following:
org.junit.ComparisonFailure:
Expected :1343044800000
Actual   :1343041200000
This message is not very clear and could definitely be improved.
Private Methods
So what, exactly, could we do about this? Well, the most obvious thing would be to extract the assertion into a private method:
private void assertThatRangeExists(List<Range> ranges, int rangeNb, String start, String stop)
        throws ParseException {
    assertEquals(SDF.parse(start).getTime(), ranges.get(rangeNb).getStart());
    assertEquals(SDF.parse(stop).getTime(), ranges.get(rangeNb).getEnd());
}

@Test
public void shouldReturnHourlyRanges() throws ParseException {
    // given
    HourRange hourRange = new HourRange();

    // when
    List<Range> ranges = hourRange.getRanges(
            SDF.parse("2012-07-23 12:00"), SDF.parse("2012-07-23 15:00"));

    // then
    assertEquals(3, ranges.size());
    assertThatRangeExists(ranges, 0, "2012-07-23 12:00", "2012-07-23 13:00");
    assertThatRangeExists(ranges, 1, "2012-07-23 13:00", "2012-07-23 14:00");
    assertThatRangeExists(ranges, 2, "2012-07-23 14:00", "2012-07-23 15:00");
}
Is it better now? I would say so. The amount of repetitive code has been reduced and the readability has been improved. This is definitely good.
Another advantage of this approach is that we are now in a much better position to improve the error message that gets printed in the event of failed verification. The asserting code is extracted to one method, so we could enhance our assertions with more readable error messages with ease.
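For instance (this is a sketch of one possible approach, not code from the article), the helper could compare formatted dates instead of raw millisecond values, so that a failure prints human-readable timestamps. A self-contained version, with a minimal Range stand-in so it compiles on its own:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;

// Minimal stand-in for the article's Range type, so the sketch compiles on its own.
class Range {
    private final long start;
    private final long end;

    Range(long start, long end) {
        this.start = start;
        this.end = end;
    }

    public long getStart() { return start; }
    public long getEnd() { return end; }
}

class RangeChecks {
    private static final SimpleDateFormat SDF = new SimpleDateFormat("yyyy-MM-dd HH:mm");

    // Comparing formatted dates rather than raw longs turns a failure into
    // "expected:<2012-07-23 13:00> but was:<2012-07-23 12:00>".
    static void assertThatRangeExists(List<Range> ranges, int rangeNb, String start, String stop) {
        Range range = ranges.get(rangeNb);
        assertTextEquals(start, SDF.format(new Date(range.getStart())));
        assertTextEquals(stop, SDF.format(new Date(range.getEnd())));
    }

    private static void assertTextEquals(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(String.format("expected:<%s> but was:<%s>", expected, actual));
        }
    }
}
```

The same trick works with JUnit's assertEquals on the formatted strings; the point is simply that the failure message now speaks in dates, not in epoch milliseconds.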
The reuse of such assertion methods could be facilitated by putting them into some base class, which our test classes would need to extend.
Still, I think we might do even better than this: using private methods has some drawbacks, which become more evident as the test code grows and these private methods then come to be used within many test methods:
- it is hard to come up with names of assertion methods that clearly state what they verify,
- as the requirements grow, such methods tend to receive additional parameters required for more sophisticated checks (assertThatRangeExists() already takes four parameters, which is too much!),
- sometimes, in order to make them reusable across many tests, complicated logic gets introduced into such methods (usually in the form of boolean flags which make them verify - or ignore - some special cases).
All of this means that in the long run we will encounter some issues with the readability and maintainability of tests written with the help of private assertion methods. Let us look for another solution which would be free of these drawbacks.
Matcher Libraries
Before we move on, let us learn about some new tools. As mentioned before, the assertions provided by JUnit or TestNG are not flexible enough. In the Java world there are at least two open-source libraries which fulfil our requirements: AssertJ (a fork of the FEST Fluent Assertions project) and Hamcrest. I prefer the first one, but it is a matter of taste. Both look very powerful, and both allow one to achieve similar effects. The main reason I prefer AssertJ over Hamcrest is that AssertJ's API - based on fluent interfaces - is perfectly supported by IDEs.
Integration of AssertJ with JUnit or TestNG is straightforward. All you have to do is add the required imports, stop using the default assertions provided by your testing framework, and start using those provided by AssertJ.
AssertJ provides many useful assertions out-of-the-box. They all share the same "pattern": they begin with the assertThat() method, which is a static method of the Assertions class. This method takes the tested object as an argument, and "sets the stage" for further verification. Afterwards come the real assertion methods, each of them verifying various properties of the tested object. Let us take a look at a few examples:
assertThat(myDouble).isLessThanOrEqualTo(2.0d); assertThat(myListOfStrings).contains("a"); assertThat("some text") .isNotEmpty() .startsWith("some") .hasLength(9);
As can be seen here, AssertJ provides a much richer set of assertions than JUnit or TestNG. What is more, you can chain them together – as the last assertThat("some text") example shows. One very convenient thing is that your IDE will figure out the possible methods based on the type of object being tested, and will tip you off, suggesting only those which fit. So, for example, in the case of a double variable, after you have typed assertThat(myDouble). and have pressed CTRL + SPACE (or whatever shortcut your IDE provides), you will be presented with a list of methods like isEqualTo(expectedDouble), isNegative() or isGreaterThan(otherDouble) - all making sense for double value verification. Which is actually pretty cool.
Custom Assertions
Having a more powerful set of assertions provided by AssertJ or Hamcrest is nice, but this is not really what we wanted in the case of our HourRange class. Another feature of matcher libraries is that they allow you to write your own assertions. These custom assertions will behave exactly as the default assertions of AssertJ do – i.e. you will be able to chain them together. And this is exactly what we will do next to improve our test.
We will see a sample implementation of a custom assertion in a minute, but for now let's take a look at the final effect we are going to achieve. This time we will use the assertThat() method of (our own) RangeAssert class.
@Test public void shouldReturnHourlyRanges() throws ParseException { // given Date dateFrom = SDF.parse("2012-07-23 12:00"); Date dateTo = SDF.parse("2012-07-23 15:00"); // when List<Range> ranges = HourlyRange.getRanges(dateFrom, dateTo); // then RangeAssert.assertThat(ranges) .hasSize(3) .isSortedAscending() .hasRange("2012-07-23 12:00", "2012-07-23 13:00") .hasRange("2012-07-23 13:00", "2012-07-23 14:00") .hasRange("2012-07-23 14:00", "2012-07-23 15:00"); }
Some of the advantages of custom assertions can be seen even in such a tiny example as the one above. The first thing to notice about this test is that the //then part has definitely become smaller. It is also quite readable now.
Other advantages will manifest themselves when applied to a larger codebase. Were we to continue using our custom assertion, we would notice that:
- It is very easy to reuse them. We are not forced to use all assertions, but we can select only those which are important for a specific test case.
- The DSL belongs to us, which means that for specific test scenarios we could change it according to our liking (e.g. pass Date objects instead of Strings) with ease. What is more important is that such a change would not affect any other tests.
- High readability - there is no problem with finding the right name for a verification method, because the assertion consists of many small assertions, each of them focused on just one very small aspect of the verification.
Compared to private assertion methods, the only disadvantage of the custom assertion is that you have to put more work in to create them. Let us have a look at the code of our custom assertion to judge whether it really is such a difficult task.
To create a custom assertion we should extend the AbstractAssert class of AssertJ or one of its many subclasses. As shown below, our RangeAssert extends the ListAssert class of AssertJ. This makes sense, because we want our custom assertion to verify the content of a list of ranges (List<Range>).
Each custom assertion written with AssertJ contains code which is responsible for the creation of an assertion object and the injection of the tested object, so further methods can operate on it. As the listing shows, both the constructor and the static assertThat() method take List<Range> as a parameter.
public class RangeAssert extends ListAssert<Range> { protected RangeAssert(List<Range> ranges) { super(ranges); } public static RangeAssert assertThat(List<Range> ranges) { return new RangeAssert(ranges); }
Now let us see the rest of the RangeAssert class. The hasRange() and isSortedAscending() methods (shown in the next listing) are typical examples of custom assertion methods. They share the following properties:
- Both start with a call to the isNotNull() which verifies whether the tested object is not null. This guarantees that the verification won't fail with the NullPointerException message (this step is not necessary but recommended).
- They return "this" (which is an object of the custom assertion class – the RangeAssert class, in our case). This allows for methods to be chained together.
- The verification is performed using assertions provided by the AssertJ Assertions class (part of the AssertJ framework).
- Both methods use an "actual" object (provided by the ListAssert superclass), which keeps a list of Ranges (List<Range>) being verified.
private final static SimpleDateFormat SDF = new SimpleDateFormat("yyyy-MM-dd HH:mm"); public RangeAssert isSortedAscending() { isNotNull(); long start = 0; for (int i = 0; i < actual.size(); i++) { Assertions.assertThat(start) .isLessThan(actual.get(i).getStart()); start = actual.get(i).getStart(); } return this; } public RangeAssert hasRange(String from, String to) throws ParseException { isNotNull(); Long dateFrom = SDF.parse(from).getTime(); Long dateTo = SDF.parse(to).getTime(); boolean found = false; for (Range range : actual) { if (range.getStart() == dateFrom && range.getEnd() == dateTo) { found = true; } } Assertions .assertThat(found) .isTrue(); return this; } }
And what about the error message? AssertJ allows us to add it quite easily. In simple cases, like a comparison of values, it is often sufficient to use the as() method, like this:
Assertions .assertThat(actual.size()) .as("number of ranges") .isEqualTo(expectedSize);
As you can see, as() is just another method provided by the AssertJ framework. Now, when the test fails, it prints the following message so that we know immediately what is wrong:
org.junit.ComparisonFailure: [number of ranges] Expected :4 Actual :3
Sometimes we need more than just the name of the tested object to understand what has happened. Let us take the hasRange() method. It would be really nice if we could print all the ranges in the event of failure. This can be done using the overridingErrorMessage() method, like this:
public RangeAssert hasRange(String from, String to) throws ParseException { ... String errMsg = String.format("ranges\n%s\ndo not contain %s-%s", actual ,from, to); ... Assertions.assertThat(found) .overridingErrorMessage(errMsg) .isTrue(); ... }
Now in the event of failure we would get a very detailed error message. Its content would depend on the toString() method of the Range class. For example, it could look like this:
HourlyRange{Mon Jul 23 12:00:00 CEST 2012 to Mon Jul 23 13:00:00 CEST 2012}, HourlyRange{Mon Jul 23 13:00:00 CEST 2012 to Mon Jul 23 14:00:00 CEST 2012}, HourlyRange{Mon Jul 23 14:00:00 CEST 2012 to Mon Jul 23 15:00:00 CEST 2012}] do not contain 2012-07-23 16:00-2012-07-23 14:00
Conclusions
In this article we have discussed a number of ways of writing assertions. We started with the "traditional" way, based on the assertions provided by testing frameworks. This is good enough in many cases, but as we saw, it sometimes lacks the flexibility needed to express the intent of the test. Next we improved things a little by introducing private assertion methods, but this also proved not to be an ideal solution. In our final attempt we introduced custom assertions written with AssertJ, and achieved much more readable and maintainable test code.
If I were to offer you some advice regarding assertions, I would suggest the following: you will greatly improve your test code if you stop using assertions provided by testing frameworks (e.g. JUnit or TestNG) and switch to those provided by matcher libraries (e.g. AssertJ or Hamcrest). This will allow you to use a vast range of very readable assertions and eliminate the need to use complicated statements (e.g. looping over collections) in the //then parts of your tests.
Even if the cost of writing custom assertions is very small, there is no need to introduce them just because you can. Use them when the readability and/or maintainability of your test code are endangered. From my experience, I would encourage you to introduce custom assertions in the following cases:
- when you find it hard to express the intent of the test with the assertions provided by matcher libraries,
- in place of creating private assertion methods.
My experience tells me that with unit tests, you will rarely need custom assertions. However, I'm pretty sure you will find them irreplaceable in the case of integration and end-to-end (functional) tests. They allow our tests to speak in the language of the domain (rather than that of the implementation), and they also encapsulate the technical details, making our tests much simpler to update.
About the Author
Tomek Kaczanowski works as Java developer for CodeWise (Krakow, Poland). He is focused on code quality, testing and automation. Test infected TDD enthusiast, open-source proponent, agile worshipper. Strong inclination towards sharing his knowledge. Book author, blogger and conference speaker. Twitter: @tkaczanowski
Community comments
Good article
by Miguel Garcia /
Assert assertion generator
by Joel Costigliola /
Good article
by Miguel Garcia /
Your message is awaiting moderation. Thank you for participating in the discussion.
I've come across the same readability issues when developing both unit and integration tests, particularly the latter. I find your proposal very interesting. I will give Hamcrest a try since it is already included in the Mockito dependencies, which in turn are being used in our testing framework.
Thank you and keep up the good work!
Miguel Angel García
Assert assertion generator
by Joel Costigliola /
Your message is awaiting moderation. Thank you for participating in the discussion.
Hi,
Thanks for mentioning AssertJ !
If you want to have custom assertions quickly you can use the assertions generator. Given a package with domain classes, it will create corresponding Assert class with assertions for each property.
For example, if you have a Person class with a name property, it will generate a PersonAssert class with assertion method hasName.
From here you can enrich them with whatever you need to get a nice assertion DSL.
More details here :
joel-costigliola.github.io/assertj/assertj-asse...
Cheers
Joel Costigliola - AssertJ creator | https://www.infoq.com/articles/custom-assertions/?utm_source=articles_about_JUnit&utm_medium=link&utm_campaign=JUnit | CC-MAIN-2019-39 | refinedweb | 2,881 | 54.63 |
ZXF12 - Lovebot
Banjo Kazooie Intro 8bit Remix
MMZ2 Departure [Midi Remix]
"Camping Les Tourelles" is a camping site that really exists. It's located in the south east of France near a little town called "Embrun".
When I was a teenager every year my father used to take me on holiday for a week or two with the car and a tent. I really have a lot of nice memories of those trips. And so in the summer of 1993 we ended up in "Camping Les Tourelles" near Embrun.
In the camping-grounds there was a bar where there happened to be a piano, and in good shape too. In that period I couldn't resist playing the piano everywhere I came across one. That night in that bar there was also a retired British teacher who also played the piano and it didn't take long before the evening turned into a "duel" between two people playing the piano, myself and the Brit, in which every time we played, by turns, we tried to outdo the other. No need to tell that playing the piano wasn't the only thing we did. A lot of drinking entered into this as well. The Englishman big pints of beer, and I drank double Ricard. It became a faboulous night which I shall never forget. Two pianists dueling... The more drunk we became the more we did our best to outdo the opponent and the more the audience grew enthusiastic and sang and danced along. And all of this in the nice southern climate of France. Fantastic.
Of course I dedicate this song to my father in memory of the many pleasant trips we made. I don't think memories can get much better than those. If I had the money to afford myself a car I wouldn't hesitate and do some of those trips all over again.
The song I wrote should be a good representation of the atmosphere that was present in that bar that night.
Hope you enjoy this waltz... And if you do... don't hesitate to listen to my other songs :)
With friendly regards,
John Sercu
Reviews
Rated 5 / 5 stars May 26, 2010
wow, a song with a story
i think this song rly captures the mood of the scene u described
i can just imagine the ruckus of the night with ppl dancing all over the place to a pianist duel
i hope i get to experience that someday...
Rated 4.5 / 5 stars February 23, 2010
CAKES
The best songs have stories within them.
Rated 5 / 5 stars February 23, 2010
Great, and for some reason people poorly received
I'm not sure I can tell you anything about this composition you don't already know, but I'll say it's sure good, and there are an awful lot of fucktards rating you zero.
Yes I notice to alot rate me zero. But on the other hand I'm very pleased with the much nice reviews I got and happy that I got so far. Also the lots of "zero" scores I get don't come as a complete suprise. I do am aware of the fact that the kind of music I make isn't populair for all and that there's for instance a big difference between my music and the kind of music that is displayed on music- channels like MTV.
With friendly regards,
John
Rated 5 / 5 stars February 23, 2010
Sounds great
Great composition and it must have been a great party that night in that campingbar in France. Wish I had been there too. 10/10 5/5
Rated 0.5 / 5 stars February 23, 2010
ehh
def knock of to chim chim cher-ee
If you say that I gues it must haven been like 40 years ago you've heard chim chim cheree and you don't quite remember the song very well cause that song has totally other chords. That is if you even know what chords are ofcourse.
With friendly regards,
John | http://www.newgrounds.com/audio/listen/305487 | CC-MAIN-2014-10 | refinedweb | 680 | 77.27 |
Java performs I/O through Streams. A Stream is linked to a physical layer by java I/O system to make input and output operation in java. In general, a stream means continuous flow of data. Streams are clean way to deal with input/output without having every part of your code understand the physical.
Java encapsulates Stream under java.io package. Java defines two types of streams. They are,
Byte stream is defined by using two abstract class at the top of hierarchy, they are InputStream and OutputStream.
These two abstract classes have several concrete classes that handle various devices such as disk files, network connection etc.
These classes define several key methods. Two most important are
read(): reads byte of data.
write(): Writes byte of data.
Character stream is also defined by using two abstract class at the top of hierarchy, they are Reader and Writer.
These two abstract classes have several concrete classes that handle unicode character.
We use the object of BufferedReader class to take inputs from the keyboard.
read() method is used with BufferedReader object to read characters. As this function returns integer type value has we need to use typecasting to convert it into char type.
int read() throws IOException
Below is a simple example explaining character input.
class CharRead { public static void main( String args[]) { BufferedReader br = new Bufferedreader(new InputstreamReader(System.in)); char c = (char)br.read(); //Reading character } }
To read string we have to use
readLine() function with BufferedReader class's object.
String readLine() throws IOException
import java.io.*; class MyInput { public static void main(String[] args) { String text; InputStreamReader isr = new InputStreamReader(System.in); BufferedReader br = new BufferedReader(isr); text = br.readLine(); //Reading String System.out.println(text); } }
import java. Io *; class ReadTest { public static void main(String[] args) { try { File fl = new File("d:/myfile.txt"); BufferedReader br = new BufferedReader(new FileReader(fl)) ; String str; while ((str=br.readLine())!=null) { System.out.println(str); } br.close(); fl.close(); } catch (IOException e) { e.printStackTrace(); } } }
import java. Io *; class WriteTest { public static void main(String[] args) { try { File fl = new File("d:/myfile.txt"); String str="Write this string to my file"; FileWriter fw = new FileWriter(fl) ; fw.write(str); fw.close(); fl.close(); } catch (IOException e) { e.printStackTrace(); } } } | http://www.studytonight.com/java/java-io-stream.php | CC-MAIN-2017-09 | refinedweb | 378 | 60.92 |
@AlManja Could you advise people to create a ~/.bashrc alias that calls your program with sudo su
I used to do that with the allservers script in its early days to bypass the need for password input.
These days though due to the major upgrade of pacman sometime back, allservers now needs to request the input of the user password.
@handy Yes that works, but probably not that useful for what script is intended when someone register for this forum and writes: "Can't boot, help me please"
My plan with the script is to be able that new user with problem would be guided with these simple steps:
I have an idea. There will be two 'cli' versions, logs and logsp (p for "password required") this second one will not need to be used most of the time but when it will be, it will provide even more info, such as: show all un-commented lines from /etc/mkinitcpio.conf and some other that generally need password to be able to see it. And user will just need to be fore warned, run the script but you will also need to enter password.
But for every day use, if someone just want to quickly check different logs for errors from time to time, there will be version that will not require the password. problem solved
If you want to ensure the script is run as root then
snippet from pacman-mirrors.py
import os
if os.getuid() != 0:
print("{}: {}".format("Error", "Script must run as root"))
exit(1)
Thank you, will check this out!
@AlManja You could get your script & the ~/.bashrc alias included in the official releases. Could even have a menu option on the desktop that calls it.
I was using the following to check for root access early in the allservers script:
if [ "$EUID" != 0 ];
then
err "Must use 'sudo su' before you run this script."
exit 1
fi
Which is all but the same thing that fhdk quoted above.
@handy
Thank you, will check this out. While my script is in python, it probably does not matter if it is properly packaged before uploaded to AUR is my guess? I will look into this, thank you very much!
@AlManja there is another bash script of mine floating around with the terrible name of mhwd-kern it has a little python in it courtesy of Joshua. I don't think you'll have any problems mixing them up. It is a rare machine where the owner has removed python (I think).
@handyThank you, I will check it out. All this may be way above my head, I'm just starting with small baby steps, clueless about Linux, bash, just starting with python, but maybe I can figure it out. If not, I will leave it on to-do list for some later time
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed. | https://forum.manjaro.org/t/provide-details-when-seeking-assistance/19578?page=2 | CC-MAIN-2017-51 | refinedweb | 494 | 78.79 |
go to bug id or search bugs for
New/Additional Comment:
Description:
------------
Using an import alias and defining the constant in the same scope leads to inconsistent error message : Undefined constant 'C'
C is aliased into D first.
Then it is created.
Its creation is validated by 'echo C', which yields the right value.
'echo D' fails, mentioning that 'C' doesn't exists.
Extra note :
+ splitting the namespace A in two (one with const, one with use) raise the same issue.
+ use and const may be in any order (use first, const first), still raise the same issue.
Test script:
---------------
<?php
namespace A {
use const C as D;
const C = 2;
echo C;
echo D;
}
Expected result:
----------------
22
Actual result:
--------------
2PHP Fatal error: Uncaught Error: Undefined constant 'C' in test.php:8
Stack trace:
#0 {main}
thrown in test.php on line 8
Add a Patch
Add a Pull Request
Uses are always relative to the root namespace, you are looking for "use const A\C as D".
It took me a second to understand but the code is correct. @dams is talking specifically about the error message (which points to the "echo D" line) saying "C" when it would be better if it said "D".
Yes to both comments.
When C is defined in the global space (in the example), it works as expected.
The error message mentions C, which is confusing with D.
The error message is also correct: It shows the constant PHP is actually looking for, and the constant that needs to be defined for the lookup to succeed. The name "D" used locally to refer to it is ultimately irrelevant. | https://bugs.php.net/bug.php?id=77628&edit=1 | CC-MAIN-2021-17 | refinedweb | 275 | 71.34 |
Creating a Skin (Windows)
This guide outlines the tools for RhinoCommon developers to wrap their application around Rhino by creating custom Skin. Custom skins are supported on Windows only.
Overview
Rhino allows developers to customize most of Rhino’s interface so that the application appears to be their own. We call this a custom Skin. With a custom Skin, you can change the application icon, splash screen, the application name etc.
Creating a custom Skin for Rhino involves creating a custom skin assembly:
skin name.rhs This is a regular .NET Assembly (.DLL) that implements the skin’s icon, splash screen, application name, etc. In this guide, we will refer this to the Skin DLL. See a full list of methods and properties on the Skin class documentation.
Create the Skin DLL
To create the Skin DLL:
- Launch Visual Studio and add a new Class Library project to your solution.
- In the new Class Library project, add a reference to RhinoCommon.dll, which is found in Rhino’s System folder. Note: make sure, after adding the reference, to set the properties of the reference to Copy Local = False.
- Create a new class that inherits from
Rhino.Runtime.Skin.
- Add a post build event to the project to rename the assembly from .dll to .rhs:
(TargetPath)" "$(TargetDir)$(ProjectName).rhs" Erase "$(TargetPath)"
Skin Class
The skin class can override basic properties, like the
ApplicationName:
namespace MySkin { public class MyHippoSkin : Rhino.Runtime.Skin { protected override string ApplicationName { get { return "Hippopotamus"; } } } // You can override more methods and properties here }
Namespace MySkin Public Class MyHippoSkin Inherits Rhino.Runtime.Skin Protected Overrides ReadOnly Property ApplicationName() As String Get Return "Hippopotamus" End Get End Property End Class ' You can override more methods and properties here End Namespace
Installation
WARNING
Modifying the registry incorrectly can have negative consequences on your system’s stability and even damage the system.
To install your custom Skin, use REGEDIT.EXE to add a scheme key to your registry with a path to your Skin DLL. For example:
Rhino 64-bit
Testing
You can now test your custom Skin by creating shortcut to your Rhino executable with
/scheme="<scheme name from the previous step>" as command line argument. For example:
C:\Program Files\Rhinoceros 5 (64-bit)\System\Rhino.exe” /scheme=MySkin
Rhino 32-bit
Testing
You can now test your custom Skin by creating shortcut to your Rhino executable with
/scheme="<scheme name from the previous step>" as command line argument. For example:
C:\Program Files (x86)\Rhinoceros 5\System\Rhino4.exe” /scheme=MySkin | https://developer.rhino3d.com/5/guides/rhinocommon/creating-a-skin/ | CC-MAIN-2021-39 | refinedweb | 422 | 56.25 |
Background
I am working in a project in which we are using Visual Studio 2012 and C# as the programming language, We do have so many functions in our logical layers. In those functions we used both Named and Optional Arguments. So I thought of sharing this with you. Please be noted that this article is for the one who have not tried these Named and Optional arguments yet.
Before going to the coding part, we will learn what named and optional argument is? What are all the features of these two?
Named Arguments
Ref: MSDN Named arguments enable you to specify an argument for a particular parameter by associating the argument with the parameter’s name rather than with the parameter’s position in the parameter list.
As said by MSDN, A named argument ,
No we will describe all about a named argument with a simple program. I hope we all know how to find out the area of a rectangle. Yes you are right it is A= wl (Where w is the width and l is length and A is area.) So we will be using this formula in our function. Consider following is our function call.
FindArea(120, 56);
In this our first argument is length (ie 120) and second argument is width (ie 56). And we are calculating the area by that function. And following is the function definition.
private static double FindArea(int length, int width) { try { return (length* width); } catch (Exception) { throw new NotImplementedException(); } }
So in the first function call, we just passed the arguments by its position. Right?
double area; Console.WriteLine("Area with positioned argument is: "); area = FindArea(120, 56); Console.WriteLine(area); Console.Read();
If you run this, you will get an output as follows.
Now here it comes the features of a named arguments. Please see the preceding function call.
Console.WriteLine("Area with Named argument is: "); area = FindArea(length: 120, width: 56); Console.WriteLine(area); Console.Read();
Here we are giving the named arguments in the method call.
area = FindArea(length: 120, width: 56);
Now if you run this program, you will get the same result. Please see the below image.
As I said above, we can give the names vice versa in the method call if we are using the named arguments right? Please see the preceding method call.
Console.WriteLine("Area with Named argument vice versa is: "); area = FindArea(width: 120, length: 56); Console.WriteLine(area); Console.Read();
Please run the program and see the output as below.
You get the same result right? I hope you said yes.
One of the important use of a named argument is, when you use this in your program it improves the readability of your code. It simply says what your argument is meant to be, or what it is?.
Now you can give the positional arguments too. That means, a combination of both positional argument and named argument. So shall we try that?
Console.WriteLine("Area with Named argument Positional Argument : "); area = FindArea(120, width: 56); Console.WriteLine(area); Console.Read();
In the above example we passed 120 as the length and 56 as a named argument for the parameter width.
I hope you enjoyed using named arguments, there are some limitations too. We will discuss the limitation of a named arguments now.
Limitation of using a Named Argument
Named argument specification must appear after all fixed arguments have been specified.
If you use a named argument before a fixed argument you will get a compile time error as follows.
Named argument specification must appear after all fixed arguments have been specified
Optional Arguments
Ref: MSDN The definition of a method, constructor, indexer, or delegate can specify that its parameters are required or that they are optional. Any call must provide arguments for all required parameters, but can omit arguments for optional parameters.
As said by MSDN, a Optional Argument,
1. Constant expression.
2. Must be a value type such as enum or struct.
3. Must be an expression of the form default(valueType)
Now consider preceding is our function definition with optional arguments.
private static double FindAreaWithOptional(int length, int width=56) { try { return (length * width); } catch (Exception) { throw new NotImplementedException(); } }
Here we have set the value for width as optional and gave value as 56. right? Now we will try to call this function.
If you note, the IntelliSense itself shows you the optional argument as shown in the below image.
Now if you call the function as shown in the preceding code block. The function will be fired and give you the same output.
Console.WriteLine("Area with Optional Argument : "); area = FindAreaWithOptional(120); Console.WriteLine(area); Console.Read();
Note that we did not get any error while compiling and it will give you an output as follows.
Conclusion
I hope you will like this article. Please share me your valuable thoughts and comments. Your feedback is always welcomed.
Thanks in advance. Happy coding!
Kindest Regards
Sibeesh Venu | https://sibeeshpassion.com/named-and-optional-arguments-in-csharp/ | CC-MAIN-2019-13 | refinedweb | 831 | 66.33 |
OperatorSetData for "Lists"?
Am i understanding this correctly? Im trying to populate an Xpresso Link List nodes (node[1])
"Link List" field with the children of Tiles. Just a partial snippet of code here, i dont get any errors, the objects just dont populate.
R20
Ive tried many permutations, tried GeListNode as type, tried enumerating children, putting that in a list()...im guessing my "Children" arent an actual "List" type, and im blindly dancing around the right answer, just cant quite get it? Do i need to first make these children a "list" (and how would i go about that?), or do i need to "insert" into document at any point? Based on the documentation it sounds like OperatorSetData should take care of the insertion. Or maybe im assuming incorrectly that "simulates user dragging onto the node" means it can simulate dragging into attribute manager of the node? Just learning Python for C4D so just trying to wrap my head around it all. Thanks!
Edit: Have given up on OperatorSetData and trying to follow some examples set by "Inserting" objects to IN/Exclude, though it has its own class to call on so working out those discrepancies, I tried a doc.InsertObject on a single child from the children list and console told me to "add them separately", so also trying to decipher that message since c4d.BaseObjects apparently don't like to be iterated, says console. Maybe a makeshift Python node acting as a Link List node is the way to go?
obj = doc.SearchObject("Tiles") children = obj.GetChildren() node[1].OperatorSetData(c4d.GV_LINK_LIST, children ,c4d.GV_OP_DROP_IN_BODY)
Hi esan, thanks for reaching out us.
With regard to your issue, being the Link List parameter containing a InExcludeData, you need to use the InsertObject method to get objects added to the list or DeleteObject to get object out of the list.
def main(): # retrieve the objects in the scene cube = doc.SearchObject("Cube") sphere = doc.SearchObject("Sphere") if cube is None or sphere is None: return # assume the cube has an Xpresso Tag xpresso = cube.GetTag(c4d.Texpresso) if xpresso is None: return nm = xpresso.GetNodeMaster() if nm is None: return # assume the graphview contains a LinkList node linklist = nm.GetRoot().GetDown() if linklist is None or linklist.GetType() != 1001101: return # allocate the iInExcludeData inexdata = c4d.InExcludeData() # add the object inexdata.InsertObject(sphere, 1) linklist[c4d.GV_LINK_LIST] = inexdata c4d.EventAdd()
Best, Riccardo
@r_gigante said in OperatorSetData for "Lists"?:
inexdata.InsertObject(sphere, 1)
Thanks so much for your reply, I started getting way off track trying other methods lol! I got it working with:
inexdata = c4d.InExcludeData() for i, obj in enumerate(children): inexdata.InsertObject(children[i], 1) node[1][c4d.GV_LINK_LIST] = inexdata
@r_gigante Another quick question. is there still a bug on c4d.GV_OBJECT_OPERATOR_OBJECT_OUT? I dont get object ports on my nodes when trying to add this and I saw it was a long standing bug.
...just saw notes that the bug was fixed in R21. But I (and the people ill be distributing this script to) are all on R20, is there any patch or easy workaround to getting Object ports showing up? This is the last step needed to finish my script!
thanks
@esan said in OperatorSetData for "Lists"?:
Another quick question. is there still a bug on c4d.GV_OBJECT_OPERATOR_OBJECT_OUT?
Yes it's still there and being R20 maintenance ended it's likely to remain.
Best, R | https://plugincafe.maxon.net/topic/12187/operatorsetdata-for-lists | CC-MAIN-2020-50 | refinedweb | 569 | 59.4 |
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I'm trying to set the response status line, and failing.
def handler(req):
raise apache.SERVER_RETURN, (400, "You must supply a foo")
This is with the mod_python on Ubuntu feisty (3.2.10). From the code in
apache.py, it sets req.status with the text message although the doc
says that is an integer field.
However the http client always receives standard text (eg 'Bad Request'
for code 400).
The PythonDebug option is also confusing. If I turn it on, then
responses go back as 200 OK with the body of tracebacks etc, whereas
with it off then you get 500 Internal Server Error with no traceback
type information.
It seems to me that if you want to return anything other than success
that Apache goes to great lengths to obliterate whatever it is that you
were trying to send back and replaces it with its own canned
information, hence the behaviour of PythonDebug. Is there any way
around this?
Roger
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
iD8DBQFGPkEjmOOfHg372QQRAtuxAKDLsTPlKUZEbhGBS7DBAo0lgFd+8ACfRY5B
Kr0KeQjYPQEAc90GEY5dnbM=
=Rm72
-----END PGP SIGNATURE----- | https://modpython.org/pipermail/mod_python/2007-May/023567.html | CC-MAIN-2022-21 | refinedweb | 186 | 66.23 |
#include <Wire.h> uint8_t outbuf[] = {0xf2, 0x79, 0x2e, 0x7d, 0x4d, 0x43, 0x01, 0xfd}; // sample data setvoid receiveEvent(int howMany) { while(Wire.available()) { char c = Wire.receive(); // receive byte as a character }} void requestEvent() { Wire.send(outbuf, 8); // send data packet}void setup() { Wire.begin(0x52); // join i2c bus with address 0x52 Wire.onReceive(receiveEvent); // register event Wire.onRequest(requestEvent); // register event} void loop() { delay(100);}
since the Arduino BT board costs about 100 USD more than the Arduino NG..
Arduino BT $103 USD = 77 Euro from PCB Europe. = integrated code and hardware + ready to go.
The BT boards are in stock and they are being shipped allover the world.. (daniel got 35... and I'm getting 10 tomorrow...)
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php/topic,7756.0.html | CC-MAIN-2015-14 | refinedweb | 153 | 79.16 |
Introduction:
File is a collection of bytes stored in secondary storage device i.e. disk. Thus, File handling is used to read, write, append or update a file without directly opening it.
Types of File:
- Text File
- Binary File
Text File contains only textual data. Text files may be saved in either a plain text (.TXT) format and rich text (.RTF) format like files in our Notepad while Binary Files contains both textual data and custom binary data like font size, text color and text style etc.
Why we use File Handling?
The input and output operation that we have performed so far were done through screen and keyboard only. After the termination of program all the entered data is lost because primary memory is volatile. If the data has to be used later, then it becomes necessary to keep it in permanent storage device. So the Java language provides the concept of file through which data can be stored on the disk or secondary storage device. The stored data can be read whenever required.
Note: For handling files in java we have to import package named as java.io which contains all the required classes needed to perform input and output (I/O) in Java.
File Class in Java:
Files and directories are accessed and manipulated by the File class. The File class does not actually provide for input and output to files. It simply provides an identifier of files and directories.
Note: Always remember that just because a File object is created, it does not mean there actually exists on the disk a file with the identifier held by that File object.
For Defining a file in a File Class there are several types of constructors.
File Class constructors:
File class methods:
Listing 1: Check Permission on a File
import java.io.File; public class FileDemo { public static void main(String[] args) { File f = null; String[] strs = {"test.txt", "/test.println(a); // prints System.out.println(" is executable: "+ bool); // returns true if the file can be read Boolean w = f.canWrite(); // print System.out.println("File can be writing: "+w); // returns true if the file can be read Boolean r = f.canRead(); // print System.out.println("File can be read: "+r); } }catch(Exception e){ // if any I/O error occurs e.printStackTrace(); } } }
Output of the program is:
C:\test.txt is executable: True
File can be writing: False
File can be read: True
Listing 2: Program to Create and Delete a file (); } } }
Output of the Program is:
File deleted: false
createNewFile() method is invoked
File deleted: true
Listing 3: Program to Compare Two Files
import java.io.File; public class FileDemo { public static void main(String[] args) { File f = null; File f1 = null; try{ // create new files f = new File("test.txt"); f1 = new File("File/test1.txt"); // returns integer value int value = f.compareTo(f1); // argument = abstract path name if(value == 0) { System.out.println (" Both Files are Equal. "); } // argument < abstract path name else if(value > 0) { System.out.println ("First file is greater."); } // the argument > abstract path name else { System.out.println ("Second file is greater."); } // prints the value returned by compareTo() System.out.println("Value returned: "+value); }catch(Exception e){ e.printStackTrace(); } } }
Output of the Program is:
First file is greater.
Value returned: 14
Listing 4: Program to check whether it is a File or a Directory and check it is a hidden file
import java.io.File; public class FileDemo { public static void main(String[] args) { File f = null; String path; boolean bool = false; try{ // create new file f = new File("c.txt"); // true if the file path is a file, else false bool = f.isFile(); // get the path path = f.getPath(); // prints System.out.println (path+" is file? "+ bool); // create new file f = new File("c:/test.txt"); // true if the file path is a file, else false p = f.isDirectory(); // get the path path = f.getPath(); // prints System.out.println (path+" is Directory? "+p); // create new file f = new File("c:/test.txt"); // true if the file path is a file, else false h = f.isHidden(); // get the path path = f.getPath(); // prints System.out.println (path+" is Hidden? "+h); }catch(Exception e){ // if any error occurs e.printStackTrace(); } }
Output of the Program is:
C.txt is File? False
c:\test.txt is Directory? True
C:\test.txt is hidden? False
Conclusion
The java.io package contains many classes that our programs can use to read and write data. The java.io.File package provides extensive support for file and file system I/O. This is a very comprehensive API, but the key entry points are as follows:
- The Path class has methods for manipulating a path.
- The Files class has methods for file operations, such as moving, copy, deleting, and also methods for retrieving and setting file attributes.
- The File System class has a variety of methods for obtaining information about the file system. | http://mrbool.com/working-with-file-handling-in-java/27720 | CC-MAIN-2017-04 | refinedweb | 823 | 68.87 |
feof man page
Prolog
Synopsis
#include <stdio.h> int feof(FILE *stream);
Description
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.
The feof() function shall test the end-of-file indicator for the stream pointed to by stream.
The feof() function shall not change the setting of errno if stream is valid.
Return Value
The feof() function shall return non-zero if and only if the end-of-file indicator is set for stream.
Errors
No errors are defined.
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
clearerr(), ferror(), fopen()
ferror(3p), fgetc(3p), fgetwc(3p), fread(3p), gets(3p), stdin(3p), stdio.h(0p). | https://www.mankier.com/3p/feof | CC-MAIN-2018-05 | refinedweb | 146 | 61.43 |
[Solved]Pythonista built in images, index access Vrs named
Does anyone know if there is anyway to iterate through the built in images. If possible, I guess it would not be a single index. But a category, index something like that.
Just asking. My main reason for asking is just for getting random images for testing without having to define the names in a list and picking a random item from the list etc...
import os app_path = os.path.abspath(os.path.join(os.__file__, '../../../..')) os.chdir(app_path + '/Textures') print('\n'.join(os.listdir(os.curdir)))
@ccc, thanks so much. It is so cool, almost every problem/issue/want to have , seems to have a solution. I am sure you can access them in a similar way in 1.5 , maybe exactly the same way. In 1.6 I have noticed that the naming of the in built images has changed to something like 'iob:something' or 'cards:something' . I am guessing underneath somewhere @omz is planning to release a Module with dictionary access to these assets.
But what you have given is great. Can still filter etc... But it means more mess out of more testing code which is great!
app_path = os.path.abspath(os.path.join(os.__file__, '../..')) # Pythonista v1.5 app_path = os.path.abspath(os.path.join(os.__file__, '../../../..')) # Pythonista v1.6 Beta
When a post solves your issue, you should click that ^ at the upper right of the answer post. I do have my reputation to consider.
^ done, with kisses.
Joking aside, I do get it.
The second part of your title sticks in my head: index access v.s.named...
I find that when I want a random element from a sequence (tuple, list, dict, etc.) I always tend to favor random.choice() over messing with indexes.
those who have worked out how to do repos :)
Are you asking for a 12 step program?!?
- In Pythonista, click in your latest script and do "Select All".
- Click again and do "Copy".
- Hit the Home button and switch to Safari.
- Go to your old code in your Github repo.
- Click the pencil icon at the upper right of your old code.
- Click in your old code and do "Select All".
- Do "Paste".
- Scroll down below your code underneath the "Commit Changes" label.
- Enter a brief description of the changes made.
- In the box below that, write a more detailed description of the changes made.
- Click the "Commit Changes" button.
- You might have to repeat 11 on the next screen if you are not the owner of the repo (I.e. You are creating a Pull Request).
Try these 12 steps to update your VirtualViews repo and let me know if they work for you.
@ccc , i can see I frustrate you sometimes. But it's ok, frustrating people are often interesting people :)
Yes, I know how to create a manual repo. Maybe my problem is that I have been trying to automate the whole thing. Also dealing with pulls, merges etc... Worry me
But I will create a manual one and see how I go. No VirtualView at the moment though. In culinary terms it's deconstructed at the moment :).
I am on focusing only on the cell at the moment. Threaded and not. With the idea that the cell could be used in other containers other than just in a scrollview.
I don't think you or JonB will like it ;) But I have to try what I think also.
But will try to make a repo of that today or tomorrow.
I also will switch to choice :)
I am actually seldom frustrated. My writing might give the wrong impression but I am quite laid back. The nice thing about a repo is that you can always temporarily add a VirtualView_deconstructed.py to allow others to to look over your shoulder as you experiment. A few daze later when you have it sussed out, you can fold the changes into the main file and delete VirtualView_deconstructed.py.
Even, my best friends will tell you I am frustrating :) oh, yes, choice is a lot easier, thanks :)
@ccc , btw, I didn't mean it in a bad way. If you didn't care, you will just not speak at all :) I really do get your meaning. As I say, I have friends from all around the world, they all communicate a little differently, written and verbal. I like you care enough to tell me! Silence is not always golden :)
I am a respectful and decent person.from what I have seen, so are most if not all the people here, even the 13 and 14 yr olds. Great to see!
@ccc
12 steps?
In stash this is 4 steps or less
- type
git add VirtualView_deconstructed.py
- type
git commit
- type username, email, and commit message when prompted (this could have been included in step 2)
- type
git push
- click the + button for each changed file.
- type a commit message, and press commit
- press the push button.
(though i dont necessarily recomment gitview right now until i update the way branching works, and implement fetch/merge. right now it only includes pull, which gets hosed unless you are diligent about always pulling before committing new work, and pushing after committing, and not making changes on the remote in between... git in stash supports fetch/merge for the times when you have not been careful)
You guys missed my humor for my drinking buddy. Even if it took 3 steps or 20 steps, I would have made it a 12 step program.
Git is awesome but it has way too many modes and options. I like the web approach that I described above because it all happens in front of me on a ui that I can look at each step of the way.
On the Mac I use PyCharm to check in/out of repos and it works almost every time but it is not as bulletproof as the manual approach described above.
@ccc , I didn't miss or trip over the 12 step AA humour :)
Look really, I know it seems so easy for you guys. And I am sure it will be easy for me one day also. Hard to put yourself in a place of not knowing something you know so well sometimes.
Ask Stephen Hawkings to explain the universe to you, from at point of view he knows nothing about it. Hmmm, we he tried...failed big time in my opinion. He wrote a book about the Big Bang for beginners. I read it. I have always meant to write to him about that one! | https://forum.omz-software.com/topic/2176/solved-pythonista-built-in-images-index-access-vrs-named | CC-MAIN-2021-49 | refinedweb | 1,110 | 83.76 |
I am new to data visualization and am attempting to make a simple time series plot from an SQL query result using seaborn. I am having difficulty passing the data retrieved from the SQL query to seaborn. Can you give me some direction on how to visualize this dataframe with seaborn?
My Python Code:
#!/usr/local/bin/python3.5
import cx_Oracle
import pandas as pd
from IPython.display import display, HTML
import matplotlib.pyplot as plt
import seaborn as sns
orcl = cx_Oracle.connect('sql_user/sql_pass//sql_database_server.com:9999/SQL_REPORT')
sql = '''
select DATETIME, FRUIT,
COUNTS
from FRUITS.HEALTHY_FRUIT
WHERE DATETIME > '01-OCT-2016'
AND FRUIT = 'APPLE'
'''
curs = orcl.cursor()
df = pd.read_sql(sql, orcl)
display(df)
sns.kdeplot(df)
plt.show()
DATETIME FRUIT COUNTS
0 2016-10-02 APPLE 1.065757e+06
1 2016-10-03 APPLE 1.064369e+06
2 2016-10-04 APPLE 1.067552e+06
3 2016-10-05 APPLE 1.068010e+06
4 2016-10-06 APPLE 1.067118e+06
5 2016-10-07 APPLE 1.064925e+06
6 2016-10-08 APPLE 1.066576e+06
7 2016-10-09 APPLE 1.065982e+06
8 2016-10-10 APPLE 1.072131e+06
9 2016-10-11 APPLE 1.076429e+06
TypeError: cannot astype a datetimelike from [datetime64[ns]] to [float64]
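For context, here is a minimal, self-contained sketch (with made-up data mirroring the query output above; the column names are taken from the question) of why the TypeError occurs: passing the whole dataframe makes seaborn try to coerce every column to float, and the datetime column cannot be cast:

```python
import pandas as pd

# Made-up data mirroring the query output above
df = pd.DataFrame({
    'DATETIME': pd.to_datetime(['2016-10-02', '2016-10-03', '2016-10-04']),
    'FRUIT': ['APPLE'] * 3,
    'COUNTS': [1065757.0, 1064369.0, 1067552.0],
})

# datetime64 values cannot be cast to float, which is what raises
# the TypeError shown above when the whole frame is handed to kdeplot
try:
    df['DATETIME'].astype('float64')
except TypeError as exc:
    print('datetime column cannot be cast to float:', exc)

# The numeric column alone is fine, so e.g. sns.kdeplot(df['COUNTS'])
# would run (though it plots a distribution, not a time series)
print(df['COUNTS'].dtype)  # float64
```

So even before choosing the right plot type, selecting only the numeric column avoids the cast error.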
Instead of sns.kdeplot, try the following:

# make time the index (this will help with plot ticks)
df.set_index('DATETIME', inplace=True)
# make figure and axis objects
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
df.plot(y='COUNTS', ax=ax, color='red', alpha=.6)
fig.savefig('test.pdf')
plt.show()
The function kdeplot() is not what you want if you're trying to make a line graph. It does draw a line, but the line is meant to approximate the distribution of a variable rather than show how a variable changes over time. By far the easiest way to make a line plot is pandas' own df.plot(). If you want the styling options of seaborn, you can create your axis object with plt.subplots after importing seaborn (what I do). You can also use sns.set_style() like in this question.
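To illustrate the sns.set_style() suggestion, here is a sketch (again with made-up data, and with the time column already set as the index, as in the answer above; any of seaborn's built-in style names would do in place of 'darkgrid'):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_style('darkgrid')  # seaborn styling applied to plain pandas/matplotlib plots

# Made-up data with the time column as the index
df = pd.DataFrame(
    {'COUNTS': [1065757.0, 1064369.0, 1067552.0]},
    index=pd.to_datetime(['2016-10-02', '2016-10-03', '2016-10-04']),
)

ax = df.plot(y='COUNTS', color='red', alpha=.6)
plt.show()
```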