text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Thanks for the comments. One question inlined. Preparing another patch addressing the comments. Regards, Wei Mi. On Tue, Oct 15, 2013 at 1:35 PM, Jeff Law <law@redhat.com> wrote: > On 10/03/13 12:24, Wei Mi wrote: >> >> Thanks, >> Wei Mi. >> >> 2013-10-03 Wei Mi <wmi@google.com> >> >> * gcc/config/i386/i386.c (memory_address_length): Extract a part >> of code to rip_relative_addr_p. >> (rip_relative_addr_p): New Function. >> (ix86_macro_fusion_p): Ditto. >> (ix86_macro_fusion_pair_p): Ditto. >> * gcc/config/i386/i386.h: Add new tune features about >> macro-fusion. >> * gcc/config/i386/x86-tune.def (DEF_TUNE): Ditto. >> * gcc/doc/tm.texi: Generated. >> * gcc/doc/tm.texi.in: Ditto. >> * gcc/haifa-sched.c (try_group_insn): New Function. >> (group_insns_for_macro_fusion): Ditto. >> (sched_init): Call group_insns_for_macro_fusion. >> * gcc/sched-rgn.c (add_branch_dependences): Keep insns in >> a SCHED_GROUP at the end of BB to remain their location. >> * gcc/target.def: Add two hooks: macro_fusion_p and >> macro_fusion_pair_p. > > I'm not going to comment on the x86 specific stuff -- I'll defer to the port > maintainers for that. > > > >> index 61eaaef..d6726a9 100644 >> --- a/gcc/haifa-sched.c >> +++ b/gcc/haifa-sched.c >> @@ -6519,6 +6519,44 @@ setup_sched_dump (void) >> ? stderr : dump_file); >> } >> >> +static void >> +try_group_insn (rtx insn) > > You need a comment for this function. > Ok, will add comment for it. > > >> +{ >> + unsigned int condreg1, condreg2; >> + rtx cc_reg_1; >> + rtx prev; >> + >> + targetm.fixed_condition_code_regs (&condreg1, &condreg2); >> + cc_reg_1 = gen_rtx_REG (CCmode, condreg1); >> + prev = prev_nonnote_nondebug_insn (insn); >> + if (!any_condjump_p (insn) >> + || !reg_referenced_p (cc_reg_1, PATTERN (insn)) >> + || !prev >> + || !modified_in_p (cc_reg_1, prev)) >> + return; > > I'd test !any_condjump_p at the start of this function before calling the > target hook. If insn isn't a conditional jump, then all the other work is > totally useless. Ok. will fix it. > > Aren't you just trying to see if we have a comparison feeding the > conditional jump and if they're already adjacent? Do you actually need to > get the condition code regs to do that test? > Yes, I am trying to see if we have a comparison feeding the conditional jump and if they're already adjacent. Do you have more easier way to do that test? > >> + >> + /* Different microarchitectures support macro fusions for different >> + combinations of insn pairs. */ >> + if (!targetm.sched.macro_fusion_pair_p >> + || !targetm.sched.macro_fusion_pair_p (prev, insn)) >> + return; >> + >> + SCHED_GROUP_P (insn) = 1; > > I'm surprised that SCHED_GROUP_P worked -- I've tried to do similar stuff in > the past and ran into numerous problems trying to hijack SCHED_GROUP_P for > this kind of purpose. > > > >> >> static void haifa_init_only_bb (basic_block, basic_block); >> diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c >> index e1a2dce..156359e 100644 >> --- a/gcc/sched-rgn.c >> +++ b/gcc/sched-rgn.c >> @@ -2443,6 +2443,8 @@ add_branch_dependences (rtx head, rtx tail) >> cc0 setters remain at the end because they can't be moved away from >> their cc0 user. >> >> + Predecessors of SCHED_GROUP_P instructions at the end remain at the >> end. >> + >> COND_EXEC insns cannot be moved past a branch (see e.g. PR17808). >> >> Insns setting TARGET_CLASS_LIKELY_SPILLED_P registers (usually >> return >> @@ -2465,7 +2467,8 @@ add_branch_dependences (rtx head, rtx tail) >> #endif >> || (!reload_completed >> && sets_likely_spilled (PATTERN (insn))))) >> - || NOTE_P (insn)) >> + || NOTE_P (insn) >> + || (last != 0 && SCHED_GROUP_P (last))) >> { >> if (!NOTE_P (insn)) >> { > > This looks like a straighforward bugfix and probably should go forward > independent of this enhancement. Ok, I will separate it into another patch. > > Jeff | https://gcc.gnu.org/legacy-ml/gcc-patches/2013-10/msg01132.html | CC-MAIN-2020-40 | refinedweb | 541 | 53.07 |
01 March 2011 22:30 [Source: ICIS news]
HOUSTON (ICIS)--US February auto sales of passenger cars and light trucks increased by 27.3% year on year, according to sales data released on Tuesday, potentially raising demand for several chemicals.
The ?xml:namespace>
The American Chemistry Council (ACC) estimates that each automobile contains an average of $2,700 (€1,944) worth of chemicals.
Total sales for February 2011 were 993,387 units, up from 780,265 units in February 2010, according to Autodata.
Each of the six largest manufacturers in the
Meanwhile, sales for auto manufacturers outside the big six increased by 22% year on year, to 202,750 units in February 2011 from 166,126 units a year ago.
"After two months of improving sales, we feel good about the way 2011 has started," said
Month on month,
The table below reflects February sales data
($1 = €0.72)
For more on ABS | http://www.icis.com/Articles/2011/03/01/9439929/us-feb-auto-sales-jump-27.3-year-on-year-may-raise-chem-demand.html | CC-MAIN-2014-42 | refinedweb | 153 | 59.33 |
Efficiently tagging existing AWS resources03 Jan 2020
This guide is for you if you have a bunch of untagged AWS resources and want to understand your bills better.
Already familiar with Cost Explorer and tags? Skip to “Is there a more efficient way?”
If you’ve been using AWS for a while, you’ve probably noticed that pricing and understanding your bills is more complex here.
We can see which services cost how much, but to drill down on a project or department basis we need to add tags. This guide looks at how we can add those cost tags.
How do I start using tags for cost analysis?
Most AWS services support tags which are key-value-pairs that you define per resource (e.g. a REST API). An API for a management dashboard could have the following tags:
team: market-insights project: profit-tracking
You can use tags for many more use cases than cost tracking. Check out the AWS tagging strategies for more examples.
Tags however won’t show up in the Cost Explorer unless you tell AWS to start tracking them. Go to AWS’ Billing service and in the lefthand navigation click on cost allocation tags. If you’ve already defined tags somewhere, then those will show up in a table below. If you haven’t done so, tag some resources now and come back when you’re done. Click refresh to update the tag table.
The next step is to tell AWS which tags are relevant for cost analysis. In our example it’s the two “team” and “project”. Select and activate them. Your selected tags will now start being tracked. You might not see the tags in the Cost Explorer diagram for a day or two.
Now you could go into every single service and resource, manually add some tags and wait for the details to show up. You will however notice that this is quite tedious and that’s what we’ll look into in the next chapter.
Is there a more efficient way?
Yes! If your deployment tooling (e.g. the serverless framework) supports tags, then add them and redeploy your resources where possible. This will make sure your tags remain even if you remove and redeploy your stack. It’s also easier to define tags per stack than per resource.
For all other situations I suggest an iterative API based approach. Under the assumption that insignificant costs may be neglected, we go through three steps:
Identify the most expensive services which are untagged
Use a script to tag all resources of that service
Collect data and repeat
To identify expensive services which are untagged, we open the Cost Explorer and start by showing only those resources that are missing the tag.
Then we set the diagram’s granularity to Monthly, the type to Bar and group by Service. This gives us an overview of services which have not been tagged yet. You may also go for Daily, but should then shorten the time period to e.g. seven days.
We can now pick one or two of those services and tag all the resources. We’ll start by picking Lambda and scripting with Python and with the boto3 library.
Using the API we can list all functions, list tags for each function and add our tags if they are missing. Here is a simple script:
import sys import boto3 region = 'us-east-1' target_tag = 'project' client = boto3.client('lambda', region) functions = client.list_functions().get('Functions', []) for function in functions: tags = client.list_tags(Resource=function['FunctionArn']).get('Tags', []) if target_tag not in tags: print(f"Lambda {function['FunctionArn']} is missing tag {target_tag}") tag_value = None if 'aws-scheduler' in function['FunctionArn']: tag_value = 'serverless-scheduler' elif 'testing-mail' in function['FunctionArn']: tag_value = 'research' if tag_value is not None: client.tag_resource(Resource=function['FunctionArn'], Tags={target_tag: tag_value})
To figure out which names you have, just comment out the code starting at line 16. Then adjust the tag_value logic for your needs. Once you’ve run it for all your functions there should be no more uncategorised lambda costs anymore. It might take a day for Cost Explorer to show those changes, but then we can move on to the next service. Rinse and repeat until you tagged all your relevant cost drivers.
You can find an improved version of the script on GitHub.
Here’s how my Cost Explorer view changed after running the script for Lambda. The last two days show that we have less than 1$/day untagged.
What’s next?
Repeat the process: The next cost driver would be CloudWatch.
There’s an improved version ready for you on GitHub. I will extend this tooling for more services over time. Do you find it useful? Do you need help or would like to contribute for other services? Let me know!
Also follow AWS hero Yan Cui as he might release something related soon ;)
Haha yeah, it's in my backlog of things to catch up on after reinvent! Watch this space... patiently.. (lots to do after reinvent)— Yan Cui (@theburningmonk) December 6, 2019
Further Reads
GitHub repository with improved tooling
Script for tagging CloudWatch resources
Boto3 documentation for lambda (you can find all other services here too)
Enjoyed this article? I publish a new article every month. Connect with me on Twitter and sign up for new articles to your inbox! | https://bahr.dev/2020/01/03/efficient-resource-tagging/ | CC-MAIN-2020-50 | refinedweb | 899 | 73.78 |
Simulating stdin Inputs from User
I recently ran into a problem where I was trying to automate unit testing for a function that paused, mid-execution, and waited for a user to input some value.
For example
def dummy_fn(): name = input() return('Hello, ', name)
dummy_fn()
Nick ('Hello, ', 'Nick')
Simulating a user input wound up being a non-trivial thing to figure out, so I figured it beared writing a note involving:
- The
StringIOclass
- Temporarily overwriting
sys.stdin
StringIO
io.StringIO is used to convert your typical string into something that can be read as a stream of inputs, much like reading lines in a file.
For instance, imagine we’re trying to simulate loading a
.csv
from io import StringIO import csv
Here, the iterator is considering each unique character. Because every other “line” is “a comma that splits two empty strings” we get a bunch of nonsense.
for char in csv.reader('a,b,c,d,e'): print(char)
['a'] ['', ''] ['b'] ['', ''] ['c'] ['', ''] ['d'] ['', ''] ['e']
Whereas using
StringIO, we can tell python to read our input as one unified line.
f = csv.reader(StringIO('a,b,c,d,e')) for char in f: print(char)
['a', 'b', 'c', 'd', 'e']
Temporarily Overwriting
sys.stdin
sys.stdin is the default that gets called when you find yourself writing something that that uses the standard
input() function. It’s got this cryptic
TextIOWrapper for a repr, but it essentially takes whatever a user submits to standard in by typing and hitting Enter.
import sys sys.stdin
<_io.TextIOWrapper
But if we inspect, this
TextIOWrapper inherits from the same base class as
StringIO
sys.stdin.__class__.__base__
_io._TextIOBase
StringIO.__base__
_io._TextIOBase
Meaning we can leverage the same underlying functionality if we spoof a function designed to call
sys.stdin. In this case,
input()
with StringIO('asdf') as f: stdin = sys.stdin sys.stdin = f print("'" + input() + "' wasn't actually typed at the command line") sys.stdin = stdin
'asdf' wasn't actually typed at the command line
(Note: Because Jupyter Notebooks use a different
stdin scheme, this, ironically, is just markdown. But running it in IPython or regular ol’ Python works just fine)
Putting it All Together
with StringIO('Nick') as f: stdin = sys.stdin sys.stdin = f dummy_fn() sys.stdin = stdin
('Hello', 'Nick') | https://napsterinblue.github.io/notes/python/development/sim_stdin/ | CC-MAIN-2021-04 | refinedweb | 382 | 55.13 |
A Gatsby theme is a reusable block of a Gatsby site that can be shared, extended, and customized (source). It enables us to separate functionalities of our site to share, reuse, and modify in multiple sites in a modular way.
Early this week, Gatsby theme was announced stable! They have two official themes, the blog theme and the notes theme. They also have three starter sites (gatsby-starter-blog-theme, gatsby-starter-notes-theme, and gatsby-starter-theme) to get you started using the blog theme, notes theme, and both themes together respectively.
Using a starter site is ideal if:
- You want to get started quickly
- You don’t already have an existing site
However, I’d like to set up a Gatsby site from scratch to:
- get a better idea how the themes work, and
- see the minimum possible modifications to run a site
Follow along as I create a site, add the themes, add my own content and customizations! You can find the code for this post on my Github under the
using-official-themes-without-starter branch.
Table of contents
- Create a Gatsby site
- Install themes
- Modify theme options and metadata
- Add Markdown content and avatar image
- Shadow layout and bio components
- Customize the styles
⚠️ Note: This post describes my personal experience and perspective using the official themes for the first time. If you want to learn Gatsby themes, it’s a good idea to start from their docs and tutorial.
1) Create a Gatsby site
I do this by manually creating a minimal
package.json file in my root folder, then running
yarn install. You may also use any regular, non-theme starter site such as gatsby-starter-hello-world if you prefer.
{ "name": "eka-personal-site", "private": true, "description": "Personal site of @ekafyi", "version": "0.1.0", "license": "MIT", "scripts": { "build": "gatsby build", "develop": "gatsby develop", "start": "npm run develop", "serve": "gatsby serve", }, "dependencies": { "gatsby": "^2.13.4", "react": "^16.8.6", "react-dom": "^16.8.6" } }
2) Install themes
We are installing two official themes,
gatsby-theme-blog and
gatsby-theme-notes.
We do it the same way we install any regular Gatsby plugin; first we install the theme packages by running
yarn add gatsby-theme-blog gatsby-theme-notes.
Next, we add it to the
plugins array in
gatsby-config.js. I’m creating a new file as I’m starting from scratch; if you do this in an existing site, your config file would look different from mine. The exact content does not matter, as long as we add our themes in the
plugins like so:
// gatsby-config.js module.exports = { plugins: [ { resolve: `gatsby-theme-notes`, options: {} }, { resolve: `gatsby-theme-blog`, options: {} } ], siteMetadata: { title: "`Ekaʼs Personal Site`" } };
As you can see, I start with the most barebones config. I only have
title in my metadata and I have not modified any options yet. Let’s do that in the next step.
3) Modify theme options and metadata
How do we know what options are available to modify? I peek around and find two places where we can find that information:
- Published theme packages
- Theme files in
node_modules
At the time of writing, none of the three theme-specific starter sites provide exhaustive theme options list.
3a) Modify blog theme options
We can see the following theme options in the gatsby-theme-blog package README:
basePath
contentPath
assetPath
mdx
Let’s say we’d like to change the blog posts folder, from the default
/content/posts to
/content/writing. We can do so by passing
contentPath to the theme’s
options.
// gatsby-config.js module.exports = { plugins: [ // gatsby-theme-notes { resolve: `gatsby-theme-blog`, // Default options are commented out options: { // basePath: `/`, // Root url for all blog posts contentPath: `content/writing`, // Location of blog posts // assetPath: `content/assets`, // Location of assets // mdx: true, // Configure gatsby-plugin-mdx } } ], // siteMetadata };
The theme’s README also contains an additional configuration section that describes what
siteMetadata items are supported. I duly updated my config’s
siteMetadata with my name, site description, and social links.
3b) Modify notes theme options
As with the blog theme, we can find the theme options in the gatsby-theme-notes package README:
basePath
contentPath
mdx
homeText
breadcrumbSeparator
I’m going to modify the
homeText into “Home” and
breadcrumbSeparator into
». (Note: It turns out breadcrumbs are only for Notes in subfolders, so we will not see the breadcrumb in action in this post.)
// gatsby-config.js module.exports = { plugins: [ { resolve: `gatsby-theme-notes`, // Default options are commented out options: { basePath: `/notes`, // Root url for all notes pages // contentPath: `content/notes`, // Location of notes content // mdx: true, // Configure gatsby-plugin-mdx homeText: "Home", // Root text for notes breadcrumb trail breadcrumbSeparator: "»", // Separator for the breadcrumb trail } } // gatsby-theme-blog ], // siteMetadata };
You can see my full
gatsby-config.js file here.
Bonus: Theme files in
node_modules
So far, the starter sites are well-documented in terms of theme options. What if we use unofficial themes with minimum info in the package README? 😕
We can assess the theme files either in the theme’s repository, or even quicker, in our project’s
node_modules folder. For instance, to see the blog theme files, we can open
node_modules/gatsby-theme-blog. There we can see the entire how the theme code actually resemble a regular Gatsby site, and what options are available.
The screenshot above shows
node_modules/gatsby-theme-blog/gatsby-config.js. We can see the
options object passed into the config and used, among others, in the
gatsby-source-filesystem plugin that looks for our content files. We also learn that if we do not define
contentPath, then
content/posts is used as default.
So—we have installed and modified our themes, but we don’t have any content yet. Let’s add them in the next step.
4) Add Markdown content and avatar image
Now we are adding our content in Markdown files. Based on the previous step, we are creating a folder called
content in my project root with three folders in it:
content/writing— contain Blog Post files
content/notes— contain Notes files
content/assets— I don’t know what exactly “assets” are, so I’m going to leave this empty
I’m going to do this via the command line, though you may do so elsewhere (from Finder, Windows Explorer, or your code editor).
mkdir content content/writing content/notes content/assets
I create a short blog post in
content/writing/hello-world.mdx and a note in
content/notes/hello-note.mdx. You can see my
content folder here.
So far, we have: installed the theme, modified theme options, and added content. Is it possible to have a site running without even a
src folder? We’re going to find out as I run the site for the first time.
I run
gatsby develop and got the following error:
There was an error in your GraphQL query: - Unknown field 'childImageSharp' on type 'File'. File: node_modules/gatsby-theme-blog/src/components/bio.js
I open the offending component and discover that we are required to have a PNG/JPG/GIF image file called
avatar.
// node_modules/gatsby-theme-blog/src/components/bio.js const bioQuery = graphql` query BioQuery { site { siteMetadata { author } } avatar: file(absolutePath: { regex: "/avatar.(jpeg|jpg|gif|png)/" }) { childImageSharp { fixed(width: 48, height: 48) { ...GatsbyImageSharpFixed } } } } `
I peek at the blog theme starter and see that we should have the avatar image in our
content/assets folder. I duly add a (badly, faux-artsy coloured selfie) avatar there and re-run the app. Aaaand… it works!
The site title, avatar image, and the social links correctly point to mine. I’ve got a site running without even having a
src folder! 😯
However, there are several issues:
- The bio text still uses the default (it’s not mentioned in the theme’s README or the starter 😕)
- The
/notesdirectory does exist and show my Notes content, but it’s not linked from the header navigation
Next we’re going to “shadow” the components to solve those issues.
5) Shadow layout and bio components
Component Shadowing is a technique that allows us to override a theme’s components without directly modifying or forking the theme.
Now we are going to shadow three components:
- Blog theme’s bio text -> to use my own bio text
- Blog theme’s header -> to add “Notes” link to the navigation
- Note theme’s layout -> so it matches the rest of the site (ie. matches the Blog pages)
For the second and third components, I copy from the gatsby-starter-theme as it seems to be the fastest way!
5a) Shadow Blog theme’s bio component
I first check the Blog theme’s
bio.js component, but turns out it renders another component called
<BioContent>. I open
bio-content.js and yes, that’s our culprit.
Steps to shadow a theme’s file:
- Create a folder with the theme name in our
srcfolder
- Example: To shadow
gatsby-theme-blog, I create the folder
src/gatsby-theme-blog
- Create the component file in the folder above with the file/folder structure resembling the theme’s structure after
src
- Example: The original file we want to shadow is
node_modules/gatsby-theme-blog/src/components/bio-content.js. We copy
components/bio-content.jsinto our theme folder from the step above. Hence our file is in
src/gatsby-theme-blog/components/bio-content.js.
TL;DR version, relative from our project root:
- Original:
node_modules/gatsby-theme-blog/src/components/bio-content.js
- Shadow:
src/gatsby-theme-blog/components/bio-content.js
I create a simple file duplicating the original
bio-content.js with the Bio text changed.
// src/gatsby-theme-blog/components/bio-content.js import React, { Fragment } from "react" export default () => ( <Fragment> Personal site of Eka, front-end web developer and competitive napper. </Fragment> )
I restart the app and now it shows my bio text. 👌🏾
5b) Shadow Blog theme’s header component
For the header component, if I were to do what I did with the bio component (ie. export a new component), I’d be overriding the entire header.
// src/gatsby-theme-blog/components/header.js import React, { Fragment } from "react" export default () => ( <Fragment> My custom header <br/> The entire header is gone! 😱 </Fragment> )
It’s not what I want because for now I’m happy with the site title, dark mode toggle button (both UI and functionality), and the bio; all I want to do is to add a link to the Notes page.
Here we can see that shadowing is more than just overriding a component. We can also interact with the theme’s component, along with its original props, as needed.
As shown in the Blog theme’s
header.js, the
<Header> component accepts
children prop between the site title and the dark mode switch, where we can pass our content.
Now we’re going to: (1) create the shadowing file in our site, (2) import the header component, and (3) render the header with our custom
children.
// src/gatsby-theme-blog/components/header.js import React from "react"; import Header from "gatsby-theme-blog/src/components/header"; export default props => { return ( <Header {...props}> <div style={{ color: "red" }}>My custom header</div> </Header> ); };
It works—I can add my own content without having to rewrite the entire header component! 💃🏽
You can also pass props to the component (provided the component supports it). For instance, here I modify the
title prop into “My Custom Title”.
// src/gatsby-theme-blog/components/header.js import React from "react"; import Header from "gatsby-theme-blog/src/components/header"; export default props => { return ( <Header {...props} <div style={{ color: "red" }}>My custom header</div> </Header> ); };
Here is the result.
Finally, I’m going to add a link to the Notes page with the code from gatsby-starter-theme/header.js. Here we use functionalities from Theme UI, a theming library used by the Blog theme. In a nutshell, Theme UI’s
Styled component and
css prop allow us to use HTML element with the theme’s
theme-ui styles, for example to match the theme’s
heading font family.
Styled also supports the
as prop (popularized by libraries like Emotion and Styled Component), so we can take advantage of Gatsby’s built-in routing through the
Link component with
<Styled.a as={Link}> (meaning: use
<Link> component with
<a> style).
import React from "react"; import { Link } from "gatsby"; import { css, Styled } from "theme-ui"; import Header from "gatsby-theme-blog/src/components/header"; export default props => { return ( <Header {...props}> <Styled.a as={Link} to="/notes" css={css({ // styles })} > Notes </Styled.a> </Header> ); };
It works! You can see the full code here.
5c) Shadow Note theme’s layout component
We already have a Notes page at
/notes (ie. localhost:8000/notes), but it does not have the header and footer yet. That’s because this view comes from the Notes theme, separate from the Blog theme, which renders the header and footer.
Now we are going to shadow the Layout component in
src/gatsby-theme-notes/components/layout.js, import the Blog theme’s Layout component, and wrap our content in the latter.
As with the previous step, the shadowing component in our site gets the props from the original component (ie. Notes theme’s Layout), so we can wrap the entire
props.children (ie. Notes content) without having to rewrite anything else.
// src/gatsby-theme-notes/components/layout.js import React from "react" import BlogLayout from "gatsby-theme-blog/src/components/layout" export default props => <BlogLayout {...props}>{props.children}</BlogLayout>
Restart the app, and voila, the Blog theme layout (header and footer) now applies to the Notes section, too!
6) Customize the styles
Unless you happen to like the theme’s default purple, in all likelihood you would want to modify your theme-powered site’s visual styles such as colours and typography.
The Blog theme uses the theming library we discussed briefly above, Theme UI. Theme UI itself works as a “theme plugin” that exports a
theme object from
gatsby-theme-blog/src/gatsby-plugin-theme-ui. Check out Theme UI’s docs to read more about the
theme object.
The Blog theme breaks down the
theme-ui object into separate files (colors, components, etc) that are imported in the
gatsby-plugin-theme-ui index file. Accordingly, if we only want to customize the colors, we can shadow the
colors.js file, and so on.
We customize the styles by shadowing the
gatsby-plugin-theme-ui file(s) the same way we shadow any other components. To shadow
node_modules/gatsby-theme-blog/src/gatsby-plugin-theme-ui/colors.js, for example, we take the part after
src (
gatsby-plugin-theme-ui/colors.js) and put it in our shadowing folder,
src/gatsby-theme-blog. Thus, we create our file at
src/gatsby-theme-blog/gatsby-plugin-theme-ui/colors.js.
Now we’re going to modify the colors, using the Blog theme starter’s file as reference. As we don’t want to replace all the colors, we import the theme’s default theme colors and merge them with our modified colors. We also import lodash’s
merge to deep-merge the style objects. It’s not required but it helps us do the deep merge; we may omit it if we want to code the deep merge ourselves OR if we don’t need to merge with the default theme (ie. we rewrite the entire theme styles).
// src/gatsby-theme-blog/gatsby-plugin-theme-ui/colors.js import merge from "lodash.merge"; import defaultThemeColors from "gatsby-theme-blog/src/gatsby-plugin-theme-ui/colors"; export default merge({}, defaultThemeColors, { text: "rgba(0,0,0,0.9)", primary: "#0e43c5", background: "#fff1c1", modes: { dark: { text: "rgba(255,255,255,0.9)", primary: "#f7e022", background: "#151f48" } } });
Other theme styling attempts:
gatsby-plugin-theme-ui/typography.js
- Result: ✅❌ Partial success. I could change
fonts.bodyfrom the default Merriweather typeface to system-ui, but I could not change
fonts.heading. It’s likely because the
fonts.headingvalue is overridden into Montserrat in
gatsby-plugin-theme-ui/index. Which brings us to…
gatsby-plugin-theme-ui/index.js
- Result: ❌ Fail. My shadowing
index.jsdoes not seem to get detected. I test by running
console.log(‘Hello’), which did not get printed.
gatsby-plugin-theme-ui/styles.js
- Result: ✅ Success! I modify the hover link style to add underline and use the
secondarycolor.
You can see those three files here.
Note about theme order: If multiple themes use
theme-ui, the last theme specified in the
plugins array in our
gatsby-config.js wins.
This is the end result of the steps in this post.
Conclusion
Here are my impressions after trying the official themes.
- Themes help you get started building a simple, basic Gatsby site quickly without even needing a
srcfolder. More advanced users can take advantage of themes to create modular, extendable, composable blocks of their site (though I have not personally got to this point).
- The official themes are a good place to start using, modifying (through shadowing), and dissecting themes.
- The difficulty level of using and shadowing themes highly depends on the theme’s documentation, eg. what options are available, what data are required.
Do you have examples of non-official themes that you build and/or use? Let me know in the comments!
Next stop, learn to do more advanced customizations and/or build my own theme. Thanks for reading, til next post! 👋🏾
Discussion
thanks a lot!! helped me setup and understand | https://practicaldev-herokuapp-com.global.ssl.fastly.net/ekafyi/using-and-customizing-multiple-official-gatsby-themes-from-scratch-without-starter-sites-2441 | CC-MAIN-2020-40 | refinedweb | 2,924 | 55.84 |
Important changes to forums and questions
All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com.
3 years, 1 month ago.
How do I reset the time printed when the user presses the blue user button?
I'm trying to programme my Nucleo ST303RE board so that when a user presses the user button it resets the time printed out to the original start time of Tue Nov 28 00:00:00 2017.
The code I have so far is:
- include "mbed.h"
int main() { set_time(1511827200);
while (true) { time_t seconds = time(NULL);
printf("Time = %s", ctime(&seconds));
wait(1); } }
2 Answers
3 years ago.
Andy A's answer helped me to set the time too. Thank you.
I just wanted to add a note regarding user button contact bounce on the Nucleo boards...
The schematic for the Nucleo boards can be found at Here is a snippet of the schematic showing the blue user button B1
From the schematic, when B1's contacts change from closed to open, the voltage at pin PC13 increases at an exponential rate with a time constant of 0.48ms (defined by the values of resistors R29, R30 and capacitor C15). I found that this design inhibited contact bounce when the button was pressed. (I checked this with a digital storage oscilloscope probe on PC13.)
I'm not sure if push button action is damped on other mbed boards. As Andy A says, the issue is solvable with code. The R-C combination could possibly be an alternative method if an external pushbutton switch is required.
3 years, 1 month ago.
There are two ways to do this:
1) In your while loop you could check for the button being pressed and call set_time if it is. The problem with this is that because of your wait you may miss any button press of less than a second. You will also keep time reset while the button is down so you'll end up counting from when it is released not when it is pressed, that may or may not be what you want. These issues are all solvable with a little more code.
2) You can use an interrupt to reset the time the instant the button is pressed. This will catch a button press no matter how short and only detect it being pressed not released. The down side is that due to contact bouncing (mechanical switches suffer from issues where the contacts physically bounce a tiny amount when closed generating lots of short pulses, google de-bouncing for details on different ways to handle this nicely) you could end up resetting the time several times each button press. Not an issue in this situation but something to keep in mind for more complex problems.
Here's the code for option 2. In the future please use
<<code>> your code goes here <</code>>
so that the code is displayed correctly.
#include "mbed.h" InterruptIn userButton(USER_BUTTON); void resetTime(void) { set_time(1511827200); } int main() { resetTime(); userButton.rise(resetTime); while (true) { time_t seconds = time(NULL); printf("Time = %s", ctime(&seconds)); wait(1); } } | https://os.mbed.com/questions/81376/How-do-I-reset-the-time-printed-when-the/ | CC-MAIN-2021-31 | refinedweb | 530 | 70.84 |
By Theo van Kraay, Data and AI Solution Architect at Microsoft
Azure Machine Learning Studio is Microsoft’s graphical tool for Data Science, which allows for deploying externally generated machine learning models as web services. This product was designed to make Data Science more accessible for a wider group of potential users who may not necessarily be coming from a Data Science background, by providing easy to use modules and a drag and drop experience for various Machine Learning related tasks.
For those looking for an integrated, end-to-end advanced analytics solution, Azure Machine Learning Workbench might be the better option. Data scientists can use it to prepare data, develop experiments, and deploy models at cloud scale. Go here for a full end-to-end tutorial on how to prepare (part 1), build (part 2), and deploy/operationalise your models as web services using Docker (part 3) with Azure Machine Learning Workbench.
This article will focus on deploying models using Studio, the graphical interface. The purpose of this article is to take you through how to deploy an externally trained and serialised sklearn Python machine learning model, or a pre-saved model generated in R, as a web service using the Studio features.
First, we generate a simple model in Python using the pickle module, training the model using a .csv file that contains a sample from iris data set:
import pickle import sys import os import pandas import numpy as np from sklearn.metrics import confusion_matrix from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.metrics import precision_recall_curve # create the outputs folder os.makedirs('./outputs', exist_ok=True) # load Iris dataset from a DataPrep package as a pandas DataFrame iris = pandas.read_csv('iris.csv') print ('Iris dataset shape: {}'.format(iris.shape)) # load features and labels X, Y = iris[['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width']].values, iris['Species'].values # split data 65%-35% into training set and test set X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.35, random_state=0) # change regularization rate and you will likely get a different accuracy reg = 0.01 # load regularization rate from argument if present if len(sys.argv) > 1: reg = float(sys.argv[1]) print("Regularization rate is {}".format(reg)) # train a logistic regression model on the training set clf1 = LogisticRegression(C=1/reg).fit(X_train, Y_train) print (clf1) # evaluate the test set accuracy = clf1.score(X_test, Y_test) print ("Accuracy is {}".format(accuracy)) # serialize the model on disk in the special 'outputs' folder print ("Export the model to model.pkl") f = open('./outputs/model.pkl', 'wb') pickle.dump(clf1, f) f.close()
In the above example, the .csv file just contains a sample of the iris data set. The “species” column is the classification that our Logistic Regression model is going to predict, based on the four features of Sepal and Petal length and width:
The above code will save the serialized model into the outputs folder. We take the model.pkl file, zip it, and upload it into Azure Machine Learning Studio (sign up here if you have not already done so). Click the “New” icon in the bottom left:
In the pane that comes up, click on dataset, and then “From Local File”:
Select the zip file where you stored your serialized sklearn model and click the tick:
We are also going to create an iris_input.csv file that will be used to model the request input to the web service (note that this will not have the “species” column, as this is a score label):
Use the same process as above to upload your iris_input.csv. Next, hit “new” and this time click “Blank Experiment”:
You will be presented with the experiment canvas:
In the “search experiment items” box, search for each of the below, and drag each into the canvas:
- Your serialized “model.zip” that you uploaded earlier
- Your “iris_input.csv”
- A module named “Execute Python Script”
When they are on the canvas, connect iris_input.csv and model.zip to the “Execute Python Script” module as illustrated below:
Highlight the execute Python Script Module, and an Execute Python Script pane will appear, click the highlighted icon below to expand it so you can edit the code (note: you will need to ensure that the Python version selected contains a version of the pickle module that matches the one used to originally create the serialized model) :
Replace the auto-generated code with the simple script below:
import pandas as pd import sys import pickle def azureml_main(dataframe1 = None, dataframe2 = None): sys.path.insert(0,".\Script Bundle") model = pickle.load(open(".\Script Bundle\model.pkl", 'rb')) pred = model.predict(dataframe1) return pd.DataFrame([pred[0]]),
Click the tick, ensure you save the experiment using the icon in the bottom left, and then hit “Run” to run the experiment. This will de-serialise the model into the Azure Machine Learning Studio Environment.
When finished, the Execute Python Script module should have a green tick, You can now hit “Set up web service”:
This will generate web service input and output modules. By default the output module will connect from the 2nd output port of the Python script. You will need to change this so that it connects from the 1st port, which is the result data set. Make sure your pane looks like the below:
Save the experiment. Before deploying the web service, you will need to run the experiment again (this is so Machine Learning Studio can determine the correct inputs and outputs from running the end-to-end model). When this is run and you have a green tick, you can hit “Deploy Web Service”:
This will take you to a screen with information about the newly provisioned web service, including the API key which you should store for later:
If you click on Request/Response, this will open a new window with a comprehensive set of information about calling the web service, including Swagger documentation, and sample client API code:
We can follow exactly the same process for externally generated and saved models in R. For example, we can use a Support Vector Machine to train a model using the same iris.csv file mentioned earlier, in R:
library(kernlab) rbf <- rbfdot(sigma=0.1) irisSVM <- ksvm(Species~.,data=iris,type="C-bsvc",kernel=rbf,C=10,prob.model=TRUE) save(irisSVM, file = "c:/irisSVMmodel.rda")
We can zip and import the irisSVMModel.rda file created above. The R script to load the saved model in Azure Machine Learning Studio, using “Execute R Script” instead of “Execute Python Script”, would be as below:
library(kernlab) dataset1 <- maml.mapInputPort(1) # class: data.frame load("./src/irisSVMmodel.rda"); prediction <- predict(irisSVM, dataset1, type="probabilities") dataframe <- prediction index <- which.max(dataframe) df <- colnames(dataframe) result <- data.frame(df[index]) # Select data.frame to be sent to the output Dataset port maml.mapOutputPort("result");
We can publish in the same way:
Performance considerations
Although performance may be adequate for small models and limited throughout, since the Azure Machine Learning Environment is a managed service (where you are not in control of the physical resources) and the model is being de-serialized at runtime for each execution, you may need to consider the performance characteristics.
It would be advisable to test/evaluate the following, and ensure you are content with the results:
- Is there a “spin up” latency after some inactivity? What are the characteristics of that?
- What is the average latency for 100 sequential requests
- What is the average latency for 10 parallel requests done 10 times in a sequence?
- Does changing the scale of the web service affect these statistics?
CICD
Additionally, whilst the graphical user interface is a convenient tool for deploying web services adhoc, you may also need to consider the Continuous Integration/Continuous Delivery (CICD) paradigm within your organisation. The methods discussed above assume that the Data Scientist has control over the deployment process and/or this process is not automated, but in the context of CICD, such approaches may be inadequate. Automation of web service deployment is possible through the Azure Machine Learning Studio automation SDKs/APIs. For more details, see here for R, and here for Python. As mentioned above, you may also prefer to use AML Workbench for this.
For more information on the various Machine Learning Environments and capabilities in Microsoft Azure, go here. | https://blogs.technet.microsoft.com/uktechnet/2018/04/25/deploying-externally-generated-pythonr-models-as-web-services-using-azure-machine-learning-studio/ | CC-MAIN-2019-04 | refinedweb | 1,395 | 54.02 |
go to bug id or search bugs for
Description:
------------
When WDDX module is compiled separately from main PHP build, the module contents are not actually compiled due to #ifdef HAVE_WDDX. Since config.h is not included,HAVE_WDDX is not defined in this case and empty .o file is produced.
This problem also affects PHP 4.4.2.
Fix is to include config.h on top:
--- ext/wddx/wddx.c.orig 2006-04-22 12:18:32.000000000 +0200
+++ ext/wddx/wddx.c
@@ -20,2 +20,6 @@
+#ifdef HAVE_CONFIG_H
+#include "config.h"
+#endif
+
#include "php.h"
Reproduce code:
---------------
Build as separate module (i.e. not compiled into php) and load via extension=wddx.so
Expected result:
----------------
Module loads
Actual result:
--------------
php prints warning when loading the extension:
PHP Warning: Unknown(): Invalid library (maybe not a PHP library) 'wddx.so' in Unknown on. | https://bugs.php.net/bug.php?id=37162 | CC-MAIN-2015-27 | refinedweb | 141 | 63.05 |
In this article, a translator is written which takes a program written in BASIC and converts it to JavaScript. The purpose of this project is not just for the nostalgia of being able to write programs in BASIC again, but also to demonstrate how one can go about writing a compiler in C#.
Being able to define a language and write a compiler for it will not only give you insight into how the programming languages you use work, but it's useful in a number of real-world scenarios, such as writing a custom rules engine for a business application, allowing things such as levels in a game to be describable in a simple language, writing IDE add-ins, or writing a domain-specific language.
There are two demonstrations available for this project. In the first one, you can play a little game I wrote, and in the second, you can write your own BASIC program, have it converted to JavaScript, and then run it in your browser.
Many programmers who grew up in the 80s and 90s learnt programming in a language called BASIC, whether it was using a Commodore 64/VIC-20/etc. plugged into their TV, or programming in GW-BASIC on their 286. For my part, I was given a Dick Smith VZ200 at the age of 8, which came with a stunning 8KB of RAM and loaded programs from cassette tapes. The games it came with very quickly/instantly became horribly boring, but the ability to program the machine made it infinitely interesting. The first BASIC program I and most people learnt generally looked something like this:
10 print "Hello world"
20 goto 10
This would generate something like the following output (and would never stop):
Side note: "Hello world" would often be replaced by something a little more whimsical, such as "You suck", especially when the 10-year-old programmer was faced with a bank of Commodore 64s and Amigas at the local electronics shop.
The main points to note from such a simple basic program is that each line is numbered, flow is controlled using harmful "goto" statements, and there is a print statement for printing to the console. There are no classes, and no type declarations. A variable is appended with the '$' character, and there are other statements such as input to get user input, and if-then-else statements, which cannot span more than one line. Comments are created by starting the line with "REM" (short for "remark", not a nod to a certain musical group, nor a phenomenon which we all regularly experience several times a night). Using this information, we can write the following simple program:
goto
print
input
if-then-else
REM
10 print "Hello"
20 rem Get the user to enter their name, and hold the result in the variable a$
30 input "Please enter your name" a$
40 rem Quit if the user entered "quit"
50 if a$ = "quit" then goto 80
60 print "Hello " a$
70 goto 30
80 print "Goodbye"
What this program does should be pretty obvious, but feel free to see it in action here. Using the constructs here, and a few other statements such as for and while loops, and the gosub/return statements, reasonably sophisticated text-based games could be made. The turning point for me, though, was when I found out about the LOCATE statement, which allows you to place the cursor anywhere on the screen (as opposed to the next line, when using the print statement), and inkey$, which allows you to get the last key the user pressed:
for
while
gosub
return
LOCATE
inkey$
10 locate 10,4
20 print "Down here"
30 locate 2, 10
40 print "Up here"
Run it
10 a$ = inkey$
20 if a$ <> "" then print "You pressed " a$
30 goto 10
With this knowledge, I was able to draw a spaceship (>=-) at a certain position, and when the 'A' button was pressed, remove the ship from its current location by printing three spaces over the top of where it was, LOCATEing the cursor one row above the last position, and then redrawing the ship. Same-deal-different-direction for the 'Z' key. Animation was created!
I drew a second ship, and using for loops, allowed them to fire bullets (full-stops: '.') at each other. I wrote this game, calling it "Space War", when I was nine, and spent hours on it. Unfortunately, the cassette player on my computer could only load programs; not save them. So, at the end of the day, I had to eventually turn off my computer, and it was lost forever. I later remade more elaborate versions using QBASIC when I had a PC, and fairly recently - for reasons I'm sure only a psychologist versed in computer science can answer - I wanted to recreate it by writing it in old-fashioned BASIC and converting the code to JavaScript. When I read about the Irony .NET Compiler Construction Kit, it was like seeing a vision: fate could not be fought, and the JSBasic compiler project was born.
The implementation of this project can be nicely split into two halves: compiling a BASIC program, and generating JavaScript.
In order to read BASIC source code and generate script from that, the source code must be parsed to pick up line numbers, keywords, variable names, strings, loops, if statements, and everything else in the language. Once parsed, an abstract syntax tree is constructed, which is to say the source code is converted into a tree, with the root node representing the whole program. You can think of each line as a child of the root node, where each line has more child nodes (for example, an if statement might be a child node of a line, and the condition of the if statement and the then part of the if statement are then children of the if statement node).
if
then
Once the abstract syntax tree is built, the tree can be traversed and code can be generated.
There are a lot of tools available for this, but to me, it has always seemed a little bit too much work to get the tools together in a .NET project and generate a parser and do whatever else is needed. And then, one day I read about Irony on The Code Project, which allows you to write the grammar of the language in C#, and then it takes care of parsing and building the tree. I immediately saw the benefit in this, and realised that this is the sort of project which could enable the implementation of the Interpreter pattern to become much more widespread, what with its barrier-removing-goodness.
The key to language implementation is to understand the aforementioned "grammar" of the language. This requires you to describe how the language you are writing the compiler for works, and requires quite a bit of recursive thinking. In terms of BASIC though, we can say the following:
Using the descriptions above, the following program...
...can be represented as a tree as follows:
Terminology-time: The orange nodes above are "Terminals" as they come at the end of each branch in the tree; the blue nodes are Non-Terminals.
This is somewhat of a simplification, as a line can actually be a statement list, where each statement is separated by a colon, and goto and print have their own non-terminals (GotoStmt and PrintStmt), but you get the idea.
GotoStmt
PrintStmt
The problem that Irony solves is how to convert a source code into a tree, such as that above. The first step is to describe the grammar formally. A common way to describe a grammar is using the Backus-Naur form (BNF), which is not a required step for using Irony, but it is helpful to do this first. The above descriptions in BNF would be as follows:
<program> ::= <lines>
<lines> ::= <lines> <line> | <line>
<line> ::= <number> <statement>
<statement> ::= "print" <expression>
| "input" <expression> <variable>
| "if" <expression> "then" <statement>
| "if" <expression> "then" <statement> "else" <statement>
| "goto" <number>
| <variable> "=" <expression>
<expression> ::= <string>
| <number>
| <variable>
| <expression> <binary operator> <expression>
<binary operator> ::= "=" | "<>" | "<" | ">" | "<=" | ">="
This next section explains the background to the <lines> node above. This is a recursive definition which essentially states that a program can have one or more lines. This section is not necessary when using Irony however, as a much more natural construct exists which allows you to bypass this step. However, the following explanation has been left to give further background to those who are interested.
When converting from English to BNF, the biggest difference is in the way a line like "A PROGRAM is made up of LINES" is written. In the first line, we say that a <Program> is made up of <Lines>. <Lines> is then followed by a <Lines>, or a <Line>: this is where the recursive thinking is required. If you think of a program with one <Line>, then you can see that one <Line> can be considered as a <Lines> node. If we now add a second line to this imaginary program, then we can say that the first <Line> matches (and gets represented as) a <Lines> node, and then that <Lines> node with the second <Line> matches the "<lines> <line>" rule, and that can get reduced to a <Lines> node.
<Program>
<Lines>
<Line>
<lines> <line>
This, in fact, means that when defined this way, the <Program> node will have one child node, <Lines>, which in turn will have only two child nodes for any program over one line: the first child will be a <Lines> node, the second child will be a <Line> node. The above diagram was therefore a simplification; the "Hello World" program would actually look like the following:
This may seem a roundabout way to represent the idea that a node can have one-or-more of a certain type of child nodes, but that's the way it's traditionally been done. The good news is that you do not need to use this type of construct when using Irony; instead, you can use the much more intuitive concept that a program has one or more line nodes. Therefore, the first tree shown above is in fact the correct representation when using Irony.
Once you have defined the grammar of your language, you need to write this as a C# class. Using Irony.NET, you create a new class which extends the abstract Grammar class, and in the constructor, build the equivalent of the above Backus-Nauf Form description of the language. You need to declare each part of the language, such as:
Grammar
NonTerminal PROGRAM = new NonTerminal("PROGRAM");
NonTerminal LINE = new NonTerminal("LINE");
// etc
And also declare the terminals:
Terminal number = new NumberTerminal("NUMBER");
Terminal stringLiteral = new StringLiteral("STRING", "\"", ScanFlags.None);
Terminal variable = new VariableIdentifierTerminal();
In this case, NumberTerminal and StringLiteral are classes supplied with Irony to identify numbers and quoted strings "like this" in the source code, and VariableIdentifierTerminal is a class written by myself which extends Terminal and is used to match strings of alphanumeric characters ending with the '$' character (i.e., BASIC variables).
NumberTerminal
StringLiteral
VariableIdentifierTerminal
Terminal
Once we have those formalities out of the way, we can get down to business. The following is a subset and simplification of what ended up in the final grammar of JSBasic:
PROGRAM.Rule = MakePlusRule(PROGRAM, null, LINE);
LINE.Rule = number + STATEMENT + NewLine;
STATEMENT.Rule = EXPR | ASSIGN_STMT | PRINT_STMT | INPUT_STMT | IF_STMT |
| IF_ELSE_STMT | BRANCH_STMT;
ASSIGN_STMT.Rule = variable + "=" + EXPR;
PRINT_STMT.Rule = "print" + EXPR_LIST;
INPUT_STMT.Rule = "input" + EXPR_LIST + variable;
IF_STMT.Rule = "if" + CONDITION + "then" + STATEMENT;
IF_ELSE_STMT.Rule = "if" + CONDITION + "then" +
STATEMENT + "else" + STATEMENT;
EXPR.Rule = number | variable | EXPR + BINARY_OP + EXPR | stringLiteral;
Remember, this is all compilable C#, with operator overloading provided by Irony to allow a pretty close translation from BNF to C#. It's really a nice way to define a language's grammar.
Note the first line in the code above, which is how you specify that a NonTerminal (in this case, PROGRAM) has one or more LINE nodes, which is one of my favourite features of Irony, as it simplifies the generated tree. There is also a MakeStarRule for zero or more nodes. When compiled, the Program node in the abstract syntax tree will have a collection of Line nodes.
PROGRAM
LINE
MakeStarRule
Program
Line
Important note: when using MakePlusRule or MakeStarRule, you cannot have anything else in the rule.
MakePlusRule
Making your language case-insensitive couldn't be much easier. In the constructor of your grammar, you just set a property, and then all keywords such as "if" will work nO maTTer WHAT the CaSiNg.
this.CaseSensitive = false;
BASIC lines are ended with line breaks. Languages, like JavaScript, which are ended with characters such as the semi-colon are generally easier to handle; simply add ";" to the end of the rule. However, if the language you are implementing has lines which end with line-breaks, then the following steps are required:
1: End the rule with "NewLine", which is defined in the base Grammar class:
NewLine
LINE.Rule = number + STATEMENT + NewLine;
2: Because Irony ignores whitespace (including line breaks) when scanning the source code, you need a way to resurrect the line breaks when the Abstract Syntax Tree is being created. Fortunately, this can be done with the following line, which can go anywhere in your grammar's constructor:
this.TokenFilters.Add(new CodeOutlineFilter(false));
The CodeOutlineFilter can also help if you need to know the indentation of the source code, and I believe it can (or will in a future release) allow you to handle languages like VB which use characters to join lines together (e.g., by using an underscore). See the parsers included in Irony for examples.
CodeOutlineFilter
Irony provides a terminal which matches comments. To match BASIC comments (i.e., statements starting with "REM" and ending with a line break), the following declaration was used:
Terminal comment = new CommentTerminal("Comment", "REM", "\n");
And, this was then used in the statement rule:
This was required by JSBasic as I wanted to re-print the BASIC comments as JavaScript comments in the generated JavaScript. Normally, you just want to ignore comments, as defining the fact that, for example, in C# the /* */ comments can appear anywhere would really bloat your grammar definition if you didn't just ignore it! So, if you want to ignore comments, define the comment terminal, and rather than putting it in one of your rules, just add it to the non-terminals list:
base.NonGrammarTerminals.Add(comment);
There are a few little details I've skipped above (see the source code for the full story), but once the Grammar is defined, you can get a string containing the source code and compile it into an abstract syntax tree like so:
string
string sourceCode = "10 print \"hello world\"";
BasicGrammar basicGrammer = new BasicGrammar();
LanguageCompiler compiler = new LanguageCompiler(basicGrammer);
AstNode rootNode = compiler.Parse(sourceCode);
OK, so now we've got a tree in memory. Next step: traversing the tree and generating the code.
There is more than one way to do this, but a rather elegant way to generate code is to create a class extending the Irony-defined AstNode. This means there are classes such as ProgramNode, LineNode, StatementNode, PrintStmtNode, etc. Earlier, we saw that the <Program> node will have a collection of <Line> node objects, which in C# means ProgramNode has the following property:
AstNode
ProgramNode
LineNode
StatementNode
PrintStmtNode
public IEnumerable<LineNode> Lines { get; }
As another example, the IfElseStmtNode would have three properties:
IfElseStmtNode
public ExpressionNode Condition { get; private set};
public StatementNode ThenExpression { get; private set};
public StatementNode ElseExpression { get; private set};
In general, these properties should be set in the constructor of the class. This gets called by Irony, and so you need to copy-and-paste basically the same thing into each node class, with a few different name-changes depending on the number of child nodes:
public class IfElseStmtNode : AstNode
public IfElseStmtNode(AstNodeArgs args)
: base(args)
{
// We ignore the "if", "then" and "else" keywords.
// args.ChildNodes[0] is terminal containing "if"
Condition = (ExpressionNode)args.ChildNodes[1];
// args.ChildNodes[2] is terminal containing "then"
ThenExpression = (StatementNode)args.ChildNodes[3];
// args.ChildNodes[4] is terminal containing "else"
ElseExpression = (StatementNode)args.ChildNodes[5];
// This class assumes that the "else" part is mandatory
}
}
The base constructor for AstNode has a property ChildNodes which it sets to args.ChildNodes, which is useful for traversing the tree of nodes later.
ChildNodes
args.ChildNodes
Note that when you are in a node created using MakePlusRule or MakeStarRule, you cannot set the property in the constructor, as at that point args.ChildNodes is empty (it gets populated during the creation of the tree). Therefore, it is recommended to cast the nodes in the property. For the ProgramNode class, its Lines property looks like this:
Lines
public IEnumerable<LineNode> Lines {
get { // Note: this uses the .NET 3.5 Cast<T> method:
return base.ChildNodes.Cast<LineNode>();
}
}
When Irony is creating the abstract syntax tree, it needs to know that, for example, <LineNode> nodes should be instantiated as instances of LineNode classes. This is achieved by using a different constructor for the LineNode non-terminal above:
<LineNode>
NonTerminal LINE = new NonTerminal("LINE", typeof(LineNode));
This is done for all the other types, and now when the code is compiled, the abstract syntax tree will consist of the AstNode classes that you have defined. Generating script is now relatively easy: you just need to get each class to write itself as the target code. To achieve this, I created the following interface which all my AstNode classes implemented:
public interface IJSBasicNode
{
void GenerateJavaScript(JSContext context, TextWriter textWriter);
}
The JSContext was just to recreate the code indentation, to make the JavaScript look pretty, but isn't really necessary. Now, each node can print the bits it needs to, and call GenerateJavaScript on all its child nodes. Here's what the IfStmtNode's implementation of this method could look like:
JSContext
GenerateJavaScript
IfStmtNode
public override void GenerateJavaScript(JSContext context,
TextWriter textWriter)
{
textWriter.Write("if (");
Condition.GenerateJavaScript(context, textWriter);
textWriter.WriteLine(") {");
ThenStatement.GenerateJavaScript(context, textWriter);
textWriter.WriteLine("} else {");
ElseStatement.GenerateJavaScript(context, textWriter);
textWriter.WriteLine("}");
}
The Condition, ThenStatement and ElseStatement nodes can each be complicated, nested, recursive nodes, but because each node just writes the bits that are particular to itself, each class' GenerateJavaScript method is pretty straight-forward. Once every node has its GenerateJavaScript method defined, everything just automagically works. Just compile some source code, create a text writer, and ask the ProgramNode to generate the JavaScript.
Condition
ThenStatement
ElseStatement
I must admit that I've left quite a few details out here, but hopefully, it's given you a general understanding of how to write a compiler and generate code using Irony. Unfortunately, converting BASIC to JavaScript wasn't so straight-forward for every node type, as the next section describes.
A JavaScript program written to run in a browser and a BASIC program written for a console are two completely different beasts. This section describes the problems which needed to be overcome to complete this project.
In order to handle print, input, and locate statements in the same way as an old computer console, I wrote JSConsole. See that article for more information, but in a nutshell, it allows you to treat an HTML element such as a DIV in the same way as a console, for example:
locate
DIV
console.cls(); // clear the screen
var userName = console.input('What is your name?');
console.println('Hello ' + userName);
JavaScript does not have a "goto" branch statement, which made things very difficult. The idea is somewhat similar to calling functions, and so my first thought was to wrap each line into its own function, and at the end of each function, the next function to call is called. We will use the following program again as an example:
This would look something like this, then:
function line10() {
console.println("Hello world");
line20();
}
function line20() {
line10();
}
If you run this though, the function stack will very quickly overflow, and the program will crash (e.g., after "Hello world" has printed 1000 times, 1000 line10 and 1000 line20 function calls will be on the stack, and the program will quickly die). To solve this problem, instead of having each line call the next function directly, it returns the next-function-to-call to a base function (or null to end). This now looks something like this:
line10
line20
null
function line10() {
console.println("Hello world");
return line20;
}
function line20() {
return line10;
}
function run() {
var currentLine = line10;
while (currentLine != null) {
currentLine = currentLine();
}
}
This will now successfully run without blowing the call-stack, because the stack will always consist of either only run, run and line10, or run and line20. This seems like a good solution, until you run it and the browser stops responding. Welcome to the next problem...
run
The problem is that the little program above will run using 100% of the CPU, and will not release any CPU time to the browser. The browser will become stuck (or the browser will think the JavaScript has got into an infinite loop, and let you stop it). Ideally, we need the run() function to look something like the following:
run()
function run() {
var currentLine = line10;
while (currentLine != null) {
currentLine = currentLine();
// give the browser some CPU
Thread.Sleep(10);
}
}
Of course, there is no Thread.Sleep in JavaScript. The closest is the setTimeout function, which lets you execute code after a certain number of milliseconds. The problem with setTimeout is that it essentially acts as an asynchronous call, and so you cannot return anything from it, and we need to know the next function to call.
Thread.Sleep
setTimeout
The trick is to move everything outside of the while loop to become global variables, and turn the while loop into a function, with the condition becoming an if statement, like so:
var curLine;
function run() {
curLine = line10;
mainLoop();
}
function mainLoop() {
if (curLine == null) {
return;
}
curLine = curLine();
setTimeout('mainLoop()', 10);
}
This will now run the deceptively simple two-line basic program forever, while keeping the browser responsive. BASIC also supports gosub and return statements, which are essentially function calls:
10 gosub 40
20 print "Hello from line 20"
30 end
40 print "Hello from line 40"
50 return
Are you ready for the output?
Hello from line 40
Hello from line 20
This was a lot more simple to implement than the goto statement, as it just gets converted to simple function calls. Note that there is a limitation to this technique of returning function pointers to the main loop: you cannot have goto statements within a sub-routine.
The main-loop pattern is very useful for any situation where you need to execute a loop in JavaScript which will last more than a few seconds (such as animations). At a more general level, whenever you have a JavaScript program such as:
function animatePage() {
// initialisation
while (someCondition) {
// ...
// code to execute
// ...
}
// cleanup code
}
You can convert it to this:
function initialse() {
// initialisation
mainLoop();
}
function mainLoop() {
if (!condition) {
cleanUp();
return;
}
// ...
// code to execute
// ...
setTimeout(mainLoop, 10);
}
function cleanUp() {
// cleanup code
}
The mainLoop() code can be found in JSBasic.js, and it actually has a little more code for error handling, and a couple of global functions for strings, and the implementation of inkey$, which simply listens for key-press events on the window and saves the last-pressed key.
mainLoop()
My goal in this project was both to create a BASIC compiler and evaluate the usefulness of Irony. Using Irony was a real pleasure, and I wouldn't hesitate to drop the Irony DLL into a project and define a little grammar for a domain-specific language, should the need ever arise.
I hope that this project may be useful to others as an example of how to create a compiler in .NET. Thanks for reading!
A big thank you to the creator of Irony, Roman Ivantsov, who made major improvements to my initial implementation of JSBasic and answered my many queries. His help has improved the usefulness of this article a great deal. Have a read of the CodeProject Irony article and get the latest release of Irony from CodePlex.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
10 a$=3
20 result$=a$+a$
20 print a$ "+" a$ "=" result$
10 PI=3.14
20 INPUT "WHAT IS THE RADIUS"; R
30 A=PI*R^2
40 PRINT "THE AREA OF THE CIRCLE IS"; A
50 PRINT
60 GOTO 20
10 PI$=3.14
20 INPUT "WHAT IS THE RADIUS" R$
30 A$=PI$*R$*R$
40 PRINT "THE AREA OF THE CIRCLE IS " A$
50 PRINT
60 GOTO 20
COMPILE ERROR
: Invalid character: ' ':
10 DEFSNG A-Z
return
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/25069/JSBasic-A-BASIC-to-JavaScript-Compiler?msg=2528354 | CC-MAIN-2018-51 | refinedweb | 4,168 | 56.29 |
John Hunter wrote:
>>
> they will not be transparent. If you need transparent
> masked regions, then try pcolor instead of imshow. Pcolor
> plots nothing at all in masked cells. Pcolormesh, on the
> other hand, is like imshow in plotting the assigned bad
> color and in using a single alpha for everything.
>> I'm confused about the comments about alpha not working on
>> imshow -- can you elaborate. On the agg backend at least, the
>> alpha channel is respected in imshow, eg
>> examples/layer_images.py. Is there a reason it does not (or
>> should not) work in the masked example?
> John,
> I don't know why it doesn't work; I know only that in my
> example, it doesn't work as I perhaps naively think it
> should. My interpretation of alpha is that if alpha is zero
> in any colored region, and if nothing else is drawn on top,
> then the background should show through; that is, the r,g,b
> values in the r,g,b,a tuple for a region should have no
> effect if a is zero. If you uncomment
> #cmap.set_bad((1,1,1,0) in my example, you will find that
> the masked region is white; and if you change the rgb part
> of that tuple, it takes on that color, regardless of alpha.
I'm not sure what is going on in your example, but this test case
shows that the alpha channel is respected. I made a red RGBA array
and set the alpha channel for the center to be transparent and it
behaves as expected: you can see the line through the transparent
region of the rectangle and the axes bgcolor shows through. I had to
make a small change to svn to make this work because the image wasn't
respecting the zorder (revision 2495). So the bug you are
experiencing is likely to be in the front-end code.
from pylab import figure, show, nx
# a red rectangle with a transparent center
X = nx.ones((20,20,4), nx.Float)
X[:,:,1] = 0
X[:,:,2] = 0
X[5:15,5:15,-1] = 0
fig = figure()
ax = fig.add_subplot(111)
l, = ax.plot(nx.arange(20))
l.set_zorder(2)
im = ax.imshow(X)
im.set_zorder(3) # put the image over the line
show() | https://discourse.matplotlib.org/t/plotting-an-array-of-enumerated-values/5379 | CC-MAIN-2021-43 | refinedweb | 380 | 71.24 |
.
- Build a NFA from the regex. "Simulate" the NFA to recognize input.
- Build a NFA from the regex. Convert the NFA to a DFA. Simulate the DFA to recognize input.
- Build a DFA directly from the regex. Simulate the DFA to recognize input.
-); characters may be concatenated, like this: abb; alternation a|b meaning a or b; the star * meaning "zero or more of the previous"; and grouping ().
What follows is Thompson's construction - an algorithm that builds a NFA from a regex. The algorithm is syntax directed, in the sense that it uses the syntactic structure of the regex to guide the construction process.
The beauty and simplicity of this algorithm is in its modularity. First, construction of trivial building blocks is presented.
For eps, construct the NFA:
Here i is a new start state and f is a new accepting state. It's easy to see that this NFA recognizes the regex eps.
For some a from the input alphabet, construct the NFA:
Again, it's easy to see that this NFA recognizes the trivial regex a.es.
For the regular expression s|t, construct the following composite NFA N(s|t):
The eps transitions into and out of the simple NFAs assure that we can be in either of them when the match starts. Any path from the initial to the final state must pass through either N(s) or N(t) exclusively. Thus we see that this composite NFA recognizes s|t.
For the regular expression st (s and then t), construct the composite NFA NFA(st):
The composite NFA will have the start state of N(s) and the end state of N(t). The accepting (final) state of N(s) is merged with the start state of N(t). Therefore, all paths going through the composite NFA must go through N(s) and then through N(t), so it indeed recognizes N(st).
For the regular expression s*, construct the composite NFA N(s*):
Note how simply the notion of "zero or more" is represented by this NFA. From the initial state, either "nothing" is accepted with the eps transition to the final state or the "more than" is accepted by going into N(s). The eps transition inside N(s) denotes that N(s) can appear again and again.
For the sake of completeness: a parenthesized regular expression (s) has the same NFA as s, namely N(s)..
Let's see how this NFA was constructed:
First, it's easy to note that states 2 and 3 are the basic NFA for the regex a.
Similarly, states 4 and 5 are the NFA for b.
Can you see the a|b? It's clearly states 1,2,3,4,5,6 (without the eps transition from 6 to 1).
Parenthesizing (a|b) doesn't change the NFA
The addition of states 0 and 7, plus the eps transition from 6 to 1 is the star on NFA(a|b), namely states 0 - 7 represent (a|b)*.
The rest is easy. States 8 - 10 are simply the concatenation of (a|b)* with abb.
Try to run a few strings through this NFA until you convince yourself that it indeed recognizes (a|b)*abb. Recall that a NFA recognizes a string when the string's characters can be spelled out on some path from the initial to the final state.
Implementation of a simple NFA
At last, let's get our hands on some code. Now that we know the theory behind NFA-from-regex construction, it's clear that we will be doing some NFA manipulations. But how will we represent NFAs in code?
NFA is not a trivial concept, and there are full-blown implementations for general NFAs that are far too complex for our needs. My plan is to code as simple an implementation as possible - one that will be enough for our needs and nothing more. After all, the regex recognizing engine is not supposed to expose its NFAs to the outer world - for us a NFA is only an intermediate representation of a regular expression, which we want to simulate in order to "accept" or "reject" input strings.
My philosophy in such cases is the KISS principle: "Keep It Simple, Stupid". The goal is first to code the simplest implementation that fits my needs. Later, I have no problem refactoring parts of the code and inserting new features, on an as-needed basis.
A very simple NFA implementation is now presented. We will build upon it later, and for now it is enough just to demonstrate the concept. Here is the interface:
#ifndef NFA_H #define NFA_H #include <vector> using namespace std; // Convenience types and constants typedef unsigned state; typedef char input; enum {EPS = -1, NONE = 0}; class NFA { public: // Constructed with the NFA size (amount of // states), the initial state and the final state NFA(unsigned size_, state initial_, state final_); // Adds a transition between two states void add_trans(state from, state to, input in); // Prints out the NFA void show(void); private: bool is_legal_state(state s); state initial; state final; unsigned size; vector<vector<input> > trans_table; }; #endif // NFA_H
As promised, the public interface is kept trivial, for now. All we can do is create a NFA object (specifying the amount of states, the start state and the final state), add transitions to it, and print it out. This NFA will then consist of states 0 .. size-1, with the given transitions (which are single characters). Note that we use only one final state for now, for the sake of simplicity. Should we need more than one, it won't be difficult to add.
A word about the implementation: I don't want to go deep into graph-theory here (if you're not familiar with the basics, a web search can be very helpful), but basically a NFA is a directed graph. It is most common to implement a graph using either a matrix or an array of linked lists. The first implementation is more speed efficient, the second is better space-wise. For our NFA I picked the matrix (vector of vectors), mostly because (in my opinion) it is simpler.
The classic matrix implementation of a graph has 1 in cell (i, j) when there is an edge between vertex i and vertex j, and 0 otherwise.
A NFA is a special graph, in the sense that we are interested not only in whether there is an edge, but also in the condition for the edge (the input that leads from one state to another in FSM terminology). Thus, our matrix holds inputs (a nickname for chars, as you can see). So, for instance, 'c' in trans_table[i][j] means that the input 'c' leads from state i to state j in our NFA.
Here is the implementation of the NFA class:
#include <iostream> #include <string> #include <cassert> #include <cstdlib> #include "nfa.h" using namespace std; NFA::NFA(unsigned size_, state initial_, state final_) { size = size_; initial = initial_; final = final_; assert(is_legal_state(initial)); assert(is_legal_state(final)); // Initialize trans_table with an "empty graph", // no transitions between its states for (unsigned i = 0; i < size; ++i) { vector<input> v; for (unsigned j = 0; j < size; ++j) { v.push_back(NONE); } trans_table.push_back(v); } } bool NFA::is_legal_state(state s) { // We have 'size' states, numbered 0 to size-1 if (s < 0 || s >= size) return false; return true; } void NFA::add_trans(state from, state to, input in) { assert(is_legal_state(from)); assert(is_legal_state(to)); trans_table[from][to] = in; } void NFA::show(void) { cout < "This NFA has " < size < " states: 0 - " < size - 1 < endl; cout < "The initial state is " < initial < endl; cout < "The final state is " < final < endl < endl; for (unsigned from = 0; from < size; ++from) { for (unsigned to = 0; to < size; ++to) { input in = trans_table[from][to]; if (in != NONE) { cout < "Transition from " < from < " to " < to < " on input "; if (in == EPS) { cout < "EPS" < endl; } else { cout < in < endl; } } } } }
The code is very simple, so you should have no problem understanding what every part of it does. To demonstrate, let's see how we would use this class to create the NFA for (a|b)*abb - the one we built using Thompson's construction earlier (only the driver code is included):
#include "nfa.h" int main() { NFA n(11, 0, 10); n.add_trans(0, 1, EPS); n.add_trans(0, 7, EPS); n.add_trans(1, 2, EPS); n.add_trans(1, 4, EPS); n.add_trans(2, 3, 'a'); n.add_trans(4, 5, 'b'); n.add_trans(3, 6, EPS); n.add_trans(5, 6, EPS); n.add_trans(6, 1, EPS); n.add_trans(6, 7, EPS); n.add_trans(7, 8, 'a'); n.add_trans(8, 9, 'b'); n.add_trans(9, 10, 'b'); n.show(); return 0; }
This would (quite expectedly) result in the following output:
As I mentioned earlier: as trivial as this implementation may seem at the moment, it is the basis we will build upon later. Presenting it in small pieces will, hopefully, make the learning curve of this difficult subject less steep for you.
Implementing Thompson's Construction
Thompson's Construction.
Some changes to the NFA class
The previous[i] :.
Implementing concatenation: ab
Here is the diagram of NFA concatenation from earlier:.
Implementing star: a*
Here is the diagram of the NFA for a* from earlier: article. from the source code archive on the book's companion website. It has a lot.,.
To remind you, this is the algorithm for DFA simulation :
Let's now finally learn how the DFA is built.
The.
-. eps-closure(move(A, a)). D.trans_table(A, a) = B. D.trans_table. | http://www.gamedev.net/page/resources/_/technical/general-programming/finite-state-machines-and-regular-expressions-r3176?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2016-30 | refinedweb | 1,596 | 71.55 |
React Hooks: What's going to happen to react context?
December 17, 2018
Photo by Joel Fulgencio on Unsplash
With the cool new stuff coming to React (Hooks/Suspense), what's going to happen to the context api?
Earlier this year, the React team introduced the first official context API.
I blogged about that new API
and people got sufficiently and reasonably hyped.
One common complaint that I knew people were going to have when applying it
practically was the fact that the context consumer is a render-prop based API.
This can lead to a lot of nesting when you need to consume multiple contexts and
other render-prop based APIs as well (for logic reuse). So I addressed that in
the blog post by suggesting that you could combine all of the render-prop based
APIs into a single function component and consume that:
1const ThemeContext = React.createContext('light')
2class ThemeProvider extends React.Component {
3
4}
5const ThemeConsumer = ThemeContext.Consumer
6const LanguageContext = React.createContext('en')
7class LanguageProvider extends React.Component {
8
9}
10const LanguageConsumer = LanguageContext.Consumer
11
12function AppProviders({children}) {
13 return (
14 <LanguageProvider>
15 <ThemeProvider>{children}</ThemeProvider>
16 </LanguageProvider>
17 )
18}
19
20function ThemeAndLanguageConsumer({children}) {
21 return (
22 <LanguageConsumer>
23 {language => (
24 <ThemeConsumer>{theme => children({language, theme})}</ThemeConsumer>
25 )}
26 </LanguageConsumer>
27 )
28}
29
30function App() {
31 return (
32 <AppProviders>
33 <ThemeAndLanguageConsumer>
34 {({theme, language}) => (
35 <div>
36 {theme} and {language}
37 </div>
38 )}
39 </ThemeAndLanguageConsumer>
40 </AppProviders>
41 )
42}
As much as this solution works thanks to the composability of React components,
I'm still not super thrilled with it. And I'm not the only one:
We've heard feedback that adopting the new render prop API can be difficult
in class components. So we've added a convenience API to >
consume a context value from within a class component. — React v16.6.0: lazy, memo and contextType
This new convenience API means that if you use a class component and you're only
consuming one context, you can simply define a static property called
contextType and assign it to the context you want to consume, then you can
access the context via
this.context. It's pretty neat and a nice trick for
common cases where you only consume a single context.
I've used this convenience API and I love it. But I'm even more excited about
the implications that React Hooks have for the future of React context. Let's
rewrite what we have above with the upcoming (ALPHA!)
useContext hook:
1const ThemeContext = React.createContext('light')
2class ThemeProvider extends React.Component {
3
4}
5const LanguageContext = React.createContext('en')
6class LanguageProvider extends React.Component {
7
8}
9
10function AppProviders({children}) {
11 return (
12 <LanguageProvider>
13 <ThemeProvider>{children}</ThemeProvider>
14 </LanguageProvider>
15 )
16}
17
18function App() {
19 const theme = useContext(ThemeContext)
20 const language = useContext(LanguageContext)
21 return (
22 <div>
23 {theme} and {language}
24 </div>
25 )
26}
27
28ReactDOM.render(
29 <AppProviders>
30 <App />
31 </AppProviders>,
32 document.getElementById('root'),
33)
WOWZA! As powerful as the render-prop based consumers are, this is even easier
to read, understand, refactor, and maintain! And it's not just less code for
less code's sake. Besides, often when we reduce the amount of code we also
reduce the clarity of communication that code can give to us. But in this case,
it's less code and it's easier to understand. I think that's a big win and a
huge feature of the new hooks API.
Another big feature of React hooks is the fact that it's completely opt-in and
backward compatible. I'm given such a huge amount of comfort knowing that
Facebook can't make decisions that will cause grief to the engineers who are
working on the oldest and one of the largest React codebases in the world. The
fact that React has incrementally taken us to this new world of hooks is just
fantastic. Thanks React team! Looking forward to the official release!
Conclusion
One of the coolest things about React is that it allows us to focus on solving
real-world problems without normally having to get too close to the
implementation of things. It's been a long time since I had to deal with
cross-browser or performance issues with any degree of regularity. And now React
is taking it even further and simplifying things so the code that I do write is
simpler to read, understand refactor, and maintain. I just love that. Makes me
wonder if there may be some things I could do about my code to simplify things
for other people as well 🤔.
Until next time! Good luck! 👋
Things to not miss:
- Simplify React Apps with React Hooks and Suspense — My
new egghead course... of course!
- Shurlan — I WON NANOWRIMO
THIS YEAR! That means that I successfully wrote 50,000 words of a novel in the
month of November (for perspective, Harry Potter book 1 is 76k words). It was
a wild month, and it was tons of fun. And you can read what I ended up with.
It's a fantasy novel about a utopian world where things start to go bad and a
14-year-old girl is tasked with stopping a rebellion from inadvertently
destroying the city. I think you'll love the characters, plot, and magic
system :)
- React 16.x Roadmap — Tl;DR:
React 16.6: Suspense for Code Splitting (already shipped), React 16.7: React
Hooks (~Q1 2019), React 16.8: Concurrent Mode (~Q2 2019), React 16.9: Suspense
for Data Fetching (~mid 2019)
- Modern React Workshop: Hooks & Suspense — a
recording of a livestream I did last week at PayPal.
Here's the workshop repo and
here's the part 2. | http://brianyang.com/react-hooks-whats-going-to-happen-to-react-context/ | CC-MAIN-2019-39 | refinedweb | 956 | 55.44 |
Write a Python function that will take a the list of 100 random integers between 0 and 1000 and return the maximum value. (Note: there is a builtin function named max but pretend you cannot use it.)
Here's what I tried:
import random
list = []
for i in range(100):
list.append(random.randint(0,1000))
def max(list):
#sort list from least to greatest
answer = list.sort()
#get last item in list (max value)
list.pop()
return max
print (max(list))
ParseError: bad input on line 12
list.pop()
my_list.sort() sorts the list itself. If you want to store your sorted list in
answer, you should use:
answer = sorted(my_list)
You can also use a list comprehension to generate your first list as follows:
>>> random >>> >>> my_list = [random.randint(0, 1000) for i in range(100)] >>> answer = sorted(my_list) >>> answer.pop() 996 # This should be different each time
Now, your function can be:
def max(my_list): # sort list from least to greatest answer = sorted(my_list) # get last item in list (max value) max = answer.pop() return max
If you still want to use the same list, you can do:
my_list.sort() max = my_list.pop()
Note that: I prefer to call the list
my_list because
list is a python keyword. | https://codedump.io/share/U61fwJodBgzO/1/create-list-of-100-random-integers-return-max-value | CC-MAIN-2016-50 | refinedweb | 210 | 73.88 |
Phonon calculations¶
Module for calculating vibrational normal modes for periodic systems using the so-called small displacement method (see e.g. [Alfe]). So far, space-group symmetries are not exploited to reduce the number of atomic displacements that must be calculated and subsequent symmetrization of the force constants.
For polar materials the dynamical matrix at the zone center acquires a non-analytical contribution that accounts for the LO-TO splitting. This contribution requires additional functionality to evaluate and is not included in the present implementation. Its implementation in conjunction with the small displacement method is described in [Wang].
Example¶
Simple example showing how to calculate the phonon dispersion for bulk aluminum using a 7x7x7 supercell within effective medium theory:
from ase.build import bulk from ase.calculators.emt import EMT from ase.phonons import Phonons # Setup crystal and EMT calculator atoms = bulk('Al', 'fcc', a=4.05) # Phonon calculator N = 7 ph = Phonons(atoms, EMT(), supercell=(N, N, N), delta=0.05) ph.run() # Read forces and assemble the dynamical matrix ph.read(acoustic=True) ph.clean() path = atoms.cell.bandpath('GXULGK', npoints=100) bs = ph.get_band_structure(path) dos = ph.get_dos(kpts=(20, 20, 20)).sample_grid(npts=100, width=1e-3) # Plot the band structure and DOS: import matplotlib.pyplot as plt fig = plt.figure(1, figsize=(7, 4)) ax = fig.add_axes([.12, .07, .67, .85]) emax = 0.035 bs.plot(ax=ax, emin=0.0, emax=emax) dosax = fig.add_axes([.8, .07, .17, .85]) dosax.fill_between(dos.weights[0], dos.energy, y2=0, color='grey', edgecolor='k', lw=1) dosax.set_ylim(0, emax) dosax.set_yticks([]) dosax.set_xticks([]) dosax.set_xlabel("DOS", fontsize=18) fig.savefig('Al_phonon.png')
Mode inspection:
from ase.io.trajectory import Trajectory from ase.io import write # Write modes for specific q-vector to trajectory files: L = path.special_points['L'] ph.write_modes([l / 2 for l in L], branches=[2], repeat=(8, 8, 8), kT=3e-4, center=True) # Generate gif animation: with Trajectory('phonon.mode.2.traj', 'r') as traj: write('Al_mode.gif', traj, interval=50, rotation='-36x,26.5y,-25z')
List of all Methods¶
- class
ase.phonons.
Phonons(*args, **kwargs)[source]¶
Class for calculating phonon modes using the finite displacement method.
The matrix of force constants is calculated from the finite difference approximation to the first-order derivative of the atomic forces as:
2 nbj nbj nbj d E F- - F+ C = ------------ ~ ------------- , mai dR dR 2 * delta mai nbj
where F+/F- denotes the force in direction j on atom nb when atom ma is displaced in direction +i/-i. The force constants are related by various symmetry relations. From the definition of the force constants it must be symmetric in the three indices mai:
nbj mai bj ai C = C -> C (R ) = C (-R ) . mai nbj ai n bj n
As the force constants can only depend on the difference between the m and n indices, this symmetry is more conveniently expressed as shown on the right hand-side.
The acoustic sum-rule:
_ _ aj \ bj C (R ) = - ) C (R ) ai 0 /__ ai m (m, b) != (0, a)
Ordering of the unit cells illustrated here for a 1-dimensional system (in case
refcell=Nonein constructor!):
m = 0 m = 1 m = -2 m = -1 ----------------------------------------------------- | | | | | | * b | * | * | * | | | | | | | * a | * | * | * | | | | | | -----------------------------------------------------
Example:
>>> from ase.build import bulk >>> from ase.phonons import Phonons >>> from gpaw import GPAW, FermiDirac >>> atoms = bulk('Si', 'diamond', a=5.4) >>> calc = GPAW(kpts=(5, 5, 5), h=0.2, occupations=FermiDirac(0.)) >>> ph = Phonons(atoms, calc, supercell=(5, 5, 5)) >>> ph.run() >>> ph.read(method='frederiksen', acoustic=True)
Initialize with base class args and kwargs.
apply_cutoff(D_N, r_c)[source]¶
Zero elements for interatomic distances larger than the cutoff.
Parameters:
- D_N: ndarray
Dynamical/force constant matrix.
- r_c: float
Cutoff in Angstrom.
band_structure(path_kc, modes=False, born=False, verbose=True)[source]¶
Calculate phonon dispersion along a path in the Brillouin zone.
The dynamical matrix at arbitrary q-vectors is obtained by Fourier transforming the real-space force constants. In case of negative eigenvalues (squared frequency), the corresponding negative frequency is returned.
Frequencies and modes are in units of eV and Ang/sqrt(amu), respectively.
Parameters:
- path_kc: ndarray
List of k-point coordinates (in units of the reciprocal lattice vectors) specifying the path in the Brillouin zone for which the dynamical matrix will be calculated.
- modes: bool
Returns both frequencies and modes when True.
- born: bool
Include non-analytic part given by the Born effective charges and the static part of the high-frequency dielectric tensor. This contribution to the force constant accounts for the splitting between the LO and TO branches for q -> 0.
- verbose: bool
Print warnings when imaginary frequncies are detected.
dos(kpts=(10, 10, 10), npts=1000, delta=0.001, indices=None)[source]¶
Calculate phonon dos as a function of energy.
Parameters:
- qpts: tuple
Shape of Monkhorst-Pack grid for sampling the Brillouin zone.
- npts: int
Number of energy points.
- delta: float
Broadening of Lorentzian line-shape in eV.
- indices: list
If indices is not None, the atomic-partial dos for the specified atoms will be calculated.
read(method='Frederiksen', symmetrize=3, acoustic=True, cutoff=None, born=False, **kwargs)[source]¶
Read forces from pickle files and calculate force constants.
Extra keyword arguments will be passed to
read_born_charges.
Parameters:
- method: str
Specify method for evaluating the atomic forces.
- symmetrize: int
Symmetrize force constants (see doc string at top) when
symmetrize != 0(default: 3). Since restoring the acoustic sum rule breaks the symmetry, the symmetrization must be repeated a few times until the changes a insignificant. The integer gives the number of iterations that will be carried out.
- acoustic: bool
Restore the acoustic sum rule on the force constants.
- cutoff: None or float
Zero elements in the dynamical matrix between atoms with an interatomic distance larger than the cutoff.
- born: bool
Read in Born effective charge tensor and high-frequency static dielelctric tensor from file.
read_born_charges(name=None, neutrality=True)[source]¶
Read Born charges and dieletric tensor from pickle file.
The charge neutrality sum-rule:
_ _ \ a ) Z = 0 /__ ij a
Parameters:
- neutrality: bool
Restore charge neutrality condition on calculated Born effective charges.
write_modes(q_c, branches=0, kT=0.02585199101165164, born=False, repeat=(1, 1, 1), nimages=30, center=False)[source]¶
Write modes to trajectory file.
Parameters:
- q_c: ndarray
q-vector of the modes.
- branches: int or list
Branch index of modes.
- kT: float
Temperature in units of eV. Determines the amplitude of the atomic displacements in the modes.
- born: bool
Include non-analytic contribution to the force constants at q -> 0.
- repeat: tuple
Repeat atoms (l, m, n) times in the directions of the lattice vectors. Displacements of atoms in repeated cells carry a Bloch phase factor given by the q-vector and the cell lattice vector R_m.
- nimages: int
Number of images in an oscillation.
- center: bool
Center atoms in unit cell if True (default: False). | https://wiki.fysik.dtu.dk/ase/dev/ase/phonons.html | CC-MAIN-2020-05 | refinedweb | 1,145 | 51.14 |
Recursive nested elements woe
Discussion in 'XML' started by GR33DY, Jun permissions woe - even when writing to a local drive that is mapped, Sep 9, 2005, in forum: ASP .Net
- Replies:
- 3
- Views:
- 536
- Paul Clement
- Sep 12, 2005
XSL Recursive nested elements woeGR33DY, Jun 24, 2004, in forum: XML
- Replies:
- 0
- Views:
- 600
- GR33DY
- Jun 24, 2004
re.sub replacement text \-escapes woeAlexander Schmolck, Feb 13, 2004, in forum: Python
- Replies:
- 4
- Views:
- 386
- Alexander Schmolck
- Feb 14, 2004
import woe, May 19, 2006, in forum: Python
- Replies:
- 4
- Views:
- 363
- Terry Hancock
- May 19, 2006
PreRenderComplete event woeHan, Dec 22, 2006, in forum: ASP .Net
- Replies:
- 3
- Views:
- 3,266
- Han
- Dec 23, 2006 | http://www.thecodingforums.com/threads/recursive-nested-elements-woe.167358/ | CC-MAIN-2014-52 | refinedweb | 116 | 67.93 |
.sbt Build Definition¶
This page describes sbt build definitions, including some "theory" and the syntax of build.sbt. It assumes you know how to use sbt and have read the previous pages in the Getting Started Guide.
.sbt vs. .scala Build Definition¶
An sbt build definition can contain files ending in .sbt, located in the base directory of a project, and files ending in .scala, located in the project/ subdirectory of the base directory.
This page discusses .sbt files, which are suitable for most cases. The .scala files are typically used for sharing code across .sbt files and for more complex build definitions. See .scala build definition (later in Getting Started) for more on .scala files.
What is a Build Definition?¶
After examining a project and processing build definition files, sbt ends up with an immutable map (set of key-value pairs) describing the build.
For example, one key is name and it maps to a string value, the name of your project.
Build definition files do not affect sbt's map directly.
Instead, the build definition creates a huge list of objects with type Setting[T] where T is the type of the value in the map. A Setting describes a transformation to the map, such as adding a new key-value pair or appending to an existing value. (In the spirit of functional programming with immutable data structures and values, a transformation returns a new map - it does not update the old map in-place.)
In build.sbt, you might create a Setting[String] for the name of your project like this:
name := "hello"
This Setting[String] transforms the map by adding (or replacing) the name key, giving it the value "hello". The transformed map becomes sbt's new map.
To create the map, sbt first sorts the list of settings so that all changes to the same key are made together, and values that depend on other keys are processed after the keys they depend on. Then sbt walks over the sorted list of Settings and applies each one to the map in turn.
Summary: A build definition defines a list of Setting[T], where a Setting[T] is a transformation affecting sbt's map of key-value pairs and T is the type of each value.
How build.sbt defines settings¶
build.sbt defines a Seq[Setting[_]]; it's a list of Scala expressions, separated by blank lines, where each one becomes one element in the sequence. If you put Seq( in front of the .sbt file and ) at the end and replace the blank lines with commas, you'd be looking at the equivalent .scala code.
Here's an example:
name := "hello" version := "1.0" scalaVersion := "2.10.3"
Each Setting is defined with a Scala expression. The expressions in build.sbt are independent of one another, and they are expressions, rather than complete Scala statements. These expressions may be interspersed with vals, lazy vals, and defs. Top-level objects and classes are not allowed in build.sbt. Those should go in the project/ directory as full Scala source files.
On the left, name, version, and scalaVersion are keys. A key is an instance of SettingKey[T], TaskKey[T], or InputKey[T] where T is the expected value type. The kinds of key are explained below.
Keys have a method called :=, which returns a Setting[T]. You could use a Java-like syntax to call the method:
name.:=("hello")
But Scala allows name := "hello" instead (in Scala, a single-parameter method can use either syntax).
The := method on key name returns a Setting, specifically a Setting[String]. String also appears in the type of name itself, which is SettingKey[String]. In this case, the returned Setting[String] is a transformation to add or replace the name key in sbt's map, giving it the value "hello".
If you use the wrong value type, the build definition will not compile:
name := 42 // will not compile
Settings must be separated by blank lines¶
You can't write a build.sbt like this:
// will NOT compile, no blank lines name := "hello" version := "1.0" scalaVersion := "2.10.3"
sbt needs some kind of delimiter to tell where one expression stops and the next begins.
.sbt files contain a list of Scala expressions, not a single Scala program. These expressions have to be split up and passed to the compiler individually.
Keys¶
Types¶
There are three flavors.
Built-in Keys¶
The built-in keys are just fields in an object called Keys. A build.sbt implicitly has an import sbt.Keys._, so sbt.Keys.name can be referred to as name.
Custom Keys¶. vals and defs must be separated from settings by blank lines.
Note
Typically, lazy vals are used instead of vals to avoid initialization order problems.
Task v. Setting keys¶ map describing the project.
Defining tasks and settings¶,
hello := { println("Hello!") }
We already saw an example of defining settings when we defined the project's name,
name := "hello"
Types for tasks and settings¶ more about settings, coming up soon.
Keys in sbt interactive mode¶
In sbt's interactive mode,.
Imports in build.sbt¶
You can place import statements at the top of build.sbt; they need not be separated by blank lines.
There are some implied default imports, as follows:
import sbt._ import Process._ import Keys._
(In addition, if you have .scala files, the contents of any Build or Plugin objects in those files will be imported. More on that when we get to .scala build definitions.)
Adding library dependencies¶
To depend on third-party libraries, there are two options. The first is to drop jars in lib/ (unmanaged dependencies) and the other is to add managed dependencies, which will look like this in build.sbt:
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"
This is how you add a managed dependency on the Apache Derby library, version 10.4.1.3.
The libraryDependencies key involves two complexities: += rather than :=, and the % method. += appends to the key's old value rather than replacing it, this is explained in more about settings. The % method is used to construct an Ivy module ID from strings, explained in library dependencies.
We'll skip over the details of library dependencies until later in the Getting Started Guide. There's a whole page covering it later on.
Move on to learn about scopes.
Contents
- .sbt Build Definition
- .sbt vs. .scala Build Definition
- What is a Build Definition?
- How build.sbt defines settings
- Settings must be separated by blank lines
- Keys
- Defining tasks and settings
- Keys in sbt interactive mode
- Imports in build.sbt
- Adding library dependencies | http://www.scala-sbt.org/release/docs/Getting-Started/Basic-Def.html | CC-MAIN-2014-15 | refinedweb | 1,110 | 76.01 |
Earlier this week I wrote a post detailing the application framework that I’ve been working on, which I’m calling AgFx. I wanted that post to be a bit of an introductory overview,. Now I’m going to dig into detail a bit more, as well as show a bit about how you can use the free Windows Phone 7 Developer tools to quickly create a great application. AgFx does a lot, and one of the apps I used to generate requirements and testing, is Jeff Wilcox’s gorgeous new foursquare™ client: 4th & Mayor, which is built on AgFx.
Jeff and I, along with some others, have written several apps on top of AgFx over the last few months, and have really sharpened it to be exactly what you need when building a connected phone application. And the steps that you use to do it are almost always the same, and I’ll detail them here.
If you’re thinking about writing an app that talks to a web service of any kind, go ahead and follow along with that example in your head.
Below is pretty long and comprehensive, but it is not only a basic tutorial, it is also a summary of some of the broader set of features in AgFx. So it may give you some ideas about how to use AgFx for your next Phone application, or even your current one! We have moved several existing data-connected applications to AgFx and in every case the code becomes smaller, simpler, and better performing.
The first step is to go take a look at the data that the web service offers and how you can process it. In this example case we’ll be using the NOAA National Weather Service REST Web Service.
So we go to the website above an look at what the web service offers. Looking through at their service operations, I see this one:
Which sounds about right. Give it a zip code, get back weather data. And there is an example for a query there as well:
We like things that we can easily build a query string from. AgFx supports more complex operations, but these GET queries are a slam dunk. If you click on that link, you’ll get a bunch of XML data. It’s a little hairy, but the data is there so all we need to do is parse it, which is easy enough. OK, weather data: check.
One thing it does not give us, though, is the name of the city for the zip code. It gives us the latitude and longitude, but that’s not that helpful really. We want to be able to display the name of the city for a given zip code, and we can get that from WebServiceX.NET, here.
WebServiceX.net provides an API for converting a zip code into a city and state name.
Basically we call this:, and we get this:
<?xml version="1.0" encoding="utf-8"?>
<NewDataSet>
<Table>
<CITY>Redmond</CITY>
<STATE>WA</STATE>
<ZIP>98052</ZIP>
<AREA_CODE>425</AREA_CODE>
<TIME_ZONE>P</TIME_ZONE>
</Table>
</NewDataSet>
Perfect! City name: check.
Now it’s time to start writing the code to talk to the service.
The first thing to think about is the shape of the objects coming back from the service, and what is their unique identifier. In database terms, this would be the primary key. Given that key, I expect to get the correct data back, and that data should be discreet from calls using different keys.
In the weather case, the weather data is basically a set of weather forecast objects. They always come together as a group and this app will never talk to them individually – it’s the collection of objects that makes up the data we care about. We don’t care about addressing the individual forecasts, so the zip code works great as the identifier for our forecast.
For the zip code information, it’s even easier, the data is tied directly to the zip code, so that’s also the unique identifier here.
Before we go on, let’s think of some other examples.
Imagine you were talking to the Flickr service. Flickr has identifiers on almost every object. For example the identifier for my user profile on Flickr is “78467353@N00”. If I was to write a model for my Flickr user profile, I’d use that as my identifier. The same goes for a model that represents my PhotoStream, because that’s tied to my user account. But a model for an individual album or photo would use the id associated with that item. And so on.
This identifer is how you’ll ask for data from AgFx, so that’s why it’s important, we’ll see more about this later:
DataManager.Current.Load<ZipCodeVm>(txtZipCode.Text);
DataManager.Current.Load<ZipCodeVm>(txtZipCode.Text);
So, back to LoadContext what’s going on with that guy? Good question.
The LoadContext is where I store data that will be used to make a request for my data. For example, if I want to request a photos on Flickr from a photo album, I may want to pass in paging information, or information about how many items to return. The LoadContext is where I would put that information, the reason why will be clear later. LoadContext really exists for more sophisticated cases and in the case of simple requests, like those for the weather app, you won’t need to bother with it much. But you’ll see it in the API so you should understand what it’s for.
Even in simple cases, it also gives you a chance to make things strongly typed.
If you look at the base LoadContext:
public class LoadContext { public object Identity { get; } public string UniqueKey {get;} public LoadContext(object identity) {} // .. other stuff }
you’ll see the Identity property there is of type object. So in this case, we’re just going to derive and give ourselves a nice strongly-typed property and constructor:
public class ZipCodeLoadContext : LoadContext { public string ZipCode { get { return (string)Identity; } } public ZipCodeLoadContext(string zipcode) : base(zipcode) { } }
One important thing to note here is that the parameter on the constructor should match the type of object you’ll be passing in. In this case, the zip is going to be a string, so there should be a 1-parameter ctor that takes a string. AgFx will look for this to automatically create the LoadContext.
We can reuse this LoadContext for the zip code service as well. Handy!
Now that we’ve got the LoadContext, we can create our ViewModel. ViewModels should derive from ModelItemBase<T>, which you’ll want to be familiar with. So let’s look at some of the members you’ll be using:
ModelItemBase derives from NotifyPropertyChangedBase, which is a default implementation of INotifyPropertyChanged and also adds some helpful functions:
Okay, that’s it for ModelItemBase. Let’s create our ViewModel for the weather forecast:
First, let’s create the constructors:
public class WeatherForecastVm : ModelItemBase<ZipCodeLoadContext> {
public WeatherForecastVm() { }
public WeatherForecastVm(string zipcode):
base(new ZipCodeLoadContext(zipcode)) { }
//...
}
This is pretty simple, but it’s important to note two things:
Okay, now we add some properties to the ViewModel.
Most properties look like this:
private string _city;
public string City
{
get
{
return _city;
}
set
{
if (_city != value)
{
_city = value;
RaisePropertyChanged("City");
}
}
}
Note the “RaisePropertyChanged” call at the bottom, which calls down to NotifyPropertyChangedBase declared above. This let’s the UI know that it should update this value.
For collection properties, things are just a little bit different. When you databind to collection properties, you typically use an ObservableCollection<T> so that any databound UI will automatically update when you make changes to your items. That still works fine here, too, but with a small tweak
For collection properties in your ViewModel, that you should do the following:
The reason for this is because AgFx knows which instance of your object is bound to your UI, and always updates that instance (it uses the same instance across your application). So when an update happens, it “copies” the property values from the fresh instance into the one currently held by the system. Rather than doing this automatically, because you may want to manage how the copying happens, you’ll typically just do this:
private ObservableCollection<WeatherPeriod> _wp = new ObservableCollection<WeatherPeriod>();
public ObservableCollection<WeatherPeriod> WeatherPeriods {
get {
return _wp;
}
public set {
if (value == null) throw new ArgumentNullException();
if (_wp != null) {
_wp.Clear();
foreach (var wp in value) {
_wp.Add(wp);
}
}
RaisePropertyChanged("WeatherPeriods");
}
}
So now you can add all of your properties to fill out the shape of your ViewModel.
When you fetch data from your service, how long is it good for? AgFx defines a very simple way to define how long data should be cached for in your object.
The way this is done is with the CachePolicyAttribute. The constructor for CachePolicyAttribute takes two parameters:
So we just apply this to our ViewModels as in the following:
[CachePolicy(CachePolicy.ValidCacheOnly, 60 * 15)]
public class WeatherForecastVm : ModelItemBase<ZipCodeLoadContext>{…}
// this basically never changes, but we'll say
// it's valid for a year.
[CachePolicy(CachePolicy.CacheThenRefresh, 3600 * 24 * 365)]
public class ZipCodeVm : ModelItemBase<ZipCodeLoadContext>{…}
That’s it.
Okay, now is when the real party starts. The DataLoader is the beating heart of your ViewModel objects.
The DataLoader has two primary jobs:
Fortunately both of these are pretty easy.
Your DataLoader is an object that implements IDataLoader<T> where T is your LoadContext type from earlier. Your DataLoader should be a public nested type in your view model. This isn’t strictly necessary (see AgFx.DataLoaderAttribute) but I haven’t yet found a case where I need to do it otherwise. If it’s a nested type, AgFx will automatically find and instantiate it.
IDataLoader<T> It has two methods:
Implementing these is also simple. Here’s how GetLoadRequest usually looks:
private const string ZipCodeUriFormat = "{0}";));
}
Make sense? It’s pretty simple. If you needed more parameters on the request, then you’d add more properties to your LoadContext and then use that to build out the URI.
AgFx will take that object and it will result in a network request. When that network request returns, the stream that it provided will be passed along back to the DataLoader for processing.
Processing the network data is typically straight-forward. I’m doing manual XML walking here. If your web service exposes a WSDL that the Windows Phone Developer Tools can handle, you can use that to make parsing a snap, usually using DataContractSerializer,, but for clarity I’ll just do the brute force method below.
public object Deserialize(ZipCodeLoadContext loadContext, Type objectType, System.IO.Stream stream)
{
// the XML will look like the following, so we parse it.
//<?xml version="1.0" encoding="utf-8"?>
//<NewDataSet>
// <Table>
// <CITY>Kirkland</CITY>
// <STATE>WA</STATE>
// <ZIP>98033</ZIP>
// <AREA_CODE>425</AREA_CODE>
// <TIME_ZONE>P</TIME_ZONE>
// </Table>
//</NewDataSet>
var xml = XElement.Load(stream);
var table = (
from t in xml.Elements("Table")
select t).FirstOrDefault();
if (table == null) {
throw new ArgumentException("Unknown zipcode " + loadContext.ZipCode);
}
ZipCodeVm vm = new ZipCodeVm(loadContext.ZipCode);
vm.City = table.Element("CITY").Value;
vm.State = table.Element("STATE").Value;
vm.AreaCode = table.Element("AREA_CODE").Value;
vm.TimeZone = table.Element("TIME_ZONE").Value;
return vm;
}
So what we do here is get into the XML stream with XLINQ, then just create a new ViewModel object (of type ZipCodeVm) and populate it’s properties with the values that we got out of XML.
Note the objectType property. The returned object MUST be an instance of this type (derived types are OK). AgFx will enforce this. You can take a look at the Deserialize method in the WeatherForecastVm as well, it’s pretty much the same, but it handles much more complex XML from the NOAA service via a helper class.
Now we just compile to make sure it’s all building.
Believe it or not, our application is just about done. Using the great sample data features of Expression Blend, it’s really easy to create our UI.
We’ll walk through a simple case of just the zip code information that we deserialized above. To make this easy to understand, what we want to do is create a UserControl that shows the city and state, along with the LastUpdated date. For the whole weather application, it’s the same process with a little more detail.
For this, I add a new Windows Phone User Control to my project, right click on the XAML file in Solution Explorer and choose “Open in Expression Blend…”.
Once Blend opens with the UserControl showing on the design surface, the first thing we’ll do is create sample data for our ViewModel:
Now we choose “Create Sample Data From Class”:
This will show us the public types in our project assemblies. We’re looking for “ZipCodeVm”, which is one of the VM objects we created up in step 3.
Hit OK, and now just drag the ZipCodeVm item onto the design surface:
Now we just build out our UI like so by dragging on some TextBlock objects from the Blend toolbar:
Now, for each of the items that says “TextBlock” we can just databind to our data fields by dragging them onto the object on the design surface, as demonstrated by the red arrow below. We’ll see sample (bogus) data that lets us just visualize how the UI will look at runtime.
Remember, it’s bogus sample data., it just allows us to see the layout. This prevents the tweak-F5-tweak-F5 method that many of use are used to.
After a little cleanup, here is what the XAML looks like. Notice the Binding statements to the properties we added in our ViewModel.
<StackPanel>
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding City}" FontWeight="Bold"/>
<TextBlock Text=", "/>
<TextBlock Text="{Binding State}" />
</StackPanel>
<StackPanel Orientation="Horizontal">
<TextBlock Text="Updated: " />
<TextBlock Text="{Binding LastUpdated}" />
</StackPanel>
</StackPanel>
Okay now we’ve got our View Models built, our data processing done, and our databinding done. Now we just have to load the data.
The primary object for data access and management in AgFx is DataManager. DataManager is a singleton class that is accessed via the DataManager.Current property.
DataManager only exposes a few operations, but they really are key to tying the whole thing together.
As a starter example, we’d load data into the UserControl that we built above as follows:
zipCodeUserControl1.DataContext =
DataManager.Current.Load<ZipCodeVm>("98052");
Which gives us this at runtime:
Notice we are passing the zip code as a string. Remember form our ZipCodeLoadContext, that we added a constructor with just a single string parameter. AgFx will take that “98052” and automatically pass it to that constructor. Also notice that we don’t have to do anything to manage caching or networking here. AgFx figures out if the value is already available in the cache, if the cache is valid, or if it needs to be refreshed from the network.
It’s important to keep in mind that the instance returned from the Load call will be the same instance that is returned from any future Load or Refresh calls. This identity-tracking is one of the key features of AgFx. What this means, is that once I’ve databound to a value in my UI, future calls to Refresh or Load will update this same instance.
In other words, if in a button click you simply do this:
private void btnRefreshZipcode_Click(object sender, RoutedEventArgs e) {
DataManager.Current.Refresh<ZipCodeVm>("98052");
}
then the UI above will update. Note we never touch the user control or it’s DataContext. It just works.
DataManager gives you a broad set of operations to interact with your data.
Remember these are all accessed via DataManager.Current:
For Load and Refresh, there are overloads that allow you to get handle both completion and error scenarios in code. The signatures look like this:
public T Load<T>(object id, Action<T> completed, Action<Exception> error) ;
which allows you to pass in a lambda that will be called when the operation completes or fails, like so:
this.DataContext = DataManager.Current.Load<WeatherForecastVm>(txtZipCode.Text,
(vm) =>
{
// upon a succesful load, show the info panel.
// this is a bit of a hack, but we can't databind against
// a non-existant data context...
info.Visibility = Visibility.Visible;
},
(ex) =>
{
MessageBox.Show("Error: " + ex.Message);
});
Note that if you pass in a handler for error, then the DataManager.UnhandledError event will not be invoked, regardless of what happens in the handler. You either handle via the lambda, or the global event, not both.
The “vm” parameter in the success handler will be the same instance as the object returned from the Load<> function as well, so any operations on this object will also be reflected in the UI. These handlers will invoke on the UI thread.
Finally, as I’ve mentioned before, it’s best to let DataManager do all the instance management for you. It does a lot of caching so don’t cache instances yourself unless you have to. For ViewModels that have properties that are other ViewModels (a pattern I really like because it encourages on-demand network activity as a user moves through an app), use this pattern:
/// <summary>
/// ZipCode info is the name of the city for the zipcode. This is a separate
/// service lookup, so we treat it separately.
/// </summary>
public ZipCodeVm ZipCodeInfo
{
get
{
return DataManager.Current.Load<ZipCodeVm>(LoadContext.ZipCode);
}
}
Just use the Load method to return the right value.
So back to our weather example, here is the parts of the UI that are serviced by the various objects we’ve been discussing:
If you build a debug version of AgFx, you’ll find some useful debugging output that will help you determine what AgFx is up to. In the case of a first-time run of the weather application, here’s the output you’ll seee:
No cache found for NWSWeather.Sample.ViewModels.WeatherForecastVm (1) 3/17/2011 2:22:35 PM: Queuing load for WeatherForecastVm (2) No cache found for NWSWeather.Sample.ViewModels.ZipCodeVm 3/17/2011 2:22:35 PM: Queuing load for ZipCodeVm 3/17/2011 2:22:35 PM: Checking cache for ZipCodeVm 3/17/2011 2:22:37 PM: Deserializing live data for NWSWeather.Sample.ViewModels.WeatherForecastVm (3) Writing cache for WeatherForecastVm, IsOptimized=False, Will expire 3/17/2011 2:37:37 PM (4) 3/17/2011 2:23:43 PM: Deserializing live data for NWSWeather.Sample.ViewModels.ZipCodeVm Writing cache for ZipCodeVm, IsOptimized=False, Will expire 3/16/2012 2:23:43 PM
Because these operations all happen off of the UI thread, you get interleaving between the call to the weather service and the zip code service. That’s good!
I’ve added the numbers in (bold) to the right to describe that this output means:
If you’re writing an app and it’s not doing what you think it should, this output will really help you understand what the problem might be. If you see a lot of this output when you don’t expect to, that means something is going on that may be resulting in extra data loads that you don’t need. So you can use this as a perf tuning input as well.
As a comparison, here’s the output from a second run, after the app has been exited and restarted a few minutes later. Note there are no “queuing load” or “writing cache” entries. This shows that cached data is actually being loaded, when it was last updated, and when it will expire.
3/17/2011 2:35:12 PM: Checking cache for WeatherForecastVm 3/17/2011 2:35:12 PM: Checking cache for ZipCodeVm 3/17/2011 2:35:12 PM: Loading cached data for ZipCodeVm, Last Updated=3/17/2011 2:23:43 PM, Expiration=3/16/2012 2:23:43 PM 3/17/2011 2:35:12 PM: Deserializing cached data for NWSWeather.Sample.ViewModels.ZipCodeVm, IsOptimized=False 3/17/2011 2:35:12 PM: Loading cached data for WeatherForecastVm, Last Updated=3/17/2011 2:22:37 PM, Expiration=3/17/2011 2:37:37 PM 3/17/2011 2:35:12 PM: Deserializing cached data for NWSWeather.Sample.ViewModels.WeatherForecastVm, IsOptimized=False
Phew, that’s a lot of stuff! Believe it or not, there’s more but I’ll leave it at that for now. I’ve had a lot of fun putting this framework together, with help from Jeff among others. And I’ve had even more fun writing apps on top of it. Hopefully you will too, please let me know how it goes!
Download AgFx with the NOAA Weather sample here. Note when you open this project, you'll get a few warnings about Silverlight project. These are expected and harmless because the base code is an Silverlight class library (so it can be used on Silverlight too). Just hit OK.
Shawn
This looks just awesome. Need to try it out with my app :)
Please tell me you'll be creating a NuGet package for AgFx?
I also vote for getting a NuGet package. Btw the download link to the example doesn't work for me (says that the element has not yet been published)
Yep, it's on NuGet now. Thanks!
Hi shawn, how are you?
Awesome work with AGFX, i was wonderig how one goes with sync data with a service..
I m using AGFX specially for getting the app running without Web connection.. and i want to cache the data, and when i m going online i will push this to thecloud.
My issue is that i don't find easy way to store the data that isn't sync ...
For now i created another collection on my vm where i store the NEWItems, but i could one persist this new object to the cache?! since this only local without any web request to get the stream..
Maybe i should go increating a diffrent VM for the new items?
Does this make sense ?
Hope u can give me a hand.
Thanks
Rui | http://blogs.msdn.com/b/sburke/archive/2011/03/17/tutorial-building-a-connected-phone-app-with-agfx.aspx | CC-MAIN-2014-52 | refinedweb | 3,703 | 62.48 |
15. Mechanical design¶
Assignment¶
- Group assignment
Design a machine (mechanism + actuation + automation), including the end effector, build the passive parts and operate it manually.
Group Assignment¶
End Effector Design¶
Research¶
First I researched the end effector of following coreXY projects. They attaches a kind of pen and lift it up with a servo motor.
Planning¶
I planned to design the end tools to hold ‘Fude’, brush of Japanese calligraphy.
One of the important technique is to sweep and lift the brush gradually. (see red circle below).
So the requirement of the design is :
- Mechanism to hold Fude while moving and writing
- Mechanism to control z-motion, lift Fude gradually
I sketched two types of mechanisms and did rapid prototyping.
A) Fiber actuator type (Artificial muscle )
B) Rack & Pinion type
Fiber Actuator type¶
I planned to use Biometal. It is a fiber-like actuator based on shape-memory alloy, and moves like a muscle when current flows.
..
The circuit of the biometal is very simple. The biometal is connected between Vcc and drain of Nch-MOSFET, and the gate is connected to the analogue out of Arduino. The biometal shrinks according to the value of pwm, but the movement is not so big.
First I tested the circuit using Bread board, Arduino and Nch MOSFET 2SK2232 that I have.
- video
The pen lifted when the tact switch was pressed.
- Board design
As I confirmed the movement of the Biometal, I designed the board using Fab inventory.
Parts List
Eagle design
Mods
- Difference of the movement by pwm value
PWM value = 75
PWM value = 150
- Set the mechanism in the bamboo enclosure
- Arduino Code
//-------------------------------------------------- // Endtool Trial : BMX //-------------------------------------------------- const int analogOutPin = 9; const int buttonPin = 2; int buttonState = 0; int maxVal=150; void setup() { pinMode(buttonPin, INPUT_PULLUP); Serial.begin(9600); } void loop() { buttonState = digitalRead(buttonPin); if(buttonState == LOW){ analogWrite(analogOutPin, maxVal); Serial.println("pushed"); delay(100); }else{ analogWrite(analogOutPin, 0); Serial.println("High"); delay(100); } }
Fude Holder#2¶
Referring to this tutorial, I designed the rack and pinion mechanism.
- Fusion 360 Rack and Pinion Tutorial
Select ‘ SpurGear’ from Add-Ins menu.
I chose 12 for the number of Teeth.
Pinion was created automatically.
Rotate the pinion 15 degrees (= 180/12) in preparation for designing the Rack.
I cut the rack and pinion using laser cutter and combined with servo motor. The torque of micro servo SG90 (1.8 kgf·cm) seemed low for the project, so I chosed Futaba S3001 (4.8v 2.38kgf·cm) from the Kamakura Lab inventory.
Parts List
video
Arduino Code
#include <Servo.h> Serial.begin(9600); // void loop() { val = analogRead(potpin); // reads the value of the potentiometer Serial.println(val); val = map(val, 0, 1023, 0, 180); // scale it to use it with the servo (value between 0 and 180) myservo.write(val); // sets the servo position according to the scaled value delay(15); // waits for the servo to get there } | http://fabacademy.org/2019/labs/kamakura/students/kae-nagano/assignments/week15/ | CC-MAIN-2022-05 | refinedweb | 483 | 56.86 |
This is a part of an ongoing series. Check out my profile for more.
It’s been a couple months, but I’m back and ready to talk about coordinating state across a distributed system, specifically with regards to distributed transactions. Many businesses require transactional operations, that is, an operation consisting of many parts that should succeed or fail as an atomic unit. When we get to the world of SOA, things become more complicated as we’ll see in this post. We’ll start off by looking at a non-transactional operation, then move to a transactional operation to get a sense of the differences in problems posed by the two scenarios.
As seen in the previous post, resolving the partial failure of an operation that acts across a distributed system can be complicated. Last time, we looked at a distributed blogging service:
def publish_blog_post(post_id)
PostDB.execute(“UPDATE posts WHERE id = ? SET published = ‘t’”, post_id)
NotificationService.notify_friends(post_id)
FeedUpdatingService.update_feeds(post_id)
TwitterService.tweet_about_post(post_id)
end
As discussed in the previous post, calls like NotificationService.notify_friends can fail or time out, leaving us in a state of uncertainty. Previously we discussed idempotency as a strategy for solving such issues: if we retry once or twice the request will likely eventually succeed, and we don’t need to worry about the issue of double-performing any action. We were able to greatly reduce complexity by making each operation — including publish_blog_post itself — idempotent.
In this example there’s no real interdependence between all those downstream service calls; the successive success or failure of each service request should not impact the others. To be more precise, the operation publish_blog_post appears to be non-transactional; if one service call fails, we likely do not want to rollback any of the other operations. Sometimes we don’t get quite as lucky, and find ourselves in a situation that requires operations of a transactional nature.
Let’s take an example e-commerce application for buying and selling tickets. A user can sell a ticket to another user. The buyer wants to pay with a voucher we gave him. When this happens, the application should “reassign” the ticket to the buyer, use up the buyer’s voucher, and credit the seller’s account. In this example, our company chose to split our services three-fold: an account service which manages payouts to a user’s bank account, a promo service which manages promotions and vouchers, and the ticketing service which coordinates the selling and ownership of a ticket. Here’s some example code for selling the ticket in the ticket service:
def sell_ticket(buyer_id, seller_id, ticket_id)
ticket = TicketDB.execute(“SELECT * FROM tickets WHERE id = ?”, ticket_id)
TicketDB.execute(“UPDATE tickets SET user_id = ? WHERE id = ?”, buyer_id, ticket_id)
PromoService.debit(buyer_id, ticket.amount)
AccountService.credit(seller_id, ticket.amount)
end
This operation is clearly transactional: we shouldn’t let the buyer get charged without getting his ticket, and we shouldn’t let the seller trade away his ticket without getting credited for the sale. Keeping in mind that any of these services could go down at any point in time, we have to account for all sorts of failure scenarios. What happens if the “credit” call starts breaking? Can we recover and undo the calls to reassign the ticket and debit the buyer? What if the debit call times out? Do we have the ability to retry and move forward? If the service repeatedly times out, can we reassign the ticket back to the seller? As you can see, there are a lot of issues here, and the complexity in the code used solving these issues could grow greatly. Idempotency could be useful as it was in our blogging application, but it might not be sufficient to simply retry ad nauseam as we did in the blogging platform. If a system fails for enough requests, we may wish to “rollback” the other operations.
In a monolithic application using a traditional RDBMS this would be an excellent case for using a database transaction like so:
BEGIN;
UPDATE tickets SET user_id = <buyer_id> WHERE id = <ticket_id>;
UPDATE voucher_balances SET balance = balance — <amount> WHERE user_id = <buyer_id>;
UPDATE accounts SET balance = balance + <amount> WHERE user_id = <seller_id>;
COMMIT;
Concurrency concerns aside, the transactional properties of this example get us the behavior we want: nothing succeeds unless each command succeeds. The semantics of SQL permit this behavior: the database knows that these commands are to be grouped together as a unit from the BEGIN…COMMIT messages.
It would be nice if we could simplify lift this idea and use it in our distributed ticketing platform, but things get a lot more complicated. First off, their are now at least four actors (the ticketing application, its database, the voucher service, the account service), probably more (since the other services have their own datastores and possibly downstream dependencies). In the monolith system, there were only two (monolith and database). Managing consistency across all of those systems grows in complexity since you have to consider what might happen if any of those were to fail individually (or in tandem). I’ve seen code that attempts to catch and “undo” errors in these scenarios to roll back when failure occurs. Here’s a simplified example of code where we attempt to protect against failure in the AccountingService:
TicketDB.execute(“UPDATE tickets SET user_id = ? WHERE id = ?”, buyer_id, ticket_id)
PromoService.debit(buyer_id, ticket.amount)
begin
AccountingService.credit(seller_id, ticket.amount)
rescue Exception => e
PromoService.credit(buyer_id, ticket.amount)
TicketDB.execute(“UPDATE tickets SET user_id = ? WHERE id = ?”, seller_id, ticket_id)
end
This, of course, has its flaws. Foremost, the calls to re-credit the buyer’s promo or assign the ticket back to the user can fail. The code complexity has increased. And we haven’t even addressed the issue of timeouts or retries yet! Continuing down this path, we’ll likely end up with hard-to-manage code that is prone to edge cases and is hard to follow.
Some might attempt to ameliorate this issue with distributed commitment algorithms like two-phase commit, three-phase commit, or others. Note that these algorithms do not give you RDBMS-style rollbacks, nor do they guarantee the ACID-ity of your transaction, but rather merely coordinate a decision to commit a transaction across your services (i.e. “do we all agree to commit this transaction or not?”). This might make it marginally easier to avoid complexity, but you still might have to craft code to undo each operation should the transaction go awry. Moreover, note that implementing these algorithms is usually not trivial.
Rather than suffer, you should try to minimize the possibility of distributed transactions altogether. If you’re finding that there are a lot of atomic operations broken across services, you might want to reassess your service boundaries. Doing so will likely reduce the risk of errors and make your code more comprehensible. Sometimes you really do have to support a distributed transactional operation, and unfortunately there’s no silver bullet here. You’ll have to implement some code that manages the operation and enforces each dependent system processes its part, and is able to roll back gracefully as much as possible if needed.
That’s all for this time — there will be more to come on distributed systems, so hang tight. | https://hackernoon.com/skills-for-technical-leadership-on-distributed-transactions-ab0b413e5d2b | CC-MAIN-2019-47 | refinedweb | 1,218 | 53.61 |
Em 16/02/2019 12:18, Julie Marchant escreveu: > libre? The only argument I've seen on the matter is the way copyright > works, but Chromium is under the Modified BSD License according to > documentation I was able to find. If some files are not actually covered For what is worth, what I learned with projects that don't follow the Open Source Definition (I know that I shouldn't support this term here, but I had to mention it) is that they mask their non-compliance behind a license. Of course we don't intend to foster open source here, as this project, having the goal to provide a package manager that is under the GNU project, also aims to create a system distribution that follows the GNU FSDG and uses such package manager If the norm would be to only check the licenses, then we would have for example, taken ages to figure out that the kernel source files from upstream of GNU Linux-libre was/is non-free. Having a requirement for a package to be first throughly reviewed eliminates some of the possibility of having non-free functional data or non-distributable non-functional data. It's not a perfect protection (since the package in review might have implemented things from other works that one of the reviewers might not be aware of). As I said in a message to these mailing lists, I already started reviewing Chromium, although this project is big and I might not have the time nor all the skills to do it alone. Since today, I moved the review, which was available at [1], to the appropriate Review namespace at [2]. [1] [2]
signature.asc
Description: OpenPGP digital signature | https://lists.gnu.org/archive/html/guix-patches/2019-02/msg00361.html | CC-MAIN-2019-30 | refinedweb | 288 | 52.33 |
On 03/06/2013 10:10, Mark Thomas wrote:
> My next step is to look more closely at the server code (the issue is
> sensitive to timing so it can be tricky to add debug code and still see
> the issue) to figure out if I am misusing the API or if there might be
> an APR/native bug at the root of this.
I've finally figured out what is going on. The non-blocking write test
is more likely to hit this bug because it uses a slow client so the
buffers all fill up. In theory, any of the non-blocking code could hit this.
The problem is actually quite simple. When the buffers are almost full
the next write triggers an APR_STATUS_IS_EAGAIN. However, some of the
requested data will have been written. How much data is not available
through the current API. The Tomcat code assumes no data was written and
hence ends up duplicating part of the previous write.
Thoughts on how to extend the tc native API?
I'm thinking something along the lines of the following:
if (ss == APR_SUCCESS)
return (jint)sent;
else if ((APR_STATUS_IS_EAGAIN(ss) || ss == TCN_EAGAIN) && sent > 0) {
return (jint)sent;
} else {
TCN_ERROR_WRAP(ss);
return -(jint)ss;
}
to fix the immediate problem.
Mark
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org
For additional commands, e-mail: dev-help@tomcat.apache.org | http://mail-archives.apache.org/mod_mbox/tomcat-dev/201306.mbox/%3C51AC8FCC.7030501@apache.org%3E | CC-MAIN-2016-26 | refinedweb | 231 | 63.7 |
Python: write a Twitter client
Why fill up the internet with pointless 140-character drivel yourself when you can write an application to do it for you?
This issue we’re going to create our own Twitter application using Python and two libraries: Tweepy, a Twitter Python library, and our old favourite EasyGUI, a library of GUI elements. This project will cover the creation of the application using Python and also the configuration of a Twitter application using the Twitter development website dev.twitter.com.
Tweepy is a Python library that enables us to create applications that can interact with Twitter. With Tweepy we can:
- Post tweets and direct messages.
- View our time line.
- Receive mentions and direct messages.
Now you may be thinking “Why would I want to use Python with Twitter?” Well, dear reader, quite simply we can use Python to build our own applications that can use Twitter in any of the ways listed above. But we can also use Twitter and Python to enable interaction between the web and the physical world. We can create a script that searches for a particular hashtag, say #linuxvoice, and when it finds it, an LED can flash, a buzzer can buzz or a robot can start navigating its way around the room.
In this tutorial we will learn how to use Tweepy and how to create our own application.
At the end of this project you will have made a functional Twitter client that can send and receive tweets from your Twitter account.
Downloading Tweepy and EasyGUI
Tweepy The simplest method to install Tweepy on your machine is via Pip, a package manager for Python. This does not come installed as standard on most machines, so a little command line action is needed. The instructions below work for all Debian- and Ubuntu-based distros.
First, open a terminal and type sudo apt-get update to ensure that our list of packages is up to date. You may be asked for your password – once you have typed it in, press the Enter key.
You will now see lots of on-screen activity as your software packages are updated. When this is complete, the terminal will return control to you, and now you should type the following to install Pip. If you are asked to confirm any changes or actions, please read the instructions carefully and only answer ‘Yes’ if you’re happy.
sudo apt-get install python-pip
With Pip installed, our attention now shifts to installing Tweepy, which is accomplished in the same terminal window by issuing the following command.
sudo pip install tweepy
Installation will only take a few seconds and, when complete, the terminal will return control to you. Now is the ideal time to install EasyGUI, also from the Pip repositories.
To create an application you will need to sign in with the Twitter account that you would like to use with it.
Twitter apps
Twitter will not allow just any applications to use its platform – all applications require a set of keys and tokens that grant it access to the Twitter platform.
The keys are:
- consumer_key
- consumer_secret
And the tokens are:
- access_token
- access_token_secret
To get this information we need to head over to and sign in using the Twitter account that we wish to use in our project. It might be prudent to set up a test account rather than spam all of your followers. When you have successfully signed in, look to the top of the screen and you’ll see your Twitter avatar; left-click on this and select “My Applications”. You will now see a new screen saying that you don’t have any Twitter apps, so let’s create our first Twitter app.
To create our first app, we need to provide four pieces of information to Twitter:
- The name of our application.
- A description of the application.
- A website address, so users can find you. (This can be completed using a placeholder address.)
- Callback_URL. This is where the application should take us once we have successfully been authenticated on the Twitter platform. This is not relevant for this project so you can either leave it blank or put in another URL that you own.
After reading and understanding the terms and conditions, click on “I Agree”, then create your first app. Right about now is an ideal time for a cup of tea.
With refreshment suitably partaken, now is the time to tweak the authentication settings. Twitter has auto generated our API key and API secret, which are our consumer_key and consumer_secret respectively in Tweepy. We can leave these as they are. Our focus is now on the Access Level settings. Typically, a new app will be created with read-only permissions, which means that the application can read Twitter data but not post any tweets of direct messages. In order for the app to post content, it first must be given permission. To do this, click on the “modify app permissions” link. A new page will open from which the permissions can be tweaked. For this application, we need to change the settings to Read and Write. Make this change and apply the settings. To leave this screen, click on the Application Management title at the top-left of the page.
We now need to create an access token, which forms the final part of our authentication process. This is located in the API Keys tab. Create a new token by clicking Create My Access Token. Your token will now be generated but it requires testing, so scroll to the top-right of the screen and click “Test OAUTH”. This will test your settings and send you to the OAuth Settings screen. In here are the keys and tokens that we need, so please grab a copy of them for later in this tutorial. These keys and tokens are sensitive, so don’t share them with anyone and do not have them available on a publicly facing service. These details authenticate that it is YOU using this application, and in the wrong hands they could be used to send spam or to authenticate you on services that use the OAuth system.
With these details in hand, we are now ready to write some Python code.
Creating a new application is an easy process, but there are a few hoops to jump through in order to be successful.
Python
For this tutorial, we’ll use the popular Python editor Idle. Idle is the simplest editor available and it provides all of the functionality that we require. Idle does not come installed as standard, but it can be installed from your distribution’s repositories. Open a new terminal and type in the following.
For Debian/Ubuntu-based systems
sudo apt-get install idle-python2.7
With Idle now installed it will be available via your menu, find and select it to continue.
Idle is broken down into two areas: a shell where ideas can be tried out, and where the output from our code will appear; and an editor in which we can write larger pieces of code (but to run the code we need to save and then run the code). Idle will always start with the shell, so to create a new editor window go to File > New and a new editor window will appear. To start with, let’s look at a simple piece of test code, which will will ensure that our Twitter OAuth authentication is working as it should and that the code will print a new tweet from your timeline every five seconds.
import tweepy
from time import sleep
import sys
In this first code snippet we import three libraries. The first of these is the tweepy library, which brings the Twitter functionality that we require. We import the sleep function from the time library so that we can control the speed of the tweets being displayed. Finally we import the sys library so that we can later enable a method to exit the Twitter stream.
consumer_key = "API KEY"
consumer_secret = "API SECRET"
access_token = "=TOKEN"
access_token_secret = "TOKEN SECRET"
In this second code snippet we create four variables to store our various API keys and tokens. Remember to replace the text inside of the “ “ with the keys and tokens that you obtained via Twitter.
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
For the third code snippet we first create a new variable called auth, which stores the output of the Tweepy authorisation handler, which is a mechanism to connect our code with Twitter and successfully authenticate.
api = tweepy.API(auth)
public_tweets = api.home_timeline()
The fourth code snippet creates two more variables. We access the Twitter API via Tweepy and save the output as the variable api. The second variable instructs Tweepy to get the user’s home timeline information and save it as a variable called public_tweets.
for tweet in public_tweets:
try:
print tweet.text
sleep(5)
except:
print("Exiting")
sys.exit()
The final code snippet uses a for loop to iterate over the tweets that have been gathered from your Twitter home timeline. Next up is a new construction: try and except. It works in a similar fashion to if and else, but the try and except construction is there to follow the Python methodology that it’s “Easier to ask for forgiveness than for permission”, where try and except relates to forgiveness and if else refers to permission. Using the try and except method is seen as a more elegant solution – you can find out why at.
In this case we use try to print each tweet from the home timeline and then wait for five seconds before repeating the process. For the except part of the construction we have two lines of code: a print function that prints the word “Exiting”, followed by the sys.exit() function, which cleanly closes the application down.
With the code complete for this section, save it, then press F5 to run the code in the Idle shell.
Applications are set to be read-only by default, and will require configuration to enable your application to post content to Twitter.
Sending a tweet
Now that we can receive tweets, the next logical step is to send a tweet from our code. This is surprisingly easy to do, and we can even recycle the code from the previous step, all the way up to and including:
api = tweepy.API(auth)
And the code to send a tweet can be easily added as the last line:
api.update_status("Tinkering with tweepy, the Twitter API for Python.")
Change the text in the bracket to whatever you like, but remember to stay under 140 characters. When you’re ready, press F5 to save and run your code. There will be no output in the shell, so head over to your Twitter profile via your browser/Twitter client and you should see your tweet.
We covered EasyGUI in LV006, but to quickly recap, it’s a great library that enables anyone to add a user interface to their Python project. It’s easier to use than Tkinter, another user interface framework, and ideal for children to quickly pick up and use.
For this project we will use the EasyGUI library to create a user interface to capture our status message. We will then add functionality to send a picture saved on our computer.
Using EasyGUI we can post new messages to the desktop via the msgbox function.
Adding a user interface
Open the file named send_tweet.py and let’s review the contents.
import tweepy
from time import sleep
import sys
import easygui as eg
This code snippet only has one change, and that is the last line where we import the EasyGUI library and rename it to eg. This is a shorthand method to make using the library a little easier.
consumer_key = "Your Key"
consumer_secret = "Your secret”
access_token = "Your token"
access_token_secret = "Your token"
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
These variables are exactly the same as those previously.
message = eg.enterbox(title="Send a tweet", msg="What message would you like to send?")
This new variable, called message, stores the output of the EasyGUI enterbox, an interface that asks the user a question and captures their response. The enterbox has a title visible at the top of the box, and the message, shortened to msg, is a question asked to the user.
try:
length = len(message)
if length < 140:
api.update_status(message)
else:
eg.msgbox(msg="Your tweet is too long. It is "+str(length)+" characters long")
except:
sys.exit()
For this final code snippet we’re reusing the try except construction. Twitter has a maximum tweet length of 140 characters. Anything over this limit is truncated, so we need to check that the length is correct using the Python len function. The len function will check the length of the variable and save the value as the variable length.
With the length now known, our code now checks to see if the length is less than 140 characters, and if this is true it runs the function update_status with the contents of our message variable. To see the output, head back to Twitter and you should see your tweet. Congratulations! You have sent a tweet using Python. Now let’s put the icing on the cake and add an image.
EasyGUI looks great and is an easy drop-in-replacement for the humble print function.
Adding an image to our code
The line to add an image to our tweet is as follows
image = eg.fileopenbox(title="Pick an image to attach to your tweet")
We create a variable called image, which we use to store the output from the EasyGUI fileopenbox function. This function opens a dialog box similar to a File > Open dialog box. You can navigate your files and select the image that you wish to attach. Once an image is chosen, its absolute location on your computer is saved as the variable image. The best place to keep this line of code is just above the line where the status message is created and saved as a variable called message. With the image selection handled, now we need to modify an existing line so that we can attach the image to the update.
Navigate to this line in your code:
api.update_status(message)
And change it to this:
api.update_with_media(image, status=message)
Previously we just sent text, so using the update_status function and the message contents was all that we needed, but to send an image we need to use the update_with_media function and supply two arguments: the image location, stored in a variable for neatness; and the status update, saved as a variable called message.
With these changes made, save the code and run it by pressing F5. You should be asked for the images to attach to your code, and once that has been selected you will be asked for the status update message. With both of these supplied, the project will post your update to Twitter, so head over and check that it has worked.
Sending an image is made easier via a GUI interface that enables you to select the file that you wish to send. Once selected, it saves the absolute path to the file.
Extension activity
Following these steps, we’re managed to make two scripts that can read our timeline and print the output to the shell, but we can also merge the two together using an EasyGUI menu and a few functions. The code for this activity is available via the GitHub repository, so feel free to examine the code and make the application your own. | https://www.linuxvoice.com/python-write-a-twitter-client/ | CC-MAIN-2022-40 | refinedweb | 2,626 | 70.43 |
I'm trying to use a makefile to compile a program someone else has written, using cygwin. I get a lot of error messages, of which many complain
error: template with C linkage
extern "C"
#include < pthread.h >
/usr/include/pthread.h:67:5: error: previous declaration of ‘int pthread_atfork(void (* )(),void ( *)(), void ( *)())’ with ‘C++’ linkage
/usr/include/sys/unistd.h:136:5: error: conflicts with new declaration with ‘C’ linkage
EDIT: Based on the exchange in the comments, the culprit was a header file in the build directory (Endian.h) that conflicted with a system include file /usr/include/endian.h. It was being included instead of the system header, and causing build issues. The files were in conflict because case is insignificant on Windows. The root cause was what was suggested in the original answer. The extern C construct was leaking into C++ code unintentionally, where templates were defined, causing the indicated error.
I would check for a "dangling" C linkage construct in your header files somewhere. This would be in code you wrote (not any of the system headers; those are likely safe).
Code in headers is wrapped with,
on top:
#ifdef __cplusplus extern "C" { #endif
and at bottom:
#ifdef __cplusplus } #endif
If the bottom part is missing, the effect of the top half extends into code in other headers unintentionally. This causes issues like you are encountering. | https://codedump.io/share/AOiaN11MgqS6/1/compiling-program-containing-extern-quotcquot | CC-MAIN-2017-04 | refinedweb | 231 | 54.83 |
1618240774
Each company wishes to remain present in the industry, and for that purpose, it indulges in user profile creation sites list to become popular in the online field. To achieve this, however, you need robust and productive strategies to maintain your existing customers’ loyalty. These plans should also attract new customers at precisely the same time.
Only the latest consulting firms are effective enough to observe the suitable approaches that can help you transform your business into your favourite brand. At Vowels Advertising in Dubai, Abu Dhabi, we offer various courses and suggestions to help your business and building to be effective by becoming a brand consultant. Branding advisers build new communication methods and a new identity. These advisers offer the brand of goods as required. They research your brand, which is made up of its identity and values. Brand Advisors have fantastic ideas and approaches to branding your products.
The most significant advantage of hiring a new consulting firm is that it has years of experience combined with the latest construction approaches. Since then, such companies are used to market trends and are continually being improved; they can quickly assess the competition and track competitor companies. In this way, they can analyze the manufacturer’s promotional requirements and evaluate the market to target a new product more effectively.
None of the other popular brands you see in the market today have started on a large scale right from the start. They achieved their present position thanks to ongoing trials and planned procedures. The whole long-term strategy was to introduce a new identity, place it precisely in the current market, and strengthen it by copying it to acceptable masses. Planning these elements and then implementing them appropriately is not a simple job. That is why companies from all over the world employ new consulting companies to implement the project.
If best, a new consulting firm can be tricky as there are an endless number of firms offering such services? The key to finding the perfect supplier for you is to analyze your condition first. As soon as you write down all the things, you can compare providers on the Internet and look for any gaps that may be useful to you. Also, start looking for reviews and connect with all previous customers to find out how satisfied they were with all the services.
By opting for Enterprises, they are responsible for creating the perfect brand identity for virtually any business, placing or repositioning it according to the market in which it operates, designing campaigns that advertise the business professionally and economically, and conducting thorough target market, competition, and market research. Thanks to all these activities, companies help to establish a small business and increase its visibility on the market. In the case of your small business, you can undoubtedly see it grow and reach a wider audience. You can find suitable new employees for a consulting company.
So this is how you can hire professional brand consulters to promote your brand amongst the customers effectively.
#profile creation sites #profile creation sites 2021 #profile creation sites list #high pr profile creation sites #free profile creation sites list #high da profile creation sites
1597626399
After walking down the path of fire, captain Smith decided to visit the Earth realm and continue his exploration of the universe. The doorway he entered lead down a path that was seemingly endless.
Read Part 1
Read Part 2
The length of his journey made him question the value of it but just when he was about to give up, he finally saw her in the distance… Gaia was waiting, and she had a lot of information to share with our captain.
Captain’s log 003:
It feels like an eternity… I can’t even remember when was the last time I heard from general @niallon11 and general @gregory-f. I know that they instructed me to come here but was it worth it? I guess we shall find out very soon.
#storytelling #ai #story #artificial-intelligence #artificial-intelligence-hype #latest-tech-stories #ai-top-story #musings
1599786000
Natural+[]()
We will load the data using pandas so we will import pandas and for creating the data profile we will import the NLP profiler.
import pandas as pd
from nlp_profiler.core import apply_text_profiling
#developers corner #nlp #pandas profiling #profile #python
1597645680
Selenium has been a pinnacle for open-source software in the industry of automated website testing. The automation testing framework is widely adopted by the testing community to help them in automating interactions with their web-application for desktops.
I have been an automation tester from couple years now, and have been fondly towards Selenium ever since I knew what it’s capable of.
Things You Can’t Do With JavaScript
Now that you know the purpose of JavaScript. You may be wondering about the reason for your end user to disable JavaScript in their browsers while surfing the internet?.
#selenium #test-automation #javascript #manual-testing #latest-tech-stories #hackernoon-top-story #javascript-top-story #qa-checklist | https://morioh.com/p/cefb7bd28c1e | CC-MAIN-2022-05 | refinedweb | 842 | 50.77 |
One tool for making Java do something over and over again is the Timer class.
A Timer can make your program do something on a regular basis, like redraw the screen thirty times a second, or sound a klaxon every two seconds until you go mad.
Here's a simple example of using Timer to print a message to the console once every second:
import java.util.*; public class TimerTest { public static void main(String[] arg){ Timer tickTock = new Timer(); // Create a Timer object TimerTask tickTockTask = new TimerTask(){ // This is what we want the Timer to do once a second. public void run(){ System.out.println("Tick"); System.out.println("Tock"); } }; tickTock.schedule(tickTockTask, 1000, 1000); } }I've used a shortcut here to override the run() method of the TimerTask. In TimerTask the run() method is abstract, so you create a definition for it for your instance of a TimerTask. Or for a class that extends TimerTask.
In the schedule() method, we set an initial delay time of 1000 milliseconds (one second) and a repeat time of 1000 milliseconds. When run, the program will do what we told it to do in the TimerTask's run() method once every second.
If you use timer in a simple video game, replace the System.out.println() statements in run() with your game loop commands. Then set the schedule() method's repeat rate to whatever you want, for example, use a repeat rate of 33 for about 30 frames per second.
The Timer is not necessarily the best way to do this for all applications, but it's adequate if your demands aren't too great. The Timer makes use of "threads", which is a way of letting a program do more than one thing at a time. For more intensive applications that need an ability to run in a loop, the Thread class and Runnable interface can do more than a simple Timer and TimerTask. | https://beginwithjava.blogspot.com/2010/07/ | CC-MAIN-2022-40 | refinedweb | 323 | 69.01 |
Welcome to NanUIWelcome to NanUI
NanUI is a library based on ChromiumFX that can let your Winform application use HTML5/CSS3 as user interface. You can use orginal Winform borders or full view no border form that use all html/css to design the interface.
NanUI is MIT licensed, so you can use it in both business and free/open source application. For more details, see the LICENSE file.
What's new in version 0.6What's new in version 0.6
- Rewritted codes of no border interface logic, new version is faster than old versions.
- NanUI now supports Hi-DPI in Windows 8 and later.
- Combined HtmlUIForm and HtmlContentForm to one Formium which support these two styles.
- Install Nuget Package of NanUI will add CEF and ChromiumFX dependencies to your application automatically.
ChangesChanges
2017/12/21
- New feature: added a new WebBrowser control to NanUI. You can now drag the WebBrowserControl to your form.
2017/12/11
- BUG FIX: n-ui-command html attribute doesn't fire and cause a javasscript error.
- Update new nuget packages.
2017/11/24
- BUG FIX: n-ui-command html attribute will not fire if html source dosen't contain script tags.
- Update new dependencies of NetDimension.NanUI. I made a mistake, the old one did not contain 32bit dependencies. So please reinstall the new one.
2017/9/25
- Fixed: if your project didn't has satellite resources, the program will crash by a dll file not found exception.
- Fixed: if your html contains select element which is opened and dropdown is shown, moving or resizing the window will cause the dropdown at wrong place.
2017/9/22
- Add NetDimension.NanUI.XP project, it can use on Windows XP and it is based on CEF3.2526.1373.
- The sources of NanUI 0.6 is open source now.
- Fixed an issue that if you add embedded globalization files like xxx.zh-cn.js or xxx.en-us.css to your project, the complier will auto generate satellite files in output fold and NanUI did not loads these files correctly.
2017/9/10
- update to version 0.6
Build NetDimension.NanUI.dllBuild NetDimension.NanUI.dll
You should use the complier which supports C# 7.0 syntax. Visual Studio 2017 is recommended.
ReleasesReleases
Stable NanUI binaries are released on NuGet. Use following Nuget command to install latest version of NanUI to your Winfrom application. It will install CEF and CFX dependencies too and the dependencies will automatic copy to the bin folder.
NOTE: NanUI requires .Net Framework 4.0 as minimal support.
Nuget Package Manager
PM> Install-Package NetDimension.NanUI
Release of NetDimension.NanUI.XP
Another version of NanUI that supports Windows XP is now can be downloaded on Nuget by using following command:
PM> Install-Package NetDimension.NanUI.XP
Download Manually
- NetDimension.NanUI - NanUI main library
- NetDimension.NanUI.Cef2987 - Dependencies of NanUI (Include CEF3.2987.1601.0 and ChromiumFX3.2987.1601 binaries)
Basic UsageBasic Usage
Initialize Runtime in Main
namespace TestApplication { using NetDimension.NanUI; static class Program { [STAThread] static void Main(string[] args) { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); //Initalize: set CEF paths //If you use default structure of the FX folder, you should provide paths of fx folder, resources folder and locales folder. var result = Bootstrap.Load(PlatformArch.Auto, System.IO.Path.Combine(Application.StartupPath, "fx"), System.IO.Path.Combine(Application.StartupPath, "fx\\Resources"), System.IO.Path.Combine(Application.StartupPath, "fx\\Resources\\locales")); if (result) { // Load embedded html/css resources in assembly. Bootstrap.RegisterAssemblyResources(System.Reflection.Assembly.GetExecutingAssembly()); Application.Run(new Form1()); Application.Exit(); } } } }
Using native Winform border style
namespace TestApplication { public partial class Form1 : Formium { public Form1() //Load embedded resource index.html and not set form to no border style by the second parameter. : base("", false) { InitializeComponent(); } } }
Using no border style
namespace TestApplication { public partial class Form1 : Formium { public Form1() //Load embedded resource index.html and set form to no border style by igrone the second parameter or set it to true. : base("") { InitializeComponent(); } } }
DocumentationDocumentation
I have no time for writting documents for the present, documents will come late.
DonateDonate
If you like my work, please buy me a cup of coffee to encourage me continue with this library.
In China you can donate me by scaning the QR code below in Alipay or WeChat app.
Or you can donate me by Paypal. | https://www.ctolib.com/NetDimension-NanUI.html | CC-MAIN-2018-05 | refinedweb | 718 | 52.56 |
Problem Statement
Given a cyclic list with elements as numbers in sorted order, write a function to insert a new element (5) into the cyclic list that maintains the sorted order of the cyclic list. Do not assume that the cyclic list is referenced by its minimal element. A perfectly cyclic list is one where the nth node's 'next' pointer points to the same place as 'head' --
User would get a custom data type as Node. It is defined as
public class Node { public int data; public Node next; }
You are given a function addElement which takes in the head pointer of the cyclic linked list. Complete the function to insert the number 5 such that the sorted order is maintained and return the head node of the modified linked list. You only need to complete this function. The Main and Node classes for storing linked list are already defined, so you will not need to add them. Every time you submit your code it will be compiled and a small suite of tests will be run against your code and you will be shown the results. When you complete the problem a larger suite of tests will be run to test your implementation. If you successfully pass the displayed tests, use your extra time to make sure your code handles other edge cases appropriately.
Input/Output Specs Inputs Specs: The data in test case starts and ends with brackets ( ). Within the brackets lies the following information (Node input1) where input1 is the head pointer of the cyclic linked list. A cyclic linked list i.e. the next of the Last node is the first node , so for {1,2} the Node having 1 would have the next element as 2 and the node with data as 2 the next value would point to 1 Output Specs: You need to return the head node of the modified linked list.
Examples Input values: 2->6
Expected Output value : 2->5->6
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/answers/viewqa/Java-Interview-Questions/26349-problem-question-please-need-urgent-answers.html | CC-MAIN-2013-20 | refinedweb | 367 | 65.56 |
Inside XSL-T (1/4) - exploring XML
Inside XSL-T
Now that we have had so many columns with XSL examples and so few about XSL, it seems like a good idea to study XSL in more detail. XSL comes out to be a very generic mechanism for transforming document trees, at least the paperless ones...
XSL was initially devised to solve two problems:
- Transforming an XML document into something else.
- Formatting an XML document for display on a page-oriented device, such as a printer or a browser.
Subsequently it has proven difficult to solve the second problem in a fashion that satisfies all the different requirements from low resolution screen displays all the way to hi-res printing and copying. Furthermore, screen formatting is currently done with Cascading Style Sheets (CSS), so little interest developed in yet another method. The World Wide Web Committee (W3C) then decided to split the two tasks into separate sub-standards, XSL Transformations (XSL-T) and XSL formatting objects (XSL-FO). While XSL-T has been an official recommendation since November of last year, XSL-FO is still in the making.
The T in XSLT
A transformation expressed in XSLT describes rules for transforming a source tree into a result tree. The transformation is achieved by associating patterns with templates. Whenever a pattern matches elements in the source tree, a template is used to create part of the result tree. The result tree is separate from the source tree, and their structures can be completely different. In constructing the result tree, elements from the source tree can be filtered and reordered, and new elements can be added. A transformation expressed in XSLT is called a stylesheet in the case where XSLT is transforming into a display language, such as HTML or WML.
Stylesheet structure
This example shows the structure of a stylesheet. Ellipses (
...)
indicate where attribute values or content have been omitted. Although this
example shows one of each type of allowed element, stylesheets may contain zero
or more of each of these elements.
<xsl:stylesheet <xsl:import <xsl:include <xsl:output <xsl:strip-space <xsl:preserve-space <xsl:decimal-format <xsl:namespace-alias <xsl:key .
In addition, the
xsl:stylesheet element may contain any element
not from the XSLT namespace. Such elements can provide, for example,
- information about what to do with the result tree.
- information about how to obtain the source tree.
- metadata about the stylesheet.
- structured documentation for the stylesheet.
Produced by Michael Claßen
URL:
Created: Aug 13, 2000
Revised: Aug 13, 2000 | http://www.webreference.com/xml/column17/index.html | CC-MAIN-2015-06 | refinedweb | 423 | 63.09 |
man, when i do that another error pops out instructing me to change it back to length()
Type: Posts; User: Yunus Einsteinium
man, when i do that another error pops out instructing me to change it back to length()
import java.util.Scanner;
public class CharOccurence {
public static void main(String[] arg){
Scanner input = new Scanner(System.in);
System.out.println("Enter a sentence to count the...
sorry for the late reply,. . .picking largest each time is what i find technical. I was thinking of using for loops and the largest total in each loop i print the corresponding employee hours
[...
This is atleast of of the easy ways you can implement what you want using java. As you know java is a very powerful programming language,therefore there are other million ways one can implement this,...
Hallo, the first paragraph i didn't understand anything maybe because i'm just novice. The idea of the array picking the largest remaining each time,and print out the corresponding employee hours...
:confused:
Stranded in this easy program. . . .
Q : Write a program that displays employ-ees and their total hours in decreasing order of the total hours
my working. . .
{I was not able to... | http://www.javaprogrammingforums.com/search.php?s=3e682ba0d93ddcee960ed3e1b91fcd7d&searchid=1028371 | CC-MAIN-2014-35 | refinedweb | 202 | 65.22 |
Proposed features/External links
Official website
website=[URL URL] website=
website:title=URL, and website=URL for the first one?
e.g.
- building=yes
- type=house
- website=
- railway=station
- name=Euston
- website:official=
- website:departures=
- Why not rather more generic url instead of website? Then there could be a url:photo, url:logo, or even url:webcam (in addition to default url, url:official, url:departures). This woulf allow images to be treated differently (eg in a baloon popup, or rendered right into a map at a high enough zoom level) -- Stefanb 15:32, 7 June 2007 (BST)
- Maybe considering website:wikipedia= is good too. --Messi 19:09, 19 July 2008 (UTC)
- I'd also prefer url or even uri over website. It's more flexible to be expanded in the future by not breaking it's meaning (think of e.g. uri:contact=mailto:restaurant@example.com. -- MapFlea 07:43, 12 November 2008 (UTC)
- I'm definitely in favor of a url:website / url:wikipedia / url:contact approach. It's more flexible and expandable. we have on main tag (url) that can be used as namespace/prefix. the default value of url=foobar should be handled as main website of an object. additional uri sources can be added using a set of predefined extensions like url:wikipedia, url:contact, url:webcam. a direct link to a page containing opening hours instead should be tagged with opening_hours:url=. that would follow the logic: url per se points to a site with information about the main object. a suffic (url:wikipedia) defines the website and its content. if the website is about a specific attribute (e.g. opening hours) the corresponding tag prefix should be used (e.g. opening_hours:url, source:url (to define the source of information used to map the object)). I don't like the flat tagging approach (a wikipedia tag, a website tag and so), it just gets to crowded. adding structure using prefixes and suffixes keeps everything clean and logical. --Marc 11:49, 3 January 2010 (UTC)
Description of website links
Descriptive text for any website links
website_description=* website_description=National Trust website website_description:official=NetworkRail's website website_description:departures=Live departure boards
- I would rater use - as mentioned above - a namespace/suffix approach: using existing tags instead of creating new ones: website:description, website:title and so on. this approach is more generic and can be used to any attribute/tag, e.g. in this case the website tag. --Marc 11:51, 3 January 2010 (UTC)
Wikipedia
Travel guides
Should it be policy to only link to editable wiki links, or do people also want to include things like lonely planet?
3D models
Map links
Links to maps of the location which are available online with a Free license
- I don't we need this link as we can easily make the link to any online/offline we wish by knowing the langitude and latitude of the location (city, building, ...)
Considerations
Given wikipedia's problems with external links (people adding hundreds of their own for use in advertising) we should have some policy saying what's allowed. e.g. wikimedia projects and official websites only?
- we can choose the allowed title for website:title=URL. (e.g: official, logo, wikipedia, ...) --Messi 19:09, 19 July 2008 (UTC)
- this would not solve the advertisement spam problem. For example: do we want a "official" link for every supermarket of a big chain ? Or personal websites of people living in a certain house? According to my experience with english and german language wikipedia every possibility to place weblinks will be used for spamming too. This in return will lead to the need of administrative countermeasures to kick those guys and their spam out. So if we would like to have weblinks, there should be very very well defined criteria which links are allowed and which not. In my opinion we should only allow following link types:
- wikipedia pages (in 99% of all cases this should be sufficient to get additional information)
- wikimedia commons category pages directly connected with the object
- one official web links of the object (but not of other entities the object belongs to or which belong to the object). e.g. if a departement store has its own website, it can be linked. If the store is part of a chain, the chains website must not be linked. And if a factory is shown in the map, the official link of this factory is acceptable, links to trade mark websites of the factory are not. -- Andreas König 13:36, 28 December 2009 (UTC)
- Regarding number of links allowed--it should just be policed. If a particular building has a wiki entry on say, some student-run wiki about a university campus, but that content doesn't pass the notability requirement for Wikipedia, it still deserves to be linked to, IMO. Whatever rules are made probably won't be complete. I also think it would be wrong to say businesses deserve official links to their sites but individuals don't merit links to their blogs. With the rel=nofollow tag implemented on openstreetmap.org, at least bad guys won't be at it in order to improve their PageRank. -Cov 00:20, 29 January 2010 (UTC) | https://wiki.openstreetmap.org/wiki/Proposed_features/External_links | CC-MAIN-2021-04 | refinedweb | 879 | 53.41 |
in reply to
Functional take 2
in thread Why I like functional programming
Hello!
Perhaps the following page will offer a shorter
introduction into a functional programming in Perl.
The page shows off the fixpoint combinator, the fold combinator,
closures, higher-order functions, and implementations of a
a few algorithms on lists. It's noteworthy how
easy it was to translate these algorithms from Scheme to Perl.
Even the location of parentheses is sometimes similar to
that in Scheme notation. The page finishes with examples of
improper and circular lists.
As to parsing of XML in a pure functional style,
a SSAX parser may serve as an example:
The parser fully supports XML namespaces, character and
parsed entities, xml:space, CDATA sections, nested entities,
attribute value normalization, etc. The parser
offers support for XML validation, to a full or
a partial degree. It does not use assignments at all.
It could be used to parse HTML too.
edited: Fri Jan 31 15:23:34 2003
by jeffa - linkafied those URL's
Lots
Some
Very few
None
Results (222 votes),
past polls | http://www.perlmonks.org/index.pl?node_id=60971 | CC-MAIN-2016-07 | refinedweb | 181 | 63.8 |
字符集,字符的码,编码方式
字符集,字符的码,编码方式 --一直没有搞清楚它们之间的区别和联系。最近工作做连续遇到这方面的困扰,终于决心,把它们搞清楚了~~!
原文地址:
如果你看明白来,不妨为浏览器做个编码自动识别程序~~!Mozilla的对应程序地址为:
A tutorial on character code issues
Contents
- The basics
- Definitions: character repertoire, character code, character encoding
- Examples of character codes
- Good old ASCII
- Another example: ISO Latin 1 alias ISO 8859-1
- More examples: the Windows character set(s)
- The ISO 8859 family
- Other "extensions to ASCII"
- Other "8-bit codes"
- ISO 10646 (UCS) and Unicode
- More about the character concept
- The Unicode view
- Control characters (control codes)
- A glyph - a visual appearance
- What's in a name?
- Glyph variation
- Fonts
- Identity of characters: a matter of definition
- Failures to display a character
- Linear text vs. mathematical notations
- Compatibility characters
- Compositions and decompositions
- Typing characters
- Just pressing a key?
- Program-specific methods for typing characters
- "Escape" notations ("meta notations") for characters
- How to mention (identify) a character
- Information about encoding
- The need for information about encoding
- The MIME solution
- An auxiliary encoding: Quoted-Printable (QP)
- How MIME should work in practice
- Problems with implementations - examples
- Practical conclusions
- Further reading. This document in itself does not contain solutions to practical problems with character codes (but see section Further reading ). Rather, it gives background information needed for understanding what solutions there might be, what the different solutions do - and what's really the problem in the first place.
If you are looking for some quick help in using a large character repertoire in HTML authoring, see the document Using national and special characters in HTML .
Several technical terms related to character sets (e.g. glyph, encoding) can be difficult to understand, due to various confusions and due to having different names in different languages and contexts. The EuroDicAutom online database can be useful: it contains translations and definitions for several technical terms used here.
The basics
In computers and in data transmission between them, i.e. in digital data processing and transfer, data is internally presented as octets, as a rule. An octet is a small unit of data with a numerical value between 0 and 255, inclusively. The numerical values are presented in the normal (decimal) notation here, but notice that other presentations are used too, especially octal (base 8) or hexadecimal (base 16) notation. Octets are often called bytes , but in principle, octet is a more definite concept than byte . Internally, octets consist of eight bit s (hence the name, from Latin octo 'eight'), but we need not go into bit level here. However, you might need to know what the phrase "first bit set" or "sign bit set" means, since it is often used. In terms of numerical values of octets, it means that the value is greater than 127. In various contexts, such octets are sometimes interpreted as negative numbers, and this may cause various problems.
Different conventions can be established as regards to how an octet or a sequence of octets presents some data. For instance, four consecutive octets often form a unit that presents a real number according to a specific standard. We are here interested in the presentation of character data (or string data; a string is a sequence of characters) only.
In the simplest case, which is still widely used, one octet corresponds to one character according to some mapping table (encoding). Naturally, this allows at most 256 different characters being represented. There are several different encodings, such as the well-known ASCII encoding and the ISO Latin family of encodings. The correct interpretation and processing of character data of course requires knowledge about the encoding used. For HTML documents, such information should be sent by the Web server along with the document itself, using so-called HTTP headers (cf. to MIME headers ).
Previously the ASCII encoding was usually assumed by default (and it is still very common). Nowadays ISO Latin 1 , which can be regarded as an extension of ASCII , is often the default. The current trend is to avoid giving such a special position to ISO Latin 1 among the variety of encodings.
Definitions
The following definitions are not universally accepted and used. In fact, one of the greatest causes of confusion around character set issues is that terminology varies and is sometimes misleading.
- character repertoire
- A set of distinct characters. No specific internal presentation in computers or data transfer is assumed. The repertoire per se does not even define an ordering for the characters; ordering for sorting and other purposes is to be specified separately. A character repertoire is usually defined by specifying names of characters and a sample (or reference) presentation of characters in visible form. Notice that a character repertoire may contain characters which look the same in some presentations but are regarded as logically distinct, such as Latin uppercase A, Cyrillic uppercase A, and Greek uppercase alpha. For more about this, see a discussion of the character concept later in this document.
- character code
- A mapping, often presented in tabular form, which defines a one-to-one correspondence between characters in a character repertoire and a set of nonnegative integers. That is, it assigns a unique numerical code, a code position , to each character in the repertoire. In addition to being often presented as one or more tables, the code as a whole can be regarded as a single table and the code positions as indexes. As synonyms for "code position", the following terms are also in use: code number , code value , code element , code point , code set value - and just code . Note: The set of nonnegative integers corresponding to characters need not consist of consecutive numbers; in fact, most character codes have "holes", such as code positions reserved for control functions or for eventual future use to be defined later.
- character encoding
- A method (algorithm) for presenting characters in digital form by mapping sequences of code numbers of characters into sequences of octets . In the simplest case, each character is mapped to an integer in the range 0 - 255 according to a character code and these are used as such as octets. Naturally, this only works for character repertoire s with at most 256 characters. For larger sets, more complicated encodings are needed. Encodings have names, which can be registered .
Notice that a character code assumes or implicitly defines a character repertoire. A character encoding could, in principle, be viewed purely as a method of mapping a sequence of integers to a sequence of octets. However, quite often an encoding is specified in terms of a character code (and the implied character repertoire). The logical structure is still the following:
- A character repertoire specifies a collection of characters, such as "a", "!", and "ä".
- A character code defines numeric codes for characters in a repertoire. For example, in the ISO 10646 character code the numeric codes for "a", "!", "ä", and "‰" (per mille sign) are 97, 33, 228, and 8240. (Note: Especially the per mille sign, presenting 0 /00 as a single character, can be shown incorrectly on display or on paper. That would be an illustration of the symptoms of the problems we are discussing.)
- A character encoding defines how sequences of numeric codes are presented as (i.e., mapped to) sequences of octets. In one possible encoding for ISO 10646 , the string a!ä‰ is presented as the following sequence of octets (using two octets for each character): 0, 97, 0, 33, 0, 228, 32, 48.
For a more rigorous explanation of these basic concepts, see Unicode Technical Report #17: Character Encoding Model .
The phrase character set is used in a variety of meanings. It might denotes just a character repertoire but it may also refer to a character code, and quite often a particular character encoding is implied too.
Unfortunately the word charset is used to refer to an encoding, causing much confusion. It is even the official term to be used in several contexts by Internet protocols, in MIME headers.
Quite often the choice of a character repertoire, code, or encoding is presented as the choice of a language . For example, Web browsers typically confuse things quite a lot in this area. A pulldown menu in a program might be labeled "Languages", yet consist of character encoding choices (only). A language setting is quite distinct from character issues, although naturally each language has its own requirements on character repertoire. Even more seriously, programs and their documentation very often confuse the above-mentioned issues with the selection of a font .
Examples of character codes
Good old ASCII
The basics of ASCII
The name ASCII , originally an abbreviation for "American Standard Code for Information Interchange", denotes an old character repertoire , code , and encoding .
Most character codes currently in use contain ASCII as their subset in some sense. ASCII is the safest character repertoire to be used in data transfer. However, not even all ASCII characters are "safe"!
ASCII has been used and is used so widely that often the word ASCII refers to "text" or "plain text" in general, even if the character code is something else! The words "ASCII file" quite often mean any text file as opposite to a binary file.
The definition of ASCII also specifies a set of control codes ("control characters") such as linefeed (LF) and escape (ESC). But the character repertoire proper, consisting of the printable characters of ASCII, is the following (where the first item is the blank, or space, character):
! " # $ % & ' ( ) * + , - . / appearance of characters varies, of course, especially for some special characters. Some of the variation and other details are explained in The ISO Latin 1 character repertoire - a description with usage notes .
A formal view on ASCII
The character code
defined by the ASCII standard is the
following: code values are assigned to characters consecutively in the
order in which the characters are listed above (rowwise), starting from
32 (assigned to the blank) and ending up with 126 (assigned to the
tilde character
~
). Positions 0 through 31 and 127 are reserved for control codes
. They have standardized names and descriptions
, but in fact their usage varies a lot.
The character encoding specified by the ASCII standard is very simple, and the most obvious one for any character code where the code numbers do not exceed 255: each code number is presented as an octet with the same value.
Octets 128 - 255 are not used in ASCII. (This allows programs to use the first, most significant bit of an octet as a parity bit, for example.)
National variants of ASCII
There are several national variants of ASCII. In such variants, some special characters have been replaced by national letters (and other symbols). There is great variation here, and even within one country and for one language there might be different variants. The original ASCII is therefore often referred to as US-ASCII ; the formal standard (by ANSI ) is ANSI X3.4-1986 .
The phrase "original ASCII" is perhaps not quite adequate, since the creation of ASCII started in late 1950s, and several additions and modifications were made in the 1960s. The 1963 version had several unassigned code positions. The ANSI standard, where those positions were assigned, mainly to accommodate lower case letters, was approved in 1967/1968, later modified slightly. For the early history, including pre-ASCII character codes, see Steven J. Searle's A Brief History of Character Codes in North America, Europe, and East Asia and Tom Jennings' ASCII: American Standard Code for Information Infiltration . See also Jim Price 's ASCII Chart , Mary Brandel's 1963: ASCII Debuts , and the computer history documents , including the background and creation of ASCII, written by Bob Bemer , "father of ASCII".
The international standard ISO 646 defines a character set similar to US-ASCII but with code positions corresponding to US-ASCII characters @[/]{|} as "national use positions". It also gives some liberties with characters #$^`~ . The standard also defines "international reference version (IRV)", which is (in the 1991 edition of ISO 646) identical to US-ASCII. Ecma International has issued the ECMA-6 standard, which is equivalent to ISO 646 and is freely available on the Web.
Within the framework of ISO 646, and partly otherwise too, several "national variants of ASCII" have been defined, assigning different letters and symbols to the "national use" positions. Thus, the characters that appear in those positions - including those in US-ASCII - are somewhat "unsafe" in international data transfer, although this problem is losing significance. The trend is towards using the corresponding codes strictly for US-ASCII meanings; national characters are handled otherwise, giving them their own, unique and universal code positions in character codes larger than ASCII. But old software and devices may still reflect various "national variants of ASCII".
The following table lists ASCII characters which might be replaced by other characters in national variants of ASCII. (That is, the code positions of these US-ASCII characters might be occupied by other characters needed for national use.) The lists of characters appearing in national variants are not intended to be exhaustive, just typical examples .
Almost all of the characters used in the national variants have been incorporated into ISO Latin 1 . Systems that support ISO Latin 1 in principle may still reflect the use of national variants of ASCII in some details; for example, an ASCII character might get printed or displayed according to some national variant. Thus, even "plain ASCII text" is thereby not always portable from one system or application to another.
More information about national variants and their impact:
- Johan van Wingen : International standardization of 7-bit codes, ISO 646 ; contains a comparison table of national variants
- Digression on national 7-bit codes by Alan J. Flavell
- The ISO 646 page by Roman Czyborra
- Character tables by Koichi Yasuoka .
Subsets of ASCII for safety
Mainly due to the "national variants" discussed above, some characters are less "safe" than other, i.e. more often transferred or interpreted incorrectly.
In addition to the letters of the English alphabet ("A" to "Z", and "a" to "z"), the digits ("0" to "9") and the space (" "), only the following characters can be regarded as really "safe" in data transmission:
! " % & ' ( ) * + , - . / : ; < = > ?
Even these characters might eventually be interpreted wrongly by the recipient, e.g. by a human reader seeing a glyph for "&" as something else than what it is intended to denote, or by a program interpreting "<" as starting some special markup , "?" as being a so-called wildcard character, etc.
When you need to name things (e.g. files, variables, data fields, etc.), it is often best to use only the characters listed above, even if a wider character repertoire is possible. Naturally you need to take into account any additional restrictions imposed by the applicable syntax. For example, the rules of a programming language might restrict the character repertoire in identifier names to letters, digits and one or two other characters.
The misnomer "8-bit ASCII"
Sometimes the phrase "8-bit ASCII" is used. It follows from the discussion above that in reality ASCII is strictly and unambiguously a 7-bit code in the sense that all code positions are in the range 0 - 127.
It is a misnomer used to refer to various character codes which are extensions of ASCII in the following sense: the character repertoire contains ASCII as a subset, the code numbers are in the range 0 - 255, and the code numbers of ASCII characters equal their ASCII codes.
Another example: ISO Latin 1 alias ISO 8859-1
The ISO 8859-1 standard (which is part of the ISO 8859 family of standards) defines a character repertoire identified as "Latin alphabet No. 1", commonly called "ISO Latin 1", as well as a character code for it. The repertoire contains the ASCII repertoire as a subset, and the code numbers for those characters are the same as in ASCII. The standard also specifies an encoding , which is similar to that of ASCII: each code number is presented simply as one octet.
In addition to the ASCII characters, ISO Latin 1 contains various accented characters and other letters needed for writing languages of Western Europe, and some special characters. These characters occupy code positions 160 - 255, and they
Notes:
- The first of the characters above appears as space; it is the so-called no-break space .
- The presentation of some characters in copies of this document may be defective e.g. due to lack of font support. You may wish to compare the presentation of the characters on your browser with the character table presented as a GIF image in the famous ISO 8859 Alphabet Soup document. (In text only mode, you may wish to use my simple table of ISO Latin 1 which contains the names of the characters.)
- Naturally, the appearance of characters varies from one font to another.
See also: The ISO Latin 1 character repertoire - a description with usage notes , which presents detailed characterizations of the meanings of the characters and comments on their usage in various contexts.
More examples: the Windows character set(s)
In ISO 8859-1 , code positions 128 - 159 are explicitly reserved for control purposes ; they "correspond to bit combinations that do not represent graphic characters". The so-called Windows character set (WinLatin1, or Windows code page 1252 , to be exact) uses some of those positions for printable characters. Thus, the Windows character set is not identical with ISO 8859-1 . It is, however, true that the Windows character set is much more similar to ISO 8859-1 than the so-called DOS character sets are. The Windows character set is often called "ANSI character set", but this is seriously misleading. It has not been approved by ANSI . (Historical background: Microsoft based the design of the set on a draft for an ANSI standard. A glossary by Microsoft explicitly admits this.)
Note that programs used on Windows systems may use a DOS
character set; for example, if you create a text file using a Windows
program and then use the
type
command on DOS prompt to
see its content, strange things may happen, since the DOS command
interprets the data according to a DOS character code.
In the Windows character set, some positions in the range 128 - 159 are assigned to printable characters, such as "smart quotes", em dash, en dash, and trademark symbol. Thus, the character repertoire is larger than ISO Latin 1 . The use of octets in the range 128 - 159 in any data to be processed by a program that expects ISO 8859-1 encoded data is an error which might cause just anything. They might for example get ignored, or be processed in a manner which looks meaningful, or be interpreted as control characters . See my document On the use of some MS Windows characters in HTML for a discussion of the problems of using these characters.
The Windows character set exists in different variations, or "code pages"
(CP), which generally differ from the corresponding ISO 8859 standard
so that it contains same characters in positions 128 - 159 as code page
1252. (However, there are some more differences between ISO 8859-7 and win-1253 (WinGreek)
.) See Code page &Co.
by Roman Czyborra and Windows codepages
by Microsoft
. See also CP to Unicode mappings
. What we have discussed here is the most usual one, resembling ISO 8859-1. Its status in the officially IANA registry
was unclear; an encoding had been registered under the name
ISO-8859-1-Windows-3.1-Latin-1
by Hewlett-Packard (!), assumably intending to refer to WinLatin1, but in 1999-12 Microsoft finally registered
it under the name
windows-1252
. That name has in fact been widely used for it. (The name
cp-1252
has been used too, but it isn't officially registered even as an alias name.)
The ISO 8859 family
There are several character codes which are extensions to ASCII in the same sense as ISO 8859-1 and the Windows character set .
ISO 8859-1 itself is just a member of the ISO 8859 family of character codes, which is nicely overviewed in Roman Czyborra's famous document The ISO 8859 Alphabet Soup . The ISO 8859 codes extend the ASCII repertoire in different ways with different special characters (used in different languages and cultures). Just as ISO 8859-1 contains ASCII characters and a collection of characters needed in languages of western (and northern) Europe, there is ISO 8859-2 alias ISO Latin 2 constructed similarly for languages of central/eastern Europe, etc. The ISO 8859 character codes are isomorphic in the following sense: code positions 0 - 127 contain the same character as in ASCII, positions 128 - 159 are unused (reserved for control characters ), and positions 160 - 255 are the varying part, used differently in different members of the ISO 8859 family.
The ISO 8859 character codes are normally presented using the obvious encoding: each code position is presented as one octet. Such encodings have several alternative names in the official registry of character encodings , but the preferred ones are of the form ISO-8859-n .
Although ISO 8859-1 has been a de facto default encoding in many contexts, it has in principle no special role. ISO 8859-15 alias ISO Latin 9 (!) was expected to replace ISO 8859-1 to a great extent, since it contains the politically important symbol for euro , but it seems to have little practical use.
The following table lists the ISO 8859 alphabets, with links to more detailed descriptions. There is a separate document Coverage of European languages by ISO Latin alphabets which you might use to determine which (if any) of the alphabets are suitable for a document in a given language or combination of languages. My other material on ISO 8859 contains a combined character table, too.
Notes: ISO 8859-n is Latin alphabet no. n for n =1,2,3,4, but this correspondence is broken for the other Latin alphabets. ISO 8859-16 is for use in Albanian, Croatian, English, Finnish, French, German, Hungarian, Irish Gaelic (new orthography), Italian, Latin, Polish, Romanian, and Slovenian. In particular, it contains letters s and t with comma below, in order to address an issue of writing Romanian . See the ISO/IEC JTC 1/ SC 2 site for the current status and proposed changes to the ISO 8859 set of standards.
Other "extensions to ASCII"
In addition to the codes discussed above, there are other extensions to ASCII which utilize the code range 0 - 255 ("8-bit ASCII codes" ), such as
- DOS character codes , or "code pages" (CP)
- In MS DOS systems, different character codes are used; they are called "code pages". The original American code page was CP 437, which has e.g. some Greek letters, mathematical symbols, and characters which can be used as elements in simple pseudo-graphics. Later CP 850 became popular, since it contains letters needed for West European languages - largely the same letters as ISO 8859-1 , but in different code positions. See DOS code page to Unicode mapping tables for detailed information. Note that DOS code pages are quite different from Windows character codes , though the latter are sometimes called with names like
cp-1252(=
windows-1252)! For further confusion, Microsoft now prefers to use the notion "OEM code page" for the DOS character set used in a particular country.
- Macintosh character code
- On the Macs , the character code is more uniform than on PCs (although there are some national variants ). The Mac character repertoire is a mixed combination of ASCII, accented letters, mathematical symbols, and other ingredients. See section Text in Mac OS 8 and 9 Developer Documentation .
Notice that many of these are very different from ISO 8859-1. They may have different character repertoires, and the same character often has different code values in different codes. For example, code position 228 is occupied by ä (letter a with dieresis, or umlaut) in ISO 8859-1, by ð (Icelandic letter eth) in HP's Roman-8 , by õ (letter o with tilde) in DOS code page 850, and per mille sign (‰) in Macintosh character code.
For information about several code pages, see Code page &Co. by Roman Czyborra. See also his excellent description of various Cyrillic encodings , such as different variants of KOI-8; most of them are extensions to ASCII, too.
In general, full conversions between the character codes mentioned above are not possible. For example, the Macintosh character repertoire contains the Greek letter pi, which does not exist in ISO Latin 1 at all. Naturally, a text can be converted (by a simple program which uses a conversion table) from Macintosh character code to ISO 8859-1 if the text contains only those characters which belong to the ISO Latin 1 character repertoire. Text presented in Windows character code can be used as such as ISO 8859-1 encoded data if it contains only those characters which belong to the ISO Latin 1 character repertoire.
Other "8-bit codes"
All the character codes discussed above are "8-bit codes", eight bits are sufficient for presenting the code numbers and in practice the encoding (at least the normal encoding) is the obvious (trivial) one where each code position (thereby, each character) is presented as one octet (byte). This means that there are 256 code positions, but several positions are reserved for control codes or left unused (unassigned, undefined).
Although currently most "8-bit codes" are extensions to ASCII in the sense described above, this is just a practical matter caused by the widespread use of ASCII . It was practical to make the "lower halves" of the character codes the same, for several reasons.
The standards ISO 2022 and ISO 4873 define a general framework for 8-bit codes (and 7-bit codes) and for switching between them. One of the basic ideas is that code positions 128 - 159 (decimal) are reserved for use as control codes ("C1 controls"). Note that the Windows character sets do not comply with this principle.
To illustrate that other kinds of 8-bit codes can be defined than extensions to Ascii, we briefly consider the EBCDIC code, defined by IBM and once in widespread use on "mainframes " (and still in use). EBCDIC contains all ASCII characters but in quite different code positions . As an interesting detail, in EBCDIC normal letters A - Z do not all appear in consecutive code positions. EBCDIC exists in different national variants (cf. to variants of ASCII ). For more information on EBCDIC, see section IBM and EBCDIC in Johan W. van Wingen 's Character sets. Letters, tokens and codes. .
ISO 10646, UCS, and Unicode
ISO 10646, the standard
ISO 10646 (officially: ISO/IEC 10646) is an international standard, by ISO and IEC . It defines UCS, Universal Character Set, which is a very large and growing character repertoire , and a character code for it. Currently tens of thousands of characters have been defined, and new amendments are defined fairly often. It contains, among other things, all characters in the character repertoires discussed above. For a list of the character blocks in the repertoire, with examples of some of them, see the document UCS (ISO 10646, Unicode) character blocks .
The number of the standard intentionally reminds us of 646, the number of the ISO standard corresponding to ASCII .
Unicode, the more practical definition of UCS
Unicode is a standard , by the Unicode Consortium , which defines a character repertoire and character code intended to be fully compatible with ISO 10646, and an encoding for it. ISO 10646 is more general (abstract) in nature, whereas Unicode "imposes additional constraints on implementations to ensure that they treat characters uniformly across platforms and applications", as they say in section Unicode & ISO 10646 of the Unicode FAQ .
Unicode was originally designed to be a 16-bit code, but it was extended so that currently code positions are expressed as integers in the hexadecimal range 0..10FFFF (decimal 0..1 114 111). That space is divided into 16-bit "planes". Until recently, the use of Unicode has mostly been limited to "Basic Multilingual Plane (BMP)" consisting of the range 0..FFFF.
The ISO 10646 and Unicode character repertoire can be regarded as a superset of most character repertoires in use. However, the code positions of characters vary from one character code to another.
"Unicode" is the commonly used name
In practice, people usually talk about Unicode rather than ISO 10646, partly because we prefer names to numbers, partly because Unicode is more explicit about the meanings of characters, partly because detailed information about Unicode is available on the Web (see below).
Unicode version 1.0 used somewhat different names for some characters than ISO 10646. In Unicode version, 2.0, the names were made the same as in ISO 10646. New versions of Unicode are expected to add new characters mostly. Version 3.0 , with a total number of 49,194 characters (38,887 in version 2.1), was published in February 2000, and version 4.0 has 96,248 characters.
Until recently, the ISO 10646 standard had not been put onto the Web. It is now available as a large (80 megabytes) zipped PDF file via the Publicly Available Standards page of ISO/IEC JTC1. page. It is available in printed form from ISO member bodies . But for most practical purposes, the same information is in the Unicode standard.
General information about ISO 10646 and Unicode
For more information, see
- Unicode FAQ by the Unicode Consortium. It is fairly large but divided into sections rather logically, except that section Basic Questions would be better labeled as "Miscellaneous".
- Roman Czyborra's material on Unicode, such as Why do we need Unicode? and Unicode's characters
- Olle Järnefors: A short overview of ISO/IEC 10646 and Unicode . Very readable and informative, though somewhat outdated e.g. as regards to versions of Unicode . (It also contains a more detailed technical description of the UTF encodings than those given above.)
- Markus Kuhn : UTF-8 and Unicode FAQ for Unix/Linux . Contains helpful general explanations as well as practical implementation considerations.
- Steven J. Searle: A Brief History of Character Codes in North America, Europe, and East Asia . Contains a valuable historical review, including critical notes on the "unification" of Chinese, Japanese and Korean (CJK) characters.
- Alan Wood : Unicode and Multilingual Editors and Word Processors ; some software tools for actually writing Unicode; I'd especially recommend taking a look at the free UniPad editor (for Windows).
- Jukka K. Korpela: Unicode Explained . O’Reilly, 2006.
- Tony Graham: Unicode: A Primer . Wiley, 2000.
- Richard Gillam: Unicode Demystified: A Practical Programmer's Guide to the Encoding Standard . Addison-Wesley, 2002.
Reference information about ISO 10646 and Unicode
- Unicode 4.0 online : the standard itself, mostly in PDF format; it's partly hard to read, so you might benefit from my Guide to the Unicode standard , which briefly explains the structure of the standard and how to find information about a particular character there
- Unicode et ISO 10646 en français , the Unicode standard in French
- Unicode charts , containing names , code positions , and representative glyphs for the characters and notes on their usage. Available in PDF format, containing the same information as in the corresponding parts of the printed standard. (The charts were previously available in faster-access format too, as HTML documents containing the charts as GIF images. But this version seems to have been removed.)
- Unicode database , a large (over 460 000 octets ) plain text file listing Unicode character code positions , names , and defined character properties in a compact notation
- Informative annex E to ISO 10646-1:1993 (i.e., old version!), which lists, in alphabetic order, all character names (and the code positions ) except Hangul and CJK ideographs; useful for finding out the code position when you know the (right!) name of a character.
- An online character database by Indrek Hein at the Institute of the Estonian Language . You can e.g. search for Unicode characters by name or code position and get the Unicode equivalents of characters in many widely used character sets.
- How to find an &#number; notation for a character ; contains some additional information on how to find a Unicode number for a character
Encodings for Unicode
Originally, before extending the code range past 16 bits, the "native" Unicode encoding was UCS-2 , which presents each code number as two consecutive octets m and n so that the number equals 256m +n . This means, to express it in computer jargon, that the code number is presented as a two-byte integer . According to the Unicode consortium, the term UCS-2 should now be avoided, as it is associated with the 16-bit limitations.
UTF-32 encodes each code position as a 32-bit binary integer, i.e. as four octets. This is a very obvious and simple encoding. However, it is inefficient in terms of the number of octets needed. If we have normal English text or other text which contains ISO Latin 1 characters only, the length of the Unicode encoded octet sequence is four times the length of the string in ISO 8859-1 encoding. UTF-32 is rarely used, except perhaps in internal operations (since it is very simple for the purposes of string processing).
UTF-16 represents each code position in the Basic Multilingual Plane as two octets. Other code positions are presented using so-called surrogate pairs , utilizing some code positions in the BMP reserved for the purpose. This, too, is a very simple encoding when the data contains BMP characters only.
Unicode can be, and often is, encoded in other ways, too, such as the following encodings:
-.
- UTF-7
- Each character code is presented as a sequence of one or more octets in the range 0 - 127 ("bytes with most significant bit set to 0", or "seven-bit bytes", hence the name). Most ASCII characters are presented as such, each as one octet, but for obvious reasons some octet values must be reserved for use as "escape" octets, specifying the octet together with a certain number of subsequent octets forms a multi-octet encoded presentation of one character. There is an example of using UTF-7 later in this document.
IETF Policy on Character Sets and Languages (RFC 2277 ) clearly favors UTF-8 . It requires support to it in Internet protocols (and doesn't even mention UTF-7). Note that UTF-8 is efficient, if the data consists dominantly of ASCII characters with just a few "special characters" in addition to them, and reasonably efficient for dominantly ISO Latin 1 text.
Support to Unicode characters
The implementation of Unicode support is a long and mostly gradual process. Unicode can be supported by programs on any operating systems, although some systems may allow much easier implementation than others; this mainly depends on whether the system uses Unicode internally so that support to Unicode is "built-in". addition to international standards, there are company policies which define various subsets of the character repertoire. A practically important one is Microsoft's "Windows Glyph List 4" (WGL4) , or "PanEuropean" character set, characterized on Microsoft's page Character sets and codepages and excellently listed on page Using Special Characters from Windows Glyph List 4 (WGL4) in HTML by Alan Wood .
The
U+
nnnn
notation
Unicode characters are often referred to using a notation of the form
U+
nnnn
where nnnn
is a four-digit hexadecimal
notation of the code value. For example,
U+0020
means the space character (with code value 20 in hexadecimal, 32 in
decimal). Notice that such notations identify a character through its
Unicode code value, without referring to any particular encoding. There
are other ways to mention (identify) a character
, too.
More about the character concept
An "A" (or any other character) is something like a Platonic entity: it is the idea of an "A" and not the "A" itself.-- Michael E. Cohen: Text and Fonts in a Multi-lingual Cross-platform World .
The character concept is very fundamental for the issues discussed here but difficult to define exactly. The more fundamental concepts we use, the harder it is to give good definitions. (How would you define "life"? Or "structure"?) Here we will concentrate on clarifying the character concept by indicating what it does not imply.
The Unicode view
The Unicode standard describes characters as "the smallest components of written language that have semantic value", which is somewhat misleading. A character such as a letter can hardly be described as having a meaning (semantic value) in itself. Moreover, a character such as ú (letter u with acute accent), which belongs to Unicode, can often be regarded as consisting of smaller components: a letter and a diacritic . And in fact the very definition of the character concept in Unicode is the following:
abstract character : a unit of information used for the organization, control, or representation of textual data.
?)
Control characters (control codes)
The rôle of the so-called control characters in character codes is somewhat obscure. Character codes often contain code positions which are not assigned to any visible character but reserved for control purposes. For example, in communication between a terminal and a computer using the ASCII code, the computer could regard octet 3 as a request for terminating the currently running process. Some older character code standards contain explicit descriptions of such conventions whereas newer standards just reserve some positions for such usage, to be defined in separate standards or agreements such as "C0 controls" (tabulated in my document on ASCII control codes ) and "C1 controls" , or specifically ISO 6429 . And although the definition quoted above suggests that "control characters" might be regarded as characters in the Unicode terminology, perhaps it is more natural to regard them as control codes .
Control codes can be used for device control such as cursor movement, page eject, or changing colors. Quite often they are used in combination with codes for graphic characters, so that a device driver is expected to interpret the combination as a specific command and not display the graphic character(s) contained in it. For example, in the classical VT100 controls , ESC followed by the code corresponding to the letter "A" or something more complicated (depending on mode settings) moves the cursor up. To take a different example, the Emacs editor treats ESC A as a request to move to the beginning of a sentence. Note that the ESC control code is logically distinct from the ESC key in a keyboard, and many other things than pressing ESC might cause the ESC control code to be sent. Also note that phrases like "escape sequences" are often used to refer to things that don't involve ESC at all and operate at a quite different level. Bob Bemer , the inventor of ESC, has written a "vignette" about it: That Powerful ESCAPE Character -- Key and Sequences .
One possible form of device control is changing the way a device interprets the data (octets) that it receives. For example, a control code followed by some data in a specific format might be interpreted so that any subsequent octets to be interpreted according to a table identified in some specific way. This is often called "code page switching", and it means that control codes could be used change the character encoding . And it is then more logical to consider the control codes and associated data at the level of fundamental interpretation of data rather than direct device control. The international standard ISO 2022 defines powerful facilities for using different 8-bit character codes in a document.
Widely used formatting control codes include carriage return (CR), linefeed (LF), and horizontal tab (HT), which in ASCII occupy code positions 13, 10, and 9. The names (or abbreviations) suggest generic meanings, but the actual meanings are defined partly in each character code definition, partly - and more importantly - by various other conventions "above" the character level. The "formatting" codes might be seen as a special case of device control, in a sense, but more naturally, a CR or a LF or a CR LF pair (to mention the most common conventions) when used in a text file simply indicates a new line. As regards to control codes used for line structuring, see Unicode technical report #13 Unicode Newline Guidelines . See also my Unicode line breaking rules: explanations and criticism . The HT (TAB) character is often used for real "tabbing" to some predefined writing position. But it is also used e.g. for indicating data boundaries, without any particular presentational effect, for example in the widely used "tab separated values" (TSV ) data format.
A control code, or a "control character" cannot have a graphic presentation (a glyph
) in the same way as normal characters have. However, in Unicode
there is a separate block Control Pictures
which contains characters that can be used to indicate the presence of a control code.
They are of course quite distinct from the control codes they symbolize -
U+241B
symbol for escape
is not the same as
U+001B
escape
!
On the other hand, a control code might occasionally be displayed, by
some programs, in a visible form, perhaps describing the control action
rather than the code. For example, upon receiving octet 3 in the
example situation above, a program might echo back (onto the terminal) ***
or INTERRUPT
or ^C
. All such notations are program-specific conventions. Some control codes are sometimes named
in a manner which seems to bind them to characters. In particular,
control codes 1, 2, 3, ... are often called control-A, control-B,
control-C, etc. (or CTRL-A or C-A or whatever). This is associated with
the fact that on many keyboards, control codes can be produced (for
sending to a computer) using a special key labeled "Control" or "Ctrl"
or "CTR" or something like that together with letter keys A, B, C, ...
This in turn is related to the fact that the code numbers
of characters and control codes have been assigned so that the code of "Control-X
" is obtained from the code of the upper case letter X
by a simple operation (subtracting 64 decimal). But such things imply
no real relationships between letters and control codes. The control
code 3, or "Control-C", is not
a variant of letter C at all, and its meaning is not associated with the meaning of C.
A glyph - a visual appearance
It is important to distinguish the character concept from the glyph concept. A glyph is a presentation of a particular shape which a character may have when rendered or displayed. For example, the character Z might be presented as a boldface Z or as an italic Z , and it would still be a presentation of the same character. On the other hand, lower-case z is defined to be a separate character - which in turn may have different glyph presentations.
This is ultimately a matter of definition : a definition of a character repertoire specifies the "identity" of characters, among other things. One could define a repertoire where uppercase Z and lowercase z are just two glyphs for the same character. On the other hand, one could define that italic Z is a character different from normal Z, not just a different glyph for it. In fact, in Unicode for example there are several characters which could be regarded as typographic variants of letters only, but for various reasons Unicode defines them as separate characters. For example, mathematicians use a variant of letter N to denote the set of natural numbers (0, 1, 2, ...), and this variant is defined as being a separate character ("double-struck capital N") in Unicode. There are some more notes on the identity of characters below.
The design of glyphs has several aspects, both practical and esthetic. For an interesting review of a major company's description of its principles and practices, see Microsoft's Character design standards (in its typography pages ).
Some discussions, such as ISO 9541-1 and ISO/EC TR 15285 , make a further distinction between "glyph image ", which is an actual appearance of a glyph, and "glyph", which is a more abstract notion. In such an approach, "glyph" is close to the concept of "character", except that a glyph may present a combination of several characters. Thus, in that approach, the abstract characters "f" and "i" might be represented using an abstract glyph that combines the two characters into a ligature, which itself might have different physical manifestations. Such approaches need to be treated as different from the issue of treating ligatures as (compatibility) characters.
What's in a name?
The names of characters are assigned identifiers rather than definitions. Typically the names are selected so that they contain only letters A - Z, spaces, and hyphens; often uppercase variant is the reference spelling of a character name. (See naming guidelines of the UCS .) The same character may have different names in different definitions of character repertoires. Generally the name is intended to suggest a generic meaning and scope of use. But the Unicode standard warns (mentioning full stop as an example of a character with varying usage):
A character may have a broader range of use than the most literal interpretation of its name might indicate; coded representation, name, and representative glyph need to be taken in context when establishing the semantics of a character.
Glyph variation
When a character repertoire is defined (e.g. in a standard), some particular glyph is often used to describe the appearance of each character, but this should be taken as an example only. The Unicode standard specifically says (in section 3.2) that great variation is allowed between "representative glyph" appearing in the standard and a glyph used for the corresponding character:
Consistency with the representative glyph does not require that the images be identical or even graphically similar; rather, it means that both images are generally recognized to be representations of the same character. Representing the character U+0061 Latin small letter a by the glyph "X" would violate its character identity.
Thus, the definition of a repertoire is not a matter of just listing glyphs , but neither is it a matter of defining exactly the meanings of characters. It's actually an exception rather than a rule that a character repertoire definition explicitly says something about the meaning and use of a character.
Possibly some specific properties (e.g. being classified as a letter or having numeric value in the sense that digits have) are defined, as in the Unicode database , but such properties are rather general in nature.
This vagueness may sound irritating, and it often is. But an essential point to be noted is that quite a lot of information is implied . You are expected to deduce what the character is, using both the character name and its representative glyph, and perhaps context too, like the grouping of characters under different headings like "currency symbols".
For more information on the glyph concept, see the document An operational model for characters and glyphs (ISO/IEC TR 15285:1998) and Apple's document Characters, Glyphs, and Related Terms
Fonts
A repertoire of glyph s comprises a font . In a more technical sense, as the implementation of a font, a font is a numbered set of glyphs. The numbers correspond to code positions of the characters (presented by the glyphs). Thus, a font in that sense is character code dependent. An expression like "Unicode font" refers to such issues and does not imply that the font contains glyphs for all Unicode characters.
It is possible that a font which is used for the presentation of some character repertoire does not contain a different glyph for each character. For example, although characters such as Latin uppercase A, Cyrillic uppercase A, and Greek uppercase alpha are regarded as distinct characters (with distinct code values) in Unicode , a particular font might contain just one A which is used to present all of them. (For information about fonts, there is a very large comp.font FAQ , but it's rather old: last update in 1996. The Finding Fonts for Internationalization FAQ is dated, too.)
You should never use a character just because it "looks right" or "almost right". Characters with quite different purposes and meanings may well look similar, or almost similar, in some font s at least. Using a character as a surrogate for another for the sake of apparent similarity may lead to great confusion. Consider, for example, the so-called sharp s (es.
For some more explanations on this, see section Why should we be so strict about meanings of characters? in The ISO Latin 1 character repertoire - a description with usage notes .
Identity of characters: a matter of definition
The identity of characters is defined by the definition of a character repertoire . Thus, it is not an absolute concept but relative to the repertoire; some repertoire might contain a character with mixed usage while another defines distinct characters for the different uses. For instance, the ASCII repertoire has a character called hyphen . It is also used as a minus sign (as well as a substitute for a dash, since ASCII contains no dashes). Thus, that ASCII character is a generic, multipurpose character, and one can say that in ASCII hyphen and minus are identical. But in Unicode , there are distinct characters named "hyphen" and "minus sign" (as well as different dash characters). For compatibility, the old ASCII character is preserved in Unicode, too (in the old code position, with the name hyphen-minus ).
Similarly, as a matter of definition, Unicode defines characters for micro sign , n-ary product , etc., as distinct from the Greek letters (small mu, capital pi, etc.) they originate from. This is a logical distinction and does not necessarily imply that different glyphs are used. The distinction is important e.g. when textual data in digital form is processed by a program (which "sees" the code values, through some encoding, and not the glyphs at all). Notice that Unicode does not make any distinction e.g. between the greek small letter pi (π), and the mathematical symbol pi denoting the well-known constant 3.14159... (i.e. there is no separate symbol for the latter). For the ohm sign (Ω), there is a specific character (in the Symbols Area), but it is defined as being canonical equivalent to greek capital letter omega (Ω), i.e. there are two separate characters but they are equivalent). On the other hand, it makes a distinction between greek capital letter pi (Π) and the mathematical symbol n-ary product (∏), so that they are not equivalents.
If you think this doesn't sound quite logical, you are not the only one to think so. But the point is that for symbols resembling Greek letter and used in various contexts, there are three possibilities in Unicode:
- the symbol is regarded as identical to the Greek letter (just as its particular usage )
- the symbol is included as a separate character but only for compatibility and as compatibility equivalent to the Greek letter
- the symbol is regarded as a completely separate character.
You need to check the Unicode references for information about each individual symbol. Note in particular that a query to Indrek Hein's online character database will give such information in the decomposition info part (but only in the entries for compatibility characters!). As a rough rule of thumb about symbols looking like Greek letters, mathematical operators (like summation) exist as independent characters whereas symbols of quantities and units (like pi and ohm) are equivalent or identical to Greek letters.
Failures to display a character
In addition to the fact that the appearance of a character may vary
, it is quite possible that some program fails to display a character at all
.
Perhaps the program cannot interpret a particular way in which the
character is presented. The reason might simply be that some program-specific way
had been used to denote the character and a different program is in use
now. (This happens quite often even if "the same" program is used; for
example, Internet Explorer version 4.0 is able to recognize
α
as denoting the Greek letter alpha (α) but IE 3.0 is not and displays
the notation literally.) And naturally it often occurs that a program
does not recognize the basic character encoding
of the data, either because it was not properly informed about the
encoding according to which the data should be interpreted or because
it has not been programmed to handle the particular encoding in use.
But even if a program recognizes some data as denoting a character, it may well be unable to display it since it lacks a glyph for it. Often it will help if the user manually checks the font settings, perhaps manually trying to find a rich enough font. (Advanced programs could be expected to do this automatically and even to pick up glyphs from different fonts, but such expectations are mostly unrealistic at present.) But it's quite possible that no such font can be found. As an important detail, the possibility of seeing e.g. Greek characters on some Windows systems depends on whether "internationalization support" has been installed.
A well-design program will in some appropriate way indicate its inability to display a character. For example, a small rectangular box, the size of a character, could be used to indicate that there is a character which was recognized but cannot be displayed. Some programs use a question mark, but this is risky - how is the reader expected to distinguish such usage from the real "?" character?
Linear text vs. mathematical notations
Although several character repertoires
, most notably that of ISO 10646 and Unicode
, contain mathematical
and other symbols, the presentation of mathematical formulas
is essentially not a character level problem. At the character level, symbols like integration or n
-ary summation can be defined and their code positions
and encodings
defined, and representative glyphs
shown, and perhaps some usage notes given. But the construction of real
formulas, e.g. for a definite integral of a function, is a different
thing, no matter whether one considers formulas abstractly (how the
structure of the formula is given) or presentationally (how the formula
is displayed on paper or on screen). To mention just a few approaches
to such issues, the TeX
system is widely used by mathematicians to produce high-quality presentations of formulas, and MathML
is an ambitious project for creating a markup language for mathematics so that both structure and presentation can be handled.
In other respects, too, character standards usually deal with plain text only. Other structural or presentational aspects, such as font variation, are to be handled separately. However, there are characters which would now be considered as differing in font only but for historical reasons regarded as distinct.
Compatibility characters
There is a large number of compatibility characters in ISO 10646 and Unicode which are variants of other characters. They were included for compatibility with other standards so that data presented using some other code can be converted to ISO 10646 and back without losing information. The Unicode standard says (in section 2.4):
Compatibility characters are those that would not have been encoded except for compatibility and round-trip convertibility with other standards. They are variants of characters that already have encodings as normal (that is, non-compatibility) characters in the Unicode Standard.
There is a large number of compatibility characters in the Compatibility Area but also scattered around the Unicode space.
Many, but not all, compatibility characters have compatibility decompositions . The Unicode database contains, for each character, a field (the sixth one) which specifies its eventual compatibility decomposition.
Thus, to take a simple example, superscript two (²) is an ISO Latin 1 character with its own code position in that standard. In ISO 10646 way of thinking, it would have been treated as just a superscript variant of digit two . But since the character is contained in an important standard, it was included into ISO 10646, though only as a "compatibility character". The practical reason is that now one can convert from ISO Latin 1 to ISO 10646 and back and get the original data. This does not mean that in the ISO 10646 philosophy superscripting (or subscripting, italics, bolding etc.) would be irrelevant; rather, they are to be handled at another level of data presentation, such as some special markup .
There is a document titled Unicode in XML and other Markup Languages
and produced jointly by the World Wide Web Consortium (W3C
) and the Unicode Consortium. It discusses, among other things, characters with compatibility mappings
:
should they be used, or should the corresponding non-compatibility
characters be used, perhaps with some markup and/or style sheet that
corresponds to the difference between them. The answers depend on the
nature of the characters and the available markup and styling
techniques. For example, for superscripts, the use of
sup
markup (as in HTML) is recommended, i.e.
<sup>2</sup>
is preferred over sup2; This is a debatable issue; see my notes on
sup
and
sub
markup
.
The definition of Unicode indicates our sample character, superscript two , as a compatibility character with the compatibility decomposition "<super> + 0032 2". Here "<super>" is a semi-formal way of referring to what is considered as typographic variation, in this case superscript style, and "0032 2" shows the hexadecimal code of a character and the character itself.
Some
compatibility characters
have compatibility decompositions consisting of several characters. Due to this property, they can be said to represent ligatures
in the broad sense. For example, latin small ligature fi
(
U+FB01
)
has the obvious decomposition consisting of letters "f" and "i". It is
still a distinct character in Unicode, but in the spirit of Unicode
,
we should not use it except for storing and transmitting existing data
which contains that character. Generally, ligature issues should be
handled outside the character level, e.g. selected automatically by a
formatting program or indicated using some suitable markup
.
Note that the word
ligature can be misleading when it appears in a character name. In particular, the old name of the character "æ", latin small letter ae
(
U+00E6
), is latin small ligature ae
, but it is not
a ligature of "a" and "e" in the sense described above. It has no compatibility decomposition.
In comp.fonts FAQ, General Info (2/6) section 1.15 Ligatures , the term ligature is defined as follows:
A ligature occurs where two or more letterforms are written or printed as a unit. Generally, ligatures replace characters that occur next to each other when they share common components. Ligatures are a subset of a more general class of figures called "contextual forms."
Compositions and decompositions
A diacritic mark , i.e. an additional graphic such as an accent or cedilla attached to a character, can be treated in different ways when defining a character repertoire. See some historical notes on this in my description of ISO Latin 1 . It also explains why the so-called spacing diacritic marks are of very limited usefulness, except when taken into some secondary usage.
In the Unicode approach, there are separate characters called combining diacritical marks . The general idea is that you can express a vast set of characters with diacritics by representing them so that a base character is followed by one or more (!) combining (non-spacing) diacritic marks. And a program which displays such a construct is expected to do rather clever things in formatting, e.g. selecting a particular shape for the diacritic according to the shape of the base character. This requires Unicode support at implementation level 3. Most programs currently in use are totally incapable of doing anything meaningful with combining diacritic marks. But there is some simple support to them in Internet Explorer for example, though you would need a font which contains the combining diacritics (such as Arial Unicode MS); then IE can handle simple combinations reasonably. See test page for combining diacritic marks in Alan Wood's Unicode resources . Regarding advanced implementation of the rendering of characters with diacritic marks, consult Unicode Technical Note #2, A General Method for Rendering Combining Marks .
Using combining diacritic marks, we have wide range of
possibilities. We can put, say, a diaeresis on a gamma, although "Greek
small letter gamma with diaeresis" does not exist as a character
. The combination
U+03B3 U+0308
consists of two characters, although its visual presentation looks like
a single character in the same sense as "ä" looks like a single
character. This is how your browser displays the combination: "γ̈". In
most browsing situations at present, it probably isn't displayed
correctly; you might see e.g. the letter gamma followed by a box that
indicates a missing glyph, or you might see gamma followed by a
diaeresis shown separately (¨).
Thus, in practical terms, in order to use a character with a diacritic mark, you should primarily try to find it as a precomposed
character. A precomposed character, also called composite character
or decomposable character
, is one that has a code position
(and thereby identity
)
of its own but is in some sense equivalent to a sequence of other
characters. There are lots of them in Unicode, and they cover the needs
of most (but not all) languages of the world, but not e.g. the
presentation of the International phonetic alphabet
by IPA
which, in its general form, requires several different diacritic marks. For example, the character latin small letter a with diaeresis
(
U+00E4
, ä) is, by Unicode definition, decomposable to the sequence of the two characters latin small letter a
(
U+0061
) and combining diaeresis
(
U+0308
).
This is at present mostly a theoretic possibility. Generally by
decomposing all decomposable characters one could in many cases
simplify the processing of textual data (and the resulting data might
be converted back to a format using precomposed characters). See e.g.
the working draft Character Model for the World Wide Web
.
Typing characters
Just pressing a key?
Typing characters on a computer may appear deceptively simple: you press a key labeled "A", and the character "A" appears on the screen. Well, you actually get uppercase "A" or lowercase "a" depending on whether you used the shift key or not, but that's common knowledge. You also expect "A" to be included into a disk file when you save what you are typing, you expect "A" to appear on paper if you print your text, and you expect "A" to be sent if you send your product by E-mail or something like that. And you expect the recipient to see an "A".
Thus far, you should have learned that the presentation of a character in computer storage or disk or in data transfer may vary a lot. You have probably realized that especially if it's not the common "A" but something more special (say, an "A" with an accent), strange things might happen, especially if data is not accompanied with adequate information about its encoding .
But you might still be too confident. You probably expect that on your system at least things are simpler than that. If you use your very own very personal computer and press the key labeled "A" on its keyboard, then shouldn't it be evident that in its storage and processor, on its disk, on its screen it's invariably "A"? Can't you just ignore its internal character code and character encoding? Well, probably yes - with "A". I wouldn't be so sure about "Ä", for instance. (On Windows systems, for example, DOS mode programs differ from genuine Windows programs in this respect; they use a DOS character code .)
When you press a key on your keyboard , then what actually happens is this. The keyboard sends the code of a character to the processor. The processor then, in addition to storing the data internally somewhere, normally sends it to the display device. (For more details on this, as regards to one common situation, see Example: What Happens When You Press A Key in The PC Guide .) Now, the keyboard settings and the display settings might be different from what you expect. Even if a key is labeled "Ä", it might send something else than the code of "Ä" in the character code used in your computer. Similarly, the display device, upon receiving such a code, might be set to display something different. Such mismatches are usually undesirable, but they are definitely possible .
Moreover, there are often keyboard restrictions . If your computer uses internally, say, ISO Latin 1 character repertoire, you probably won't find keys for all 191 characters in it on your keyboard. And for Unicode , it would be quite impossible to have a key for each character! Different keyboards are used, often according to the needs of particular languages. For example, keyboards used in Sweden often have a key for the å character but seldom a key for ñ ; in Spain the opposite is true. Quite often some keys have multiple uses via various "composition" keys, as explained below. For an illustration of the variation, as well as to see what layout might be used in some environments, see
- International Keyboards at Terena (contains some errors)
- Keyboard layouts by HermesSOFT
- Alternative Keyboard Layouts at USCC
- Keyboard layouts documented by Mark Leisher ; contains several layouts for "exotic" languages too
- The interactive Windows Layouts page by Microsoft ; requires Internet Explorer with JavaScript enabled. (Actually, using it I found out new features in the Finnish keyboard I have: I can use Alt Gr m to produce the micro sign µ, although there is no hint about this in the "m" key itself.)
In several systems, including MS Windows, it is possible to switch between different keyboard settings. This means that the effects of different keys do not necessarily correspond to the engravings in the key caps but to some other assignments. To ease typing in such situations, "virtual keyboards" can be used. This means that an image of a keyboard is visible on the screen, letting the user type characters by clicking on keys in it or using the information to see the current assignments of the keys of the physical keyboard. For the Office software on Windows systems, there is a free add-in available for this: Microsoft Visual Keyboard .
Program-specific methods for typing characters
Thus, you often need program-specific ways of entering characters from a keyboard, either because there is no key for a character you need or there is but it does not work (properly). The program involved might be part of system software, or it might be an application program. Three important examples of such ways:
- On Windows systems, you can (usually - some application programs may override this) produce any character in the Windows character set (naturally, in its Windows encoding) as follows: Press down the (left) Alt key and keep it down. Then type, using the separate numeric keypad (not the numbers above the letter keys!), the four-digit code of the character in decimal. Finally release the Alt key. Notice that the first digit is always 0, since the code values are in the range 32 - 255 (decimal). For instance, to produce the letter "Ä" (which has code 196 in decimal), you would press Alt down, type 0196 and then release Alt. Upon releasing Alt, the character should appear on the screen. In MS Word, the method works only if Num Lock is set. This method is often referred to as Alt-0nnn . (If you omit the leading zero, i.e. use Alt-nnn , the effect is different , since that way you insert the character in code position nnn in the DOS character code ! For example, Alt-196 would probably insert a graphic character which looks somewhat like a hyphen. There are variations in the behavior of various Windows programs in this area, and using those DOS codes is best avoided.)
- In the Emacs editor (which is popular especially on Unix systems), you can produce any ISO Latin 1 character by typing first control-Q, then its code as a three-digit octal number. To produce "Ä", you would thus type control-Q followed by the three digits 304 (and expect the "Ä" character to appear on screen). This method is often referred to as C-Q-nnn . (There are other ways of entering many ISO Latin 1 characters in Emacs , too.)
- Text processing programs often modify user input e.g. so that when you have typed the three characters "(", "c", and ")", the program changes, both internally and visibly, that string to the single character "©". This is often convenient, especially if you can add your own rules for modifications, but it causes unpleasant surprises and problems when you actually meant what you wrote, e.g. wanted to write letter "c" in parentheses.
- Programs often process some keyboard key combinations , typically involving the use of an Alt or Alt Gr key or some other "composition key", by converting them to special characters. In fact, even the well-known shift key is a composition key: it is used to modify the meaning of another key, e.g. by changing a letter to uppercase or turning a digit key to a special character key. Such things are not just "program-specific"; they also depend on the program version and settings (and on the keyboard, of course), and could well be user-modifiable. For example, in order to support the euro sign , various methods have been developed, e.g. by Microsoft so that pressing the "e" key while keeping the Alt Gr key pressed down might produce the euro sign - in some encoding ! But this may require a special "euro update", and the key combinations vary even when we consider Microsoft products only. So it would be quite inappropriate to say e.g. "to type the euro, use AltGr+e" as general, unqualified advice.
The "Alt" and "Alt Gr" keys mentioned above are not present on all keyboards, and often they both carry the text "Alt" but they can be functionally different! Typically, those keys are on the left and on the right of the space bar. It depends on the physical keyboard what the key cap texts are, and it depends on the keyboard settings whether the keys have the same effect or different effects. The name "Alt Gr" for "right Alt" is short for "alternate graphic", and it's mostly used to create additional characters, whereas (left) "Alt" is typically used for keyboard access to menus.
The last method above could often be called "device dependent" rather than program specific, since the program that performs the conversion might be a keyboard driver . In that case, normal programs would have all their input from the keyboard processed that way. This method may also involve the use of auxiliary keys for typing characters with diacritic marks such as "á ". Such an auxiliary key is often called dead key , since just pressing it causes nothing; it works only in combination with some other key. A more official name for a dead key is modifier key . For example, depending on the keyboard and the driver, you might be able to produce "á" by pressing first a key labeled with the acute accent (´), then the "a" key.
My keyboard has two keys for such purposes. There's the accent key, with the acute accent and the grave accent (`) as "upper case" character, meaning I need to use the shift key for the grave. And there's a key with the dieresis (¨) and the circumflex (^) above it (i.e. as "upper case") and the tilde (~) below or left to it (meaning I need to use Alt Gr for it), so I can produce ISO Latin 1 characters with those diacritics. Note that this does not involve any operation on the characters ´`¨^~, and the keyboard does not send those characters at all in such situations. If I try to enter that way a character outside the ISO Latin 1 repertoire, I get just the diacritic as a separate character followed by the normal character, e.g. "^j". To enter the diacritic itself, such as the tilde (~) , I may need to press the space bar so that the tilde diacritic combines with the blank (producing ~) instead of a letter (producing e.g. "ã"). Your situation may well be different, in part or entirely. For example, a typical French keyboard has separate keys for those accented letters that are used in French (e.g. "à"), but the accents themselves can be difficult to produce. You might need to type AltGr è followed by a space to produce the grave accent `.
"Escape" notations ("meta notations") for characters
It is often possible to use various "escape" notations for characters. This rather vague term means notations which are afterwards converted to (or just displayed as) characters according to some specific rules by some programs. They depend on the markup, programming, or other language (in a broad but technical meaning for "language", so that data formats can be included but human languages are excluded). If different languages have similar conventions in this respect, a language designer may have picked up a notation from an existing language, or it might be a coincidence.
The phrase "escape notations" or even "escapes" for short is rather widespread, and it reflects the general idea of escaping from the limitations of a character repertoire or device or protocol or something else. So it's used here, although a name like meta notations might be better. It is any case essential to distinguish these notations from the use of the ESC (escape) control code in ASCII and other character codes.
Examples:
- In the PostScript language, characters have names , such as
Adieresisfor Ä , which can be used to denote them according to certain rules.
- In the RTF data format, the notation
/'c4is used to denote Ä .
- In TeX systems, there are different ways of producing characters, possibly depending on the "packages" used. Examples of ways to produce Ä :
/"A,
/symbol{196},
/char'0304,
/capitaldieresis{A}(for a large list, consult The Comprehensive LaTex Symbol List
- In the HTML language one can use the notation
Äfor the character Ä . In the official HTML terminology, such notations are called entity references (denoting characters) . It depends on HTML version which entities are defined, and it depends on a browser which entities are actually supported .
- In HTML, one can also use the notation
Äfor the character Ä . Generally, in any SGML based system, or "SGML application" as the jargon goes, a numeric character reference (or, actually, just character references ) of the form
&#number
;can be used, and it refers to the character which is in code position n in the character code defined for the "SGML application" in question. This is actually very simple: you specify a character by its index (position, number). But in SGML terminology, the character code which determines the interpretation of
&#number
;is called, quite confusingly, the document character set. For HTML, the "document character set" is ISO 10646 (or, to be exact, a subset thereof, depending on HTML version). A most essential point is that for HTML, the "document character set" is completely independent of the encoding of the document! (See Alan J. Flavell 's Notes on Internationalization .) The so-called character entity references like
Äin HTML can be regarded as symbolic names defined for some numeric character references. In XML, character references use ISO 10646 by language definition. Although both entity and character references are markup , to be used in markup languages, they often replaced by the corresponding characters, when a user types text on an Internet discussion forum. This might be a conscious decision by the forum designer, but quite often it is caused unintentionally.
- In CSS , you can present a character as
"/n
- In the C programming language , one can usually write
/0304to denote Ä within a string constant, although this makes the program character code dependent.
As you can see, the notations typically involve some (semi-)mnemonic name or the code number
of the character, in some number system
. (The ISO 8859-1
code number for our example character Ä
is 196 in decimal, 304 in octal, C4 in hexadecimal.) And there is some
method of indicating that the letters or digits are not to be taken as
such but as part of a special notation denoting a character. Often some
specific character such as the backslash /
is used as an "escape character". This implies that such a character
cannot be used as such in the language or format but must itself be
"escaped"; for example, to include the backslash itself into a string
constant in C, you need to write it twice (
//
).
In cases like these, the character itself does not occur in a file (such as an HTML document or a C source program). Instead, the file contains the "escape" notation as a character sequence, which will then be interpreted in a specific way by programs like a Web browser or a C compiler. One can in a sense regard the "escape notations" as encodings used in specific contexts upon specific agreements.
There are also "escape notations" which are to be interpreted by human readers directly. For example, when sending E-mail one might use A" (letter A followed by a quotation mark) as a surrogate for Ä (letter A with dieresis), or one might use AE instead of Ä. The reader is assumed to understand that e.g. A" on display actually means Ä. Quite often the purpose is to use ASCII characters only, so that the typing, transmission, and display of the characters is "safe". But this typically means that text becomes very messy; the Finnish word Hämäläinen does not look too good or readable when written as Ha"ma"la"inen or Haemaelaeinen . Such usage is based on special (though often implicit) conventions and can cause a lot of confusion when there is no mutual agreement on the conventions, especially because there are so many of them. (For example, to denote letter a with acute accent, á, a convention might use the apostrophe, a', or the solidus, a/, or the acute accent, a´, or something else.)
There is an old proposal by K. Simonsen, Character Mnemonics & Character Sets , published as RFC 1345 , which lists a large number of "escape notations" for characters. They are very short, typically two characters, e.g. A: for Ä and th for þ (thorn). Naturally there's the problem that the reader must know whether e.g. th is to be understood that way or as two letters t and h. So the system is primarily for referring to characters (see below), but under suitable circumstances it could also be used for actually writing texts, when the ambiguities can somehow be removed by additional conventions or by context. RFC 1345 cannot be regarded as official or widely known, but if you need, for some applications, an "escape scheme", you might consider using those notations instead of reinventing the wheel.
How to mention (identify) a character
There are also various ways to identify a character when it cannot be used as such or when the appearance of a character is not sufficient identification. This might be regarded as a variant of the "escape notations for human readers" discussed above, but the pragmatic view is different here. We are not primarily interested in using characters in running text but in specifying which character is being discussed.
For example, when discussing the Cyrillic letter that resembles the Latin letter E (and may have an identical or very similar glyph , and is transliterated as E according to ISO 9 ), there are various options:
- "Cyrillic E"; this is probably intuitively understandable in this case, and can be seen as referring either to the similarity of shape or to the transliteration equivalence; but in the general case these interpretations do not coincide, and the method is otherwise vague too
- "
U+0415"; this is a unique identification but requires the reader to know the idea of
U+nnnn notations
- "cyrillic capital letter ie " (using the official Unicode name ) or "cyrillic IE" (using an abridged version); one problem with this is that the names can be long even if simplified, and they still cannot be assumed to be universally known even by people who recognize the character
- "KE02", which uses the special notation system defined in ISO 7350 ; the system uses a compact notation and is marginally mnemonic (K = kirillica 'Cyrillics'; the numeric codes indicate small/capital letter variation and the use of diacritics )
- any of the "escape" notations discussed above, such as "
E=" by RFC 1345 or "
Е" in HTML; this can be quite adequate in a context where the reader can be assumed to be familiar with the particular notation.
Information about encoding
The need for information about encoding
It is hopefully obvious from the preceding discussion that a sequence of octets can be interpreted in a multitude of ways when processed as character data. By looking at the octet sequence only, you cannot even know whether each octet presents one character or just part of a two-octet presentation of a character, or something more complicated. Sometimes one can guess the encoding, but data processing and transfer shouldn't be guesswork.
Naturally, a sequence of octets could be intended to present other than character data, too. It could be an image in a bitmap format, or a computer program in binary form, or numeric data in the internal format used in computers.
This problem can be handled in different ways in different systems when data is stored and processed within one computer system. For data transmission , a platform-independent method of specifying the general format and the encoding and other relevant information is needed. Such methods exist, although they not always used widely enough. People still send each other data without specifying the encoding, and this may cause a lot of harm. Attaching a human-readable note, such as a few words of explanation in an E-mail message body, is better than nothing. But since data is processed by programs which cannot understand such notes, the encoding should be specified in a standardized computer-readable form.
The MIME solution
Media types
Internet media types
, often called MIME media types
, can be used to specify a major media type ("top level media type", such as
text
), a subtype (such as
html
), and an encoding (such as
iso-8859-1
). They were originally developed to allow sending other than plain ASCII
data by E-mail. They can be (and should be) used for specifying the
encoding when data is sent over a network, e.g. by E-mail or using the HTTP
protocol on the World Wide Web.
The media type concept is defined in RFC 2046
. The procedure for registering types in given in RFC 2048
; according to it, the registry is kept by IANA
at
but it has in fact been moved to
Character encoding ("charset") information
The technical term used to denote a character encoding in the Internet media type context is "character set", abbreviated "charset". This has caused a lot of confusion, since "set" can easily be understood as repertoire !
Specifically, when data is sent in MIME format, the media type and
encoding are specified in a manner illustrated by the following example:
Content-Type: text/html; charset=iso-8859-1
This specifies, in addition to saying that the media type is
text
and subtype is
html
, that the character encoding is ISO 8859-1
.
The official registry of "charset" (i.e., character encoding) names,
with references to documents defining their meanings, is kept by IANA
at
(According to the documentation of the registration procedure, RFC 2978 , it should be elsewhere, but it has been moved.) I have composed a tabular presentation of the registry , ordered alphabetically by "charset" name and accompanied with some hypertext references.
Several character encodings have alternate (alias) names in the registry. For example, the basic (ISO 646) variant of ASCII can be called "ASCII" or "ANSI_X3.4-1968" or "cp367" (plus a few other names); the preferred name in MIME context is, according to the registry, "US-ASCII". Similarly, ISO 8859-1 has several names, the preferred MIME name being "ISO-8859-1". The "native" encoding for Unicode, UCS-2 , is named "ISO-10646-UCS-2" there.
MIME headers
The
Content-Type
information is an example of information in a header
.
Headers relate to some data, describing its presentation and other
things, but are passed as logically separate from it. Possible headers
and their contents are defined in the basic MIME specification
, RFC 2045
.
Adequate headers should normally be generated automatically by the
software which sends the data (such as a program for sending E-mail, or
a Web server) and interpreted automatically by receiving software (such
as a program for reading E-mail, or a Web browser). In E-mail messages,
headers precede the message body; it depends on the E-mail program
whether and how it displays the headers. For Web documents, a Web
server is required to send headers when it delivers a document to a
browser (or other user agent) which has sent a request for the
document.
In addition to media types and character encodings, MIME addresses several other aspects too. Earl Hood has composed the documentation Multipurpose Internet Mail Extensions MIME , which contains the basic RFCs on MIME in hypertext format and a common table of contents for them.
An auxiliary encoding: Quoted-Printable (QP)
The MIME specification defines, among many other things, the general purpose "Quoted-Printable" (QP) encoding which can be used to present any sequence of octets as a sequence of such octets which correspond to ASCII characters. This implies that the sequence of octets becomes longer, and if it is read as an ASCII string, it can be incomprehensible to humans. But what is gained is robustness in data transfer, since the encoding uses only "safe" ASCII characters which will most probably get through any component in the transfer unmodified.
Basically, QP encoding means that most octets smaller than 128 are
used as such, whereas larger octets and some of the small ones are
presented as follows: octet n
is presented as a sequence of three octets, corresponding to ASCII codes for the
=
sign and the two digits of the hexadecimal
notation of n
. If QP encoding is applied to a sequence of octets presenting character data according to ISO 8859-1
character code, then effectively this means that most ASCII characters
(including all ASCII letters) are preserved as such whereas e.g. the
ISO 8859-1 character ä
(code position 228 in decimal, E4 in hexadecimal) is encoded as
=E4
. (For obvious reasons, the equals sign
=
itself is among the few ASCII characters which are encoded. Being in
code position 61 in decimal, 3D in hexadecimal, it is encoded as
=3D
.)
Notice that encoding ISO 8859-1 data this way means that the character code is the one specified by the ISO 8859-1 standard, whereas the character encoding is different from the one specified (or at least suggested) in that standard. Since QP only specifies the mapping of a sequence of octets to another sequence of octets, it is a pure encoding and can be applied to any character data, or to any data for that matter.
Naturally, Quoted-Printable encoding needs to be processed by a program which knows it and can convert it to human-readable form. It looks rather confusing when displayed as such. Roughly speaking, one can expect most E-mail programs to be able to handle QP, but the same does not apply to newsreaders (or Web browsers). Therefore, you should normally use QP in E-mail only.
How MIME should work in practice
Basically, MIME should let people communicate smoothly without hindrances caused by character code and encoding differences. MIME should handle the necessary conversions automatically and invisibly.
For example, when person A sends E-mail to person B , the following should happen: The E-mail program used by A encodes A 's message in some particular manner, probably according to some convention which is normal on the system where the program is used (such as ISO 8859-1 encoding on a typical modern Unix system). The program automatically includes information about this encoding into an E-mail header, which is usually invisible both when sending and when reading the message. The message, with the headers, is then delivered, through network connections, to B 's system. When B uses his E-mail program (which may be very different from A 's) to read the message, the program should automatically pick up the information about the encoding as specified in a header and interpret the message body according to it. For example, if B is using a Macintosh computer, the program would automatically convert the message into Mac's internal character encoding and only then display it. Thus, if the message was ISO 8859-1 encoded and contained the Ä (upper case A with dieresis) character, encoded as octet 196, the E-mail program used on the Mac should use a conversion table to map this to octet 128, which is the encoding for Ä on Mac. (If the program fails to do such a conversion, strange things will happen. ASCII characters would be displayed correctly, since they have the same codes in both encodings, but instead of Ä, the character corresponding to octet 196 in Mac encoding would appear - a symbol which looks like f in italics.)
Problems with implementations - examples
Unfortunately, there are deficiencies and errors in software so that users often have to struggle with character code conversion problems, perhaps correcting the actions taken by programs. It takes two to tango, and some more participants to get characters right. This section demonstrates different things which may happen, and do happen, when just one component is faulty, i.e. when MIME is not used or is inadequately supported by some "partner" (software involved in entering, storing, transferring, and displaying character data).
Typical minor (!) problems which may occur in communication in Western European languages other than English is that most characters get interpreted and displayed correctly but some "national letters" don't. For example, character repertoire needed in German, Swedish, and Finnish is essentially ASCII plus a few letters like "ä" from the rest of ISO Latin 1 . If a text in such a language is processed so that a necessary conversion is not applied, or an incorrect conversion is applied, the result might be that e.g. the word "später" becomes "spter" or "spÌter" or "spdter" or "sp=E4ter".
Sometimes you might be able to guess
what has happened,
and perhaps to determine which code conversion should be applied, and
apply it more or less "by hand". To take an example (which may have
some practical value in itself to people using languages mentioned)
Assume that you have some text data which is expected to be, say, in
German, Swedish or Finnish and which appears to be such text with some
characters replaced by oddities in a somewhat systematic way. Locate
some words which probably should contain the letter "ä"
but have something strange in place of it (see examples above). Assume
further that the program you are using interprets text data according
to ISO 8859-1
by default and that the actual data is not accompanied with a suitable indication (like a
Content-Type
header) of the encoding, or such an indication is obviously in error. Now, looking at what appears instead of "ä", we might guess:
To illustrate what may happen when text is sent in a grossly invalid form
,
consider the following example. I'm sending myself E-mail, using
Netscape 4.0 (on Windows 95). In the mail composition window, I set the
encoding to UTF-8
. The body of my message is simply
Tämä on testi.
(That's Finnish for 'This is a test'. The second and fourth character is letter a with umlaut.) Trying to read the mail on my Unix account, using the Pine E-mail program (popular among Unix users), I see the following (when in "full headers" mode; irrelevant headers omitted here):
X-Mailer: Mozilla 4.0 [en] (Win95; I)
MIME-Version: 1.0
To: jkorpela@cs.tut.fi
Subject: Test
X-Priority: 3 (Normal)
Content-Type: text/plain; charset=x-UNICODE-2-0-UTF-7
Content-Transfer-Encoding: 7bit
[The following text is in the "x-UNICODE-2-0-UTF-7" character set]
[Your display is set for the "ISO-8859-1" character set]
[Some characters may be displayed incorrectly]
T+O6Q- on testi.
Interesting, isn't it? I specifically requested UTF-8
encoding, but Netscape used UTF-7. And it did not include a correct header, since
x-UNICODE-2-0-UTF-7
is not a registered "charset" name
.
Even if the encoding had been a registered one, there would have been
no guarantee that my E-mail program would have been able to handle the
encoding. The example, "T+O6Q-" instead of "Tämä", illustrates what may
happen when an octet sequence is interpreted according to another
encoding than the intended one. In fact, it is difficult to say what
Netscape was really doing, since it seems to encode incorrectly.
A correct UTF-7 encoding for "Tämä" would be "T+AOQ-m+AOQ-". The "+" and "-" characters correspond to octets indicating a switch to "shifted encoding" and back from it. The shifted encoding is based on presenting Unicode values first as 16-bit binary integers, then regrouping the bits and presenting the resulting six-bit groups as octets according to a table specified in RFC 2045 in the section on Base64. See also RFC 2152 .
Practical conclusions
Whenever text data is sent over a network, the sender and the recipient should have a joint agreement on the character encoding used. In the optimal case, this is handled by the software automatically, but in reality the users need to take some precautions.
Most importantly, make sure that any Internet-related software that you use to send data specifies the encoding correctly in suitable headers. There are two things involved: the header must be there and it must reflect the actual encoding used; and the encoding used must be one that is widely understood by the (potential) recipients' software. One must often make compromises as regards to the latter aim: you may need to use an encoding which is not yet widely supported to get your message through at all.
It is useful to find out how to make your Web browser, newsreader,
and E-mail program so that you can display the encoding information for
the page, article, or message you are reading. (For example, on
Netscape use
View Page Info
; on News Xpress, use
View Raw Format
; on Pine, use
h
.)
If you use, say, Netscape to send E-mail or to post to Usenet news, make sure it sends the message in a reasonable form. In particular, make sure it does not send the message as HTML or duplicate it by sending it both as plain text and as HTML (select plain text only). As regards to character encoding, make sure it is something widely understood, such as ASCII , some ISO 8859 encoding, or UTF-8 , depending on how large character repertoire you need.
In particular, avoid sending data in a proprietary encoding (like the Macintosh encoding or a DOS encoding ) to a public network. At the very least, if you do that, make sure that the message heading specifies the encoding! There's nothing wrong with using such an encoding within a single computer or in data transfer between similar computers. But when sent to Internet, data should be converted to a more widely known encoding, by the sending program. If you cannot find a way to configure your program to do that, get another program.
As regards to other forms of transfer of data in digital form, such as diskettes, information about encoding is important, too. The problem is typically handled by guesswork. Often the crucial thing is to know which program was used to generate the data, since the text data might be inside a file in, say, the MS Word format which can only be read by (a suitable version of) MS Word or by a program which knows its internal data format. That format, once recognized, might contain information which specifies the character encoding used in the text data included; or it might not, in which case one has to ask the sender, or make a guess, or use trial and error - viewing the data using different encodings until something sensible appears.
Further reading
- The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets by Joel on Software. An enjoyable nice treatise, though probably not quite the absolute minimum.
- Character Encodings Concepts , adapted from a presentation by Peter Edberg at a Unicode conference. Old, but a rich source of information, with good illustrations.
- ISO-8859 briefing and resources by Alan J. Flavell . Partly a character set tutorial, partly a discussion of specific (especially ISO 8859 and HTML related) issues in depth.
- Section Character set standards in the Standards and Specifications List by Diffuse (archive copy)
- Guide to Character Sets , by Diffuse. (archive copy)
- Google 's section on internalization , which has interesting entries like i18nGurus
- "Character Set" Considered Harmful by Dan Connolly . A good discussion of the basic concepts and misconceptions.
- The Nature of Linguistic Data: Multilingual Computing - an old (1997) collection of annotated links to information on character codes, fonts, etc.
- John Clews: Digital Language Access: Scripts, Transliteration, and Computer Access ; an introduction to scripts and transliteration, so it's useful background information for character code issues.
- Michael Everson's Web site , which contains a lot of links to detailed documents on character code issues, especially progress and proposals in standardization.
- Johan W. van Wingen : Character sets. Letters, tokens and codes. Detailed information on many topics (including particular character codes).
- Steven J. Searle: A Brief History of Character Codes in North America, Europe, and East Asia
- Ken Lunde : CJKV Information Processing . A book on Chinese, Japanese, Korean & Vietnamese Computing. The book itself is not online, but some extracts are, e.g. the overview chapter.
- An online character database by Indrek Hein at the Institute of the Estonian Language . You can e.g. search for Unicode characters by name or code position, get lists of differences between some character sets, and get lists of characters needed for different languages.
- Free recode is a free program by François Pinard . It can be used to perform various character code conversions between a large number of encodings.
Character code problems are part of a topic called internationalization (jocularly abbreviated as i18n ), rather misleadingly, because it mainly revolves around the problems of using various languages and writing systems (scripts) . (Typically international communication on the Internet is carried out in English !) It includes difficult questions like text directionality (some languages are written right to left) and requirements to present the same character with different glyphs according to its context. See W3C pages on internationalization .
I originally started writing this document as a tutorial for HTML authors . Later I noticed that this general information is extensive enough to be put into a document of its own. As regards to HTML specific problems, the document Using national and special characters in HTML summarizes what currently seems to be the best alternative in the general case.
Acknowledgements
I have learned a lot about character set issues from the following people (listed in an order which is roughly chronological by the start of their influence on my understanding of these things): Timo Kiravuo , Alan J. Flavell , Arjun Ray , Roman Czyborra, Bob Bemer , Erkki I. Kolehmainen . (But any errors in this document I souped up by myself.) | http://blog.csdn.net/mmllkkjj/article/details/6138422 | CC-MAIN-2017-43 | refinedweb | 16,630 | 50.87 |
Often we want to remove extraneous characters from the end of a string. The prime example is a newline on a string read from input.
The chop method removes the last character of the string (typically a trailing newline character). If the character before the newline is a carriage return (\r), it will be removed also. The reason for this behavior is the discrepancy between different systems' conceptions of what a newline is. On some systems such as UNIX, the newline character is represented internally as a linefeed (\n). On others such as DOS and Windows, it is stored as a carriage return followed by a linefeed (\r\n).
str = gets.chop # Read string, remove newline s2 = "Some string\n" # "Some string" (no newline) s3 = s2.chop! # s2 is now "Some string" also s4 = "Other string\r\n" s4.chop! # "Other string" (again no newline)
Note that the "in-place" version of the method (chop!) will modify its receiver.
It is also important to note that in the absence of a trailing newline, the last character will be removed anyway:
str = "abcxyz" s1 = str.chop # "abcxy"
Because a newline may not always be present, the chomp method may be a better alternative:
str = "abcxyz" str2 = "123\n" str3 = "123\r" str4 = "123\r\n" s1 = str.chomp # "abcxyz" s2 = str2.chomp # "123" # With the default record separator, \r and \r\n are removed # as well as \n s3 = str3.chomp # "123" s4 = str4.chomp # "123"
There is also a chomp! method as we would expect.
If a parameter is specified for chomp, it will remove the set of characters specified from the end of the string rather than the default record separator. Note that if the record separator appears in the middle of the string, it is ignored:
str1 = "abcxyz" str2 = "abcxyz" s1 = str1.chomp("yz") # "abcx" s2 = str2.chomp("x") # "abcxyz" | https://flylib.com/books/en/2.491.1.36/1/ | CC-MAIN-2019-22 | refinedweb | 312 | 74.39 |
Simplot: A 2D Canvas Plotting Library for Dart
By Richard Griffith on Friday, May 31 2013, 08:57 - Dart - Permalink
When writing code, I often find myself working with arrays of numbers that I need to have some way of visualizing. Since I spend a large part of my time these days coding with the Dart programming language,, I figured that would be the most direct path to creating a simple data visualizer.
Introduction
When writing code, I often find myself working with arrays of numbers that I need to have some way of visualizing. Since I spend a large part of my time these days coding with the Dart programming language [1], [2], I figured that would be the most direct path to creating a simple data visualizer. In this article, we'll take a look at some of the capabilities of a library that takes an array of numbers and plots them to a browser window. We'll look at a few examples, including one that makes use of websockets to move data that resides in a file to our simplot library for plotting to an HTML window. If you are interested in the underlying code, the simplot library is available on Github [3].
Using the Simplot Library
My goal in creating this library was to be able to create plots similar to the one shown below. Each plot should be able to handle multiple sets of data and multiple plots should be able to be grouped together either vertically or in a matrix.
We'll take a look at how each of these plots was created in just a moment, but let's start with the simplest usage of the library itself. To get started, simply add the following to your pubspec.yaml file:
simplot: git: git://github.com/scribeGriff/simplot.git
Then import the library to your app:
import 'package:simplot/simplot.dart';
Let's plot some arbitrary data:
library plottest; import 'dart:html'; import 'package:simplot/simplot.dart'; void main() { var simpleData = [2, 17, 16,2, -0.5, 47, -12, 3, 8, 23.2, 67, 14, -7.5, 0, 31]; plot(simpleData); }
The y axis data is the only required parameter and the limits of the y axis are calculated using the provided data. The limits of the x axis are also calculated, either based on the default x axis data generated internally or from a
List provided as an optional named parameter. Using the
save() method of the
Plot2D class brings up a PNG image in a separate browser window to allow for saving the image for further use:
plot(simpleData).save();
The saved PNG image is below.
The resulting plot is not much to look at, but the library comes with a range of optional named parameters and configurable methods to dress things up a bit. Let's take a look at that next.
Configuring the plot() Command
The
plot() command accepts a number of optional named parameters to allow plotting up to 4 sets of data for a given set of x and y axes.
There are also a number of configurable public methods available to the
plot() function:
The library also contains two top level functions,
saveAll() and
requestDataWS(). The
saveAll() command accepts an array of plots and generates a PNG image of the plots arranged in either a quad or linear format. The
requestDataWS() receives an array of data from a server and plots it to a browser window. We'll look at an example of using
requestDataWS() near the end of this article.
The following examples illustrate the creation of a variety of plot styles using many of the features just described.
Examples
The first subplot in the quad plot above examines the partial sums of the Fourier series for two cycles of a square wave. For this example, we need to import the ConvoLab library [4] to have access to the waveform generator and the Fourier series algorithm. So first we need to add the library to our pubspec.yaml file:
convolab: git: git://github.com/scribeGriff/ConvoLab.git
The
fsps() function returns the solutions to the partial sums of the Fourier series in a map data structure. Before we can plot the data, therefore, we need to map each partial series to a
List(). Also, by default our waveform generator generates a vector 512 points long and, as such, our solutions are also 512 points long. But before plotting, we crop our square wave waveform to 500 points. Since this cropped waveform is passed to the
plot() command first and we are not passing a defined x axis array, its length determines the length of the x axis. All subsequent sets of data are then plotted against this same x axis, regardless of the length of the data sets.
library plotExamples; import 'dart:html'; import 'dart:math'; import 'package:simplot/simplot.dart'; import 'package:convolab/convolab.dart'; void main() { var allPlots = new List(); List waveform = square(2); var kvals = [2, 8, 32]; var fourier = fsps(waveform, kvals); var square2 = waveform.sublist(0, 500); var f0 = fourier.psums[kvals[0]].map((x) => x.real).toList(); var f1 = fourier.psums[kvals[1]].map((x) => x.real).toList(); var f2 = fourier.psums[kvals[2]].map((x) => x.real).toList(); var fspsCurve = plot(square2, y2:f0, y3:f1, y4:f2, style1:'curve', range:4, index:1); fspsCurve ..grid() ..title('Partial Sums of Fourier Series', color:'DarkSlateGray') ..legend(l1:'original', l2:'k = 2', l3:'k = 8', l4:'k = 32', top:false) ..xlabel('samples(n)') ..ylabel('signal amplitude') ..save(name:'fspsPlot'); allPlots.add(fspsCurve); }
We have set the range of the
plot() function to 4 in anticipation of the 3 remaining plots. This has the effect of scaling each plot to a fraction of the size of a single plot. The index has been defined as 1 for the first subplot, which sets the canvas id of this subplot to #simPlot1. All subplot canvases share a common class of .simPlot. We also add this plot to the growable array
allPlots so we can save all the plots to a single PNG image when we're finished. Executing the above code results in the following image when saved:
Our next example investigates the relationship between two forms of the sinc(x) function and the cos(x) function. Let's add the
plot() commands to our plotExamples library that we started above:
var x = new List.generate(501, (var index) => (index - 250) / 10, growable:false); var sincx = new List(x.length); var sincpix = new List(x.length); var cosx = new List(x.length); for (var i = 0; i < x.length; i++) { if (x[i] != 0) { sincx[i] = sin(x[i]) / x[i]; sincpix[i] = sin(x[i] * PI) / (x[i] * PI); cosx[i] = cos(x[i]); } else { sincx[i] = 1; sincpix[i] = 1; cosx[i] = cos(x[i]); } } var sincCurve = plot(cosx, xdata:x, y2:sincpix, y3:sincx, color1: 'LightBlue', color2: 'IndianRed', style1:'curve', range:4, index:2); sincCurve ..grid() ..xlabel('x') ..ylabel('f(x)') ..xmarker(0) ..ymarker(0) ..legend(l1:'cos(x)', l2:'sinc(pi*x)', l3:'sinc(x)') ..title('Sinc Function (normalized and unnormalized)', color:'MidnightBlue') ..save(name:'sincPlot'); allPlots.add(sincCurve);
For this example we have defined a specific x axis that specifies x values from -25 to 25. We then added an
xmarker() and
ymarker() at the 0 value on the x axis and y axis respectively. The saved image looks like the following:
The next example plots the resistance, inductance and capacitance of a path network. We add several
xmarker()s and set the
annotate parameter to true to display the data on the plot where the marker crosses each plot. Note that we have also been passing the optional
name parameter to the
save() method. This is only necessary when saving multiple individual plots to prevent each subsequent
save() from overwriting the previous image. If you are only saving a single image, just call the
save() command with no parameters.
var resistance = [77.98, 104.23, 107.9, 74.61, 73.54, 91.63, 100.54, 85.19, 81.46, 87.64, 69.26, 90.86, 100.15, 95.24, 72.26, 74.86, 84.68, 93.61, 102.54, 103.18, 94.03, 87.13, 85.03, 66.59, 82.45, 81.66, 81.4, 81.58, 84.71]; var inductance = [97.993, 136.77, 142.215, 93.1, 90.956, 117.34, 131.299, 108.633, 103.196, 112.219, 85.533, 116.96, 130.688, 123.414, 89.781, 93.508, 107.893, 121.004, 134.263, 135.15, 121.547, 111.277, 108.154, 81.526, 104.312, 103.11, 102.674, 102.915, 107.367]; var capacitance = [88.52, 123.02, 114.13, 79.69, 78.06, 98.84, 100.09, 79.69, 75.69, 82.13, 63.36, 85.74, 97.07, 93.29, 74.33, 74.98, 78.84, 103.8, 109.18, 111.45, 107.04, 94.02, 93.01, 67.72, 89.42, 83.06, 79.6, 83.1, 87.73]; var rlcLines = plot(resistance, y2:inductance, y3:capacitance, linewidth:1, range:4, index:3); rlcLines ..grid() ..xlabel('Network (n)') ..ylabel('Impedance') ..title('RLC Impedance Values for 29 Path Network') ..legend(l1:'R (mOhms)', l2:'L (10^-2 nH)', l3: 'C (10^-3 pF)') ..xmarker(2, annotate:true) ..xmarker(20, annotate:true) ..save(name:'rlcPlot'); allPlots.add(rlcLines);
We've set the linewidth of the line plots to 1px and added
xmarker()s at x = 2 and x = 20. Note that we have not specified a style so the plots are drawn using the default style of lines with points.
The last example is commonly referred to as a scatter or xy plot. Let's assume that we have some possibly correlated data in either a two-dimensional array or as an array of sets. We first separate out the pairs of points into an x array and a y array. We then generate a
List that corresponds to the best fit for our data. Finally, we set the
style1 parameter to
'points' for a point plot and the
style2 parameter to a
'line'.
var fat_calories = [[9, 260], [13, 320], [21, 420], [30, 530], [31, 560], [31, 550], [34, 590], [25, 500], [28, 560], [20, 440], [5, 300]]; var xscatter = fat_calories.map((x) => x.elementAt(0)).toList(); var yscatter = fat_calories.map((y) => y.elementAt(1)).toList(); var bestFit = xscatter.map((x) => (11.7313 * x) + 193.852).toList(); var scatterPoints = plot(yscatter, xdata:xscatter, style1:'points', color1:'#3C3D36', y2:bestFit, style2:'line', color2: '#90AB76', range:4, index:4); scatterPoints ..grid() ..xlabel('total fat (g)', color: '#3C3D36') ..ylabel('total calories', color: '#3C3D36') ..legend(l1: 'Calories from fat', l2: 'best fit: 11.7x + 193', top:false) ..date() ..title('Correlation of Fat and Calories in Fast Food', color: 'black') ..save(name:'scatterPlot'); allPlots.add(scatterPoints);
Since we expect this to be our last plot for our quad arrangement of subplots, we add a
date() stamp which will appear just below the graph in the lower right side. Here's the final plot:
For each plot, we have been adding the plot object to our
allPlots array. To save all plots to a single image, we can make use of the
saveAll() function as follows:
WindowBase myPlotWindow = saveAll(allPlots);
Executing this command opens a new browser window with the image that we showed at the beginning of the article. By default, the plots are arranged in a quad orientation, but a linear arrangement in the vertical direction is also available by setting the named optional parameter
quad to
false, ie,
quad:false.
Setting up the HTML and CSS
Displaying the plots in a web browser is left largely up to the user. All that is needed at a minimum is a div with either an ID of #simPlotQuad (the default value of the optional
container parameter), or an ID of the user's choosing that is passed as the
container parameter to
plot() and a class called .simPlot. A simple example might look like the following:
The HTML:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>A Simple Plot Example</title> <link rel="stylesheet" href="simple_plot.css"> </head> <body> <h1>SimPlot</h1> <p>A 2D canvas plotting tool in Dart.</p> <div id="container"> <div id="simPlotQuad"></div> </div> <script type="application/dart" src="simple_plot.dart"></script> <script src="packages/browser/dart.js"></script> </body> </html>
The CSS:
body { background-color: #F8F8F8; font-family: 'Open Sans', sans-serif; font-size: 14px; font-weight: normal; line-height: 1.2em; margin: 15px; } #container { width: 100%; position: relative; border: 1px solid #ccc; background-color: #fff; overflow:hidden; } #simPlotQuad { width: 1400px; margin: 0 auto; overflow: hidden; } .simPlot { margin: 30px; float: left; }
This example will place multiple plots in a quad arrangement as in the example image above.
Working with Server Side Data
Of course, much of the data we'd like to plot resides in files either on a local machine or a server. Reading files in this way requires using the dart:io library, which is not compatible with the dart:html library that we use with simplot. We can easily get around this limitation by using a websocket, which has full API support for both client and server side Dart apps [5] [6]. The simplot library contains a top level function,
requestDataWS(), which interacts with a server to retrieve data for plotting. Let's set up a simple example which reads a file into a list and then opens a websocket connection to send the data on to a client.
import 'dart:async'; import 'dart:io'; import 'dart:convert'; void main() { //Path to external file. String filename = '../../lib/external/data/sound4.txt'; var host = 'local'; var port = 8080; List data = []; Stream
stream = new File(filename).openRead(); stream .transform(UTF8.decoder) .transform(new LineSplitter()) .listen((String line) { if (line.isNotEmpty) { data.add(double.parse(line.trim())); } }, onDone: () { //connect with ws://localhost:8080/ws if (host == 'local') host = '127.0.0.1'; HttpServer.bind(host, port).then((server) { print('Opening connection at $host:$port'); server.transform(new WebSocketTransformer()).listen((WebSocket webSocket) { webSocket.listen((message) { var msg = JSON.decode(message); print("Received the following message: \n" "${msg["request"]}\n${msg["date"]}"); webSocket.add(JSON.encode(data)); }, onDone: () { print('Connection closed by client: Status - ${webSocket.closeCode}' ' : Reason - ${webSocket.closeReason}'); server.close(); }); }); }); }, onError: (e) { print('There was an error: $e'); }); }
If you were to execute this code in the Dart Editor, for example, you would see the following:
Opening connection at 127.0.0.1:8080.
The data is now available to the client, which we can retrieve using the
requestDataWS() function from the simplot library and then plot:
import 'dart:html'; import 'dart:async'; import 'package:simplot/simplot.dart'; void main() { String host = 'local'; int port = 8080; var myDisplay = querySelector('#console'); var myMessage = 'Send data request'; Future reqData = requestDataWS(host, port, message:myMessage, display:myDisplay); reqData.then((data) { var sndLength = data.length; var sndRate = 22050; var sndSample = sndLength / sndRate * 1e3; var xtime = new List.generate(sndLength, (var index) => index / sndRate * 1e3, growable:false); var wsCurve = plot(data, xdata:xtime, style1:'curve', color1:'green', linewidth:3); wsCurve ..grid() ..title('Sound Sample from Server') ..xlabel('time (ms)') ..ylabel('amplitude') ..save(); }); }
The server prints the following information:
Received the following message:
Send data request
2013-06-14 14:34:28.502
Connection closed by client: Status - 1000 : Reason - Got the data. Thanks!
If you provide an optional display parameter to the
requestDataWS(), the client prints the following message:
Opening connection at 127.0.0.1:8080
Successfully received data from the server.
Connection closed satisfactorily.
Finally, the data, which originated from a file on our local machine, is plotted to a canvas in the browser window:
Conclusion
The simplot library is a simple, lightweight tool for taking virtually any data that can be stored in a
List and plotting that data to a canvas element in a browser window.
Works Cited
[1] The Dart Programming Language: Building Structured Web Apps
[2] Dart CanvasRenderingContext2D API
[3] The simplot library on Github
[4] The ConvoLab library on Github
[5] Dart Client Side Websocket API
[6] Dart Server Side Websocket API
I would like to add a vertical line to a plot with another graph on it. There doesn't seem to be a way to do this with just simPlot, is that correct? I am aware I can do it with the default graphics api.
@Richard: I think what you are asking for is to be able to have multiple y axes, likely because of data that needs different scales but want to share a common x axis. Simplot does not currently handle multiple y axes on the same plot unfortunately, but I have added it as a feature request since it would be great to be able to do that. Thanks for the feedback. | http://www.greatandlittle.com/studios/index.php?post/2013/05/31/Simplot-A-2D-Canvas-Plotting-Library-for-Dart | CC-MAIN-2018-30 | refinedweb | 2,810 | 62.78 |
A simple and unified library to display images in Jupyter notebooks
Project description
sora (صورة)
Sora means
image/picture in Arabic. It is a simple library to display and embed images in Jupyter notebooks. You can use it to:
from sora import sora # Display a single image from a file: sora('./test.jpg') # Display all the images in a directory: sora('./images/') import tensorflow as tf (x, _), (_, _) = tf.keras.datasets.cifar10.load_data() # Display an image from a numpy array (ndarray): sora(x[0]) # Display a collection of images from a numpy array (ndarray) sora(x[0:10]) # You can also customize the grid sora(x[0:100], cell_width=42, cell_height=42, items_per_row=10)
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/sora/ | CC-MAIN-2020-16 | refinedweb | 143 | 53.61 |
Student s = new Person();
This is not allowed in C++ too, C++ is type-safe too, albeit not as strongly type as C#, which we will discuss later what makes C++ type-safety a bit weaker, this will not compile too:
Student *s = new Person();
Now, let's try to coerce C# to point Student object to Person object, this will compile:
Student s = (Student) new Person();
Let's try the same with C++, this will compile too:
Student *s = (Student *) new Person();
The two codes above, though both will compile, that is where C# and C++ diverge, not only in terms of type safety but also in terms of runtime security(which we will discuss immediately). During runtime, C# will raise an error; whereas C++ will happily do your bidding, it will not raise any error. Now, how dangerous is that kind of code?
Imagine this is your object:
class Person { public int favoriteNumber; } class Student : Person { public int idNumber; }
If you allocate memories for Person, for example, if it so happen that the locations of these memory allocations are adjacent to each other: Person pA = new Person(); Person pB = new Person(); If the runtime will let us point Student s to Person pA, there's a higher chance that the assignments on fields of Student s could overwrite the content of adjacent Person pB object.
To make the principle more concrete, let's do it in a language which leniently allow that sort of thing.
#include <cstdio> class Person { public: int favoriteNumber; }; class Student : public Person { public: int idNumber; }; int main() { Person *p = new Person[2]; p[0].favoriteNumber = 7; p[1].favoriteNumber = 6; printf("\nP1 Fave# %d", p[0].favoriteNumber); printf("\nP2 Fave# %d", p[1].favoriteNumber); void *objek = (void *) &p[0]; // above code is equivalent to C#'s: // object objek = p[0]; Student *s = (Student *) objek; // above code is equivalent to C#'s: // Student s = (Student) objek; s->idNumber = 9; printf("\n\n"); printf("\nS# %d", s->idNumber); printf("\nP1 Fave# %d", p[0].favoriteNumber); printf("\nP2 Fave# %d", p[1].favoriteNumber); p[1].favoriteNumber = 8; printf("\n\n"); printf("\nS# %d", s->idNumber); printf("\n\n"); }
The output of that code:
P1 Fave# 7 P2 Fave# 6 S# 9 P1 Fave# 7 P2 Fave# 9 S# 8
Can you spot the anomalies? We assign a value 9 to student's idNumber, yet P2's favoriteNumber also changes. We changed the P2's favoriteNumber, yet student's idNumber also changes. Simply put, Student's field(s) overlap other objects' field(s) location , so that's the problem if a language allows arbitrary pointing of objects to any object type.
.........Person [0xCAFE] (favoriteNumber) : 7 [0xCAFF] (favoriteNumber) : 6
Student points to first person(which has an adjacent person):
.........Person Student [0xCAFE] (favoriteNumber) : 7 (favoriteNumber) [0xCAFF] (favoriteNumber) : 6 (idNumber)
If a language allows pointing Student to a Person's memory location(0xCAFE), what will happen if we change the value of Student's idNumber? the adjacent second person's favoriteNumber will be changed too, an unintentional corruption of memory. Worse yet, if this is done on purpose, it is a potential security problem. Think if Student can point to any object type, the idNumber could be used to peek other object's contents, even those other object's fields are private | https://www.ienablemuch.com/2011/09/c-type-safety.html | CC-MAIN-2021-17 | refinedweb | 552 | 54.76 |
Hi all
I want to write a program containing a main.c and pmu.h and definition
of header file in assembly called pmu.s, i wrote these file in this way:
main.c
#include <stdio.h>
#include "pmu.h"
int main(void)
{
enable_pmu(); // Enable the PMU
printf("hello world\n");
return 0;
}
pmu.h
#ifndef _PMU_h
#define _PMU_h
void enable_pmu(void);
#endif
when i build this program with eclipse and codesourcery i got this
error:
Description Resource Path Location Type
undefined reference to `enable_pmu' main.c /test2 line 7 C/C++
Problem
this problem looks simple but i searched a lot about that and i cann't
find the problem could you please help me?
thanks
Your pmu.s is in ARM assembler format but not in GNU assembler for ARM
format. You should rewrite it
I have no GNU ARM toolchain installed, but for AVR it would look like:
pmu.S
#if 0
#else
.global enable_pmu
.type enable_pmu, @function
enable_pmu:
ret
#endif
File extension is .S (uppercase) in order to indicate that tis is a
assembler source file and not an intermediate assembler file from the
toolchain (.s in lowercase).
Sorry the last answer is related to "Forum: ARM programming with GCC/GNU
tools". For Codesourcery i don't know the assembler syntax.
thanks dear Krapao for your reply, this assemble code was part of a
sample code i found in...
for using performance monitoring unit of arm 11. i will search for how
to rewrite it.
Can anyone help me to rewrite this assemble code to GNU assemble?
i have changed it in this way:
.global enable_pmu
enable_pmu:
MRC p15, 0, r0, c15, c12, 0
ORR r0, r0, #0x01
MCR p15, 0, r0, c15, c12, 0
BX lr
and it builds with no error but when i run it on my board(OK6410 with
Linux OS) it says "Illegal Instruction"
does anybody have any idea what the problem is?
thanks
Now i understand what you try to do.
You try to worl with your toolchain, what is called with KEIL ARM
toolchain "Compiler support for accessing registers using named register
variables"...
In detail you try something like example 6, but for understanding i
include example 5 too:
[citation]
You can also use named register variables to access registers within a
coprocessor. The string syntax within the declaration corresponds to how
you intend to use the variable. For example, to declare a variable that
you intend to use with the MCR instruction, look up the instruction
syntax for this instruction and use this syntax when you declare your
variable. See Example 5.
Example 5. Setting bits in a coprocessor register using a named register
variable
register unsigned int PMCR __asm(”cp15:0:c9:c12:0”);
__inline void __reset_cycle_counter(void)
{
PMCR = 4;
}
Disassembly:
__reset_cycle_counter PROC
MOV r0,#4
MCR p15,#0x0,r0,c9,c12,#0 ; move from r0 to c9
BX lr
ENDP
In Example 5, PMCR is declared as a register variable of type unsigned
int, that is associated with the cp15 coprocessor, with CRn = c9, CRm =
c12, opcode1 = 0, and opcode2 = 0 in an MCR or MRC instruction. The MCR
encoding in the disassembly corresponds with the register variable
declaration.
The physical coprocessor register is specified with a combination of the
two register numbers, CRn and CRm, and two opcode numbers. This maps to
a single physical register.
The same principle applies if you want to manipulate individual bits in
a register, but you write normal variable arithmetic in C, and the
compiler does a read-modify-write of the coprocessor register. See
Example 6.
Example 6. Manipulating bits in a coprocessor register using a named
register variable
register unsigned int SCTLR __asm(”cp15:0:c1:c0:0”);
/* Set bit 11 of the system control register */
void enable_branch_prediction(void)
{
SCTLR |= (1 << 11);
}
Disassembly:
__enable_branch_prediction PROC
MRC p15,#0x0,r0,c1,c0,#0
ORR r0,r0,#0x800
MCR p15,#0x0,r0,c1,c0,#0
BX lr
ENDP
[/citation]
Unfortuantely i can not check your link in detail.
[citation].
[/citation]
I don't find "Basic example code" and DS-5 website is currenty down.
Questions:
1/ Is enable_pmu allowed on OK6410 with Linux OS? The code from your
first post would work on your board when translated with KEIL?
2/ Is r0 a register you can use in GNU toolchain freely? The code before
and after the call to enable_pmu doesn't depend on r0?
4/ Did you check whether GNU toolchain allows "named registers" too, so
you can write enable_pmu in C only instead in
KEIL-Disassembly-transferred-in-GNU-Assembly-linked-with-GNU-C?
Add:
GNU GCC doc may help esp....
>...
>AREA v6_pmu, CODE,READONLY
>...
See AREA in the Realview Assembler documentation in ARM's infocenter and
the about the SECTION-instruction in the GNU Manuals (GNU-Assembler,
GNU-Linker part of the binutils). You may also have to check the
scatter-load file in the example from ARM and modify your linker-script
to reflect the functionality. It seems the instructions for the
procedure are expected at a special memory-location. It's difficult to
help without a minimal but complete example.
Hi Krapao, I attached Performance Monitor Unit example code for ARM11
and Cortex-A/R , that I found on...
1/ Is enable_pmu allowed on OK6410 with Linux OS? The code from your
first post would work on your board when translated with KEIL?
I will appreciate if I could run these samples on OK6410 with Linux OS
and it’s my goal now. But I don’t know whether its possible or not. I
didn’t test translating with KEIL, if I’m right KEIL is used for
bare-metal devices and does not support GNU assemblers!?!?
4/ Did you check whether GNU toolchain allows "named registers" too, so
you can write enable_pmu in C only instead in
KEIL-Disassembly-transferred-in-GNU-Assembly-linked-with-GNU-C?
Based on your advice I wrote a test program for named register
variables, my test program was:
#include <stdio.h>
int main(void)
{
register unsigned int APSR __asm("apsr");
APSR = ~(~APSR | 0x40);
register unsigned int SCTLR __asm("cp15:0:c1:c0:0");
void enable_branch_prediction(void)
{
SCTLR |= (1 << 11);
}
register unsigned int PMCR __asm("cp15:0:c9:c12:0");
__inline void __reset_cycle_counter(void)
{
PMCR = 4;
}
register unsigned int cp15_control __asm("cp15:0:c1:c0:0");
cp15_control |= 0x1;
printf("Before call function\n");
enable_branch_prediction();
__reset_cycle_counter();
printf("after call function\n");
return 0;
}
I used Eclipse IDE for C/C++ Developers(Indigo Service Release 1) and
Sorcery Codebench lite(arm-2011.09-70-arm-none-linux-gnueabi)
But there are some build errors:
invalid register name for 'apsr'
invalid register name for 'APSR'
invalid register name for 'cp15_control'
invalid register name for 'PMCR'
invalid register name for 'SCTLR'
make: *** [main.o] Error 1
do you think these errors says that GNU toolchain doesn’t allows "named
registers"?
dear mthomas thanks for reply, unfortunately I can’t get your main
idea, could you please clarify your purpose with the steps that I should
done.
When you want to write a program for a device that has OS (same as my
board that has linux), is it possible for you to manage the memory
location that your program runs or operating system got the management?
In my case I only want to config performance monitoring registers of
ARM1176jzf-s processor with c and assembly programs that runs on linux
OS.
thanks
In your case (OK6410 AND GNU toolchain AND coprocessor use) you are
about to enter a unknown continent. You are amoung the first maybe this
very first explorer of this new land.
Your tools are the description of the GNU tools and your processor
manual and the way others took with KEIL toolchain.
> In my case I only want to config performance monitoring registers of
> ARM1176jzf-s processor with c and assembly programs that runs on linux
> OS.
> if I’m right KEIL is used for bare-metal devices
Then... In Linux access to some registers is possible only in privileged
modes. "Bare-metal" access like in the KEIL examples is not possible. So
you have to throw away "maps" from KEIL explorers and you have to find
another way...
Under Linux there can be a special v6 PMU driver together with a
Performance Events library (perfevents). Does your Linux on OK6410 has
this library enabled? If yes then look i the source and examples for
this library.
I'll stop here, because this is way to far away for me from initial
question. I'm guessing only and can't give much input now.
dear Krapao thanks for your advices and hints, it was so useful for me!
;-) | https://embdev.net/topic/249952 | CC-MAIN-2018-30 | refinedweb | 1,449 | 62.58 |
Cool site loads different addspe
...- where the user adds a new
Last week i got project to complete 15 assignments on cultural activities or sports activities .i have done it in two days .i will do work fast as soon ^_^
To be use on instastory igtv Insta @articases within the app. - An About Screen.
...dice (1...2...3...4...5) or by pressing the "Roll" button. For each
Need adds changed to new landing pages approx 10 adds to be changed
its for youtube but i cant sing im really bad so i need help and ill pay you in 3 days or less but you have to do i..
I want a new logo for my site [log masuk untuk melih ideas based on how our
Require Demo Videos of these areas. "Digital marketing", "PHP-My Sql", "Oracle PL-SQL" & "Oracle 11g DBA" My Budget is Low. So That I will pay a fixed amount as given in the Budget Quote. Thanks. Waiting for the providers of Videos of my requirement
.. testing and fill some data (this is calculation based data which would like at least the home page to be interactive and not just the standard flat photo based website that ca..
(theklothstore ) Find t-shirts online featuring a small pocket on the left side. These can be plain or printed but this cute pocket detailing adds a lot of style to your dressing. [log masuk untuk melihat URL] (Removed by Freelancer.com Admin)
am looking for a logo and art work design for my Doughnut Store. The doughnut shop name is DOUGHNUTLY. It must have its own unique FONT....include in some if the design and etc. The slogan must be marketable and related with the product and company name. Maybe it can rhyme with the name or etc. Looking for really cool, unique and marketable redirec...
import 2 xml of different language , WPML setup , site is already done
Trucking Loads Website
I am)
I own a laravel app that needs 2 new packages added to it. The two...com/orchestral/tenanti (this handles multi-database) Currently the app works with 1 database for multiple tenants by using a company ID identifier for the information. The other package adds subdomain support. This will be helpful for adding a frontpage home page for each client.
I need.
...time thinking of logo design ideas because the name is so long (my mistake) and I don't want it to be cheesy or spammy looking. I would like something professional, modern, cool, authoritative, trustworthy. Open to any ideas. Check images for designs I previously made myself (that I don't like.) Primarily thinking white on black or black on white but
..
..
Sila Dafter atau Log masuk untuk melihat butiran.
.. the customer adds 2 quantity to
Edits Videos Adds animation as needed using after effects/3D animation.
my web site is [log masuk untuk melihat URL] id like a cool logo for sk ventures. maybe. SKV. skv for short. monogram | https://www.my.freelancer.com/job-search/cool-site-loads-different-adds/ | CC-MAIN-2019-22 | refinedweb | 500 | 74.39 |
Frank on Fire: Getting Started with Sinatra and Ember-CLI
application rather than writing boilerplate. And the announcement in the Roadmap to Ember 2.0 that Ember-CLI will become a "first class part of the Ember experience" is all the more reason to start using it now.
Although Ember is impressively backend agnostic, the de-facto choice for serving Ember apps and providing APIs has generally been nodeJS. Using one language on both client and server, but in this post I'm going to talk about a different option: using Ruby and Sinatra as the backend.
I won't spend too long arguing for the value of Sinatra vs Node for the backend, but I think there are some worthy points in Ruby's favor. First and foremost, Ruby has a diverse set of tools for interacting with databases of all kinds, and has particularly good support for SQL databases courtesy of Rails and ActiveRecord. While there are ORM options for node, none have the maturity (or, in my opinion, elegance and ease of use) of the options on the Ruby side. Second, Ruby makes it very easy to get up and running with a prototype application, and Sinatra takes this attitude and applies it to web development. It also has a huge ecosystem of mature and well supported libraries ("gems" for the non-rubyist) that can be used to quickly add features to any application, making prototyping even easier. In short, the language has incredible whipupitude, which is just what we need for building a simple backend for an Ember application.
Setup
Before getting started on our app, we'll need to install a few prerequisites:
Once these are set up, we'll be able to manage installing libraries and any future dependencies via Bower and npm.
Installing npm can be a little tricky, but since it comes with Node.js I'd suggest just installing that if you're unsure where to start.
Once you have npm, the rest of the dependencies are easy to install:
Bower
npm install -g bower
Ember-CLI
npm install -g ember-cli
Ruby
If you're following along on a Mac, you already have Ruby installed, so you can skip this step! If not, you have a few different options to install Ruby. One good choice is RVM, which can be installed along with the current stable version of MRI Ruby by running
\curl -sSL | bash -s stable
Bundler
Bundler can be installed as a Ruby gem
gem install bundler
A Star Is Born - Generating a new Ember application
Once all the dependencies are installed, its time to generate a skeleton for our application! For the purposes of this tutorial, lets call our app newsboy_blues, and generate it using the "ember new" command
ember new newsboy_blues
Now you can switch into the new
newsboy_blues directory and check your setup by running
ember server and visiting your application at. You should see "Welcome to Ember.js" if everything is set up properly.
Sinatra Takes The Stage - Setting up the server
Next we'll create a basic Sinatra application to serve up our application. For now, this may seem like extra work compared to using the node server that comes with ember-cli, but we'll begin to see the advantages of using Ruby on the server when it comes time to access the database.
To start, create a file called
server.rb with the following contents:
# server.rb require 'sinatra' configure do set :public_folder, File.expand_path('dist') end get '*' do send_file 'dist/index.html' end
and another file called
Gemfile that we'll use to install Ruby libraries
# Gemfile source '' gem 'sinatra'
Then have bundler install Sinatra by running
bundle install
Finally, we'll need to compile the javascript for our application. Although our Sinatra application won't do this automatically, ember-cli includes a command to watch for changes to our project and automatically avenge when they happen. Open up a new terminal, and run
ember build --watch
Now all we have to do is start the application
ruby server.rb
And we can again see "Welcome to Ember.js" when visiting. Note the change of port; by default Sinatra runs on 4567.
Start Spreadin' The News - Building the client using Fixtures
Now that all the groundwork is laid, we can really begin building a news reader. In the end we'll be able to pull news from any RSS feed using our server, but to begin we'll just use fixture data so we can focus on building the interface.
To get started we can use ember-cli's
generate command to generate scaffolds for the routes, models, and templates we'll be creating. First we generate a model we can use to represent a news story
ember generate model story
Next we generate a route which we'll use to link our story model to a view template
ember generate route stories
we'll also need to create our own application adapter to use fixture models
ember generate adapter application
and an index route so we can redirect to our main 'stories' route
ember generate route index
With all of this finished, we can fill in some simple behavior for our nascent news app. First we'll change
app/adapters/application.js to tell ember to use the
FixtureAdapter adapter for loading models
// app/adapters/application.js import DS from 'ember-data'; export default DS.FixtureAdapter.extend({ });
Then set up
app/routes/index.js to redirect to our stories route
// app/routes/index.js import Ember from 'ember'; export default Ember.Route.extend({ redirect: function(){ this.transitionTo('stories'); } });
Next we'll set up our Story model in
app/models/story.js, defining a few basic attributes as well as some fixtures.
// app/models/story.js import DS from 'ember-data'; var Story = DS.Model.extend({ title: DS.attr('string'), url: DS.attr('string'), story_content: DS.attr('string') }); Story.reopenClass({ FIXTURES: [ { id: 1, title: 'Local Web Consultancy Fights Off Alien Invasion, Saves Kitten', url: '', story_content: 'Continuing their streak of epic daring do, crime fighting outfit Bendyworks has...' }, { id: 2, title: 'Teach Your Cat To Fly With This One Weird Trick', url: '', story_content: "You'll never believe..." } ] }); export default Story;
And update
app/routes/stories.jsto load our Story fixtures as the model for the 'stories' route
// app/routes/stories.js import Ember from 'ember'; export default Ember.Route.extend({ model: function() { return this.store.find('story'); } });
Last but certainly not least, we'll edit the 'stories' template in
app/templates/stories.hbs to define how our stories are displayed. Note the use of the 'triple handlebar' for story_content to prevent Handlebars from HTML escaping the contents of a story, which usually has HTML formatting. A word of warning though; mixing in content from unknown sources is a serious security issue. Eventually you'd want to sanitize the contents of each story on the server side, but since we're just using fixtures that we wrote there's no need to worry about it for now.
<--! app/templates/stories.hbs --> <ul> {{#each model}} <li><h4><a {{bind-attr href=url}}>{{title}}</a></h4></li> <p>{{{story_content}}}</p> {{/each}} </ul>
And now we can see our exciting news stories displayed at! If you don't see any change, make sure you're running
ember build --watch, and that its output isn't showing any errors.
Something Wonderful - Using gems to make Database and RSS access a breeze
Up until this point we haven't been using our Sinatra app for anything other than serving up the main Ember application. Since we had to go through a few (small) contortions to get this right, you may be wondering where Sinatra adds value in this application. The answer is in Ruby's excellent library support for all sorts of server side operations. In this case, database access and RSS parsing.
For collecting news stories to display in our ember app, we'll use the Feedjira gem, which provides a simple interface for fetching and parsing RSS feeds. To store and persist our stories, we'll be using the sinatra-activerecord gem with sqlite. This will let us get up and running faster, but if we ever want to handle large numbers of stories or deploy our application to Heroku it will be fairly simple to switch to PostgreSQL.
To get started, add the 'feedjira' and 'sinatra-activerecord' gems to your Gemfile, as well as 'sqlite3' and 'rake' which are required by sinatra-activerecord.
# Gemfile source '' gem 'sinatra' gem 'feedjira' gem 'sinatra-activerecord' gem 'sqlite3' gem 'rake'
Next, require the new libraries inside server.rb, and add a line to the configuration block telling the app which database to use.
# server.rb require 'sinatra' require 'sinatra/activerecord' require 'feedjira' configure do set :public_folder, File.expand_path('dist') set :database, {adapter: "sqlite3", database: "news.sqlite3"} end get '*' do send_file 'dist/index.html' end
Finally, create a Rakefile and require the sinatra-activerecord tasks and the main application.
# Rakefile require "sinatra/activerecord/rake" require "./server"
Now just run
bundle install and
rake db:create. If it returns successfully the database is setup and ready for us to start defining the tables that will go in it.
To begin building a simple "Stories" model and database table, run
rake db:create_migration NAME=create_stories. Fill out the migration as follows
class CreateStories < ActiveRecord::Migration def change create_table :stories do |t| t.string :title t.text :story_content t.string :url end end end
Sinatra won't generate a model file for us like Rails does, so let's create an empty one in
lib/models/story.rb.
# lib/models/story.rb class Story < ActiveRecord::Base end
You'll also need to load the model by adding
require_relative 'lib/models/story' to your main server.rb file
Then run
rake db:migrate, and we're ready to populate our stories table with data from an RSS feed! Although a fully featured news reader would probably have some concept of a "Feed" which could be used to organize stories, our simple ember app only models Stories, so we won't bother creating a model for Feeds.
Since we already have a Rakefile, lets just add another task to fetch all new stories from a feed for now.
# Rakefile require "sinatra/activerecord/rake" require "./server" desc "fetch stories from RSS feed" task :fetch_stories, [:url] do |t, args| Story.fetch_from(args[:url]) end
And then implement the
fetch_from(url) function in the Story model.
# lib/models/story.rb class Story < ActiveRecord::Base def self.fetch_from(url) f = Feedjira::Feed.fetch_and_parse(url) f.entries.each do |e| unless Story.find_by(url: e.url).present? Story.create(title: e.title, url: e.url, story_content: e.content) puts "Story #{e.title} added!" end end end end
Now you should be able to run
rake fetch_stories[]
and see a list of stories such as
Story The Iconic Madison – Free Icon Set added! Story 2014 Rails Rumble added! Story The Old and the New: SOAP and Ember.js added! Story Two Keynote tips that everyone should know added! Story Transducers: Clojure’s Next Big Idea added! Story Why We Can’t Wait for Madison+ Ruby added! Story BendyConf: Tech Education for the Rest of Us added! Story Tessel: A First Look at JavaScript on Hardware added! Story Externally Embedding Ember added! Story BendyConf: A Paean To Plain Text added!
Indicating that everything worked, and some stories have been added to our database.
Can't We Be Friends? - Integrating server and client
All that's left to do now is set up our Ember application to fetch and display these stories. Because we've already developed the front end using fixtures, we don't need to worry about any more template changes or controller logic.
First, add a route to
server.rb that will return the stories in our database
# server.rb require 'sinatra' require 'sinatra/activerecord' require 'feedjira' require_relative 'lib/models/story' configure do set :public_folder, File.expand_path('dist') set :database, {adapter: "sqlite3", database: "news.sqlite3"} end get '/api/stories' do content_type :json {stories: Story.all}.to_json end get '*' do send_file 'dist/index.html' end
Note that this new route is "namespaced" under
/api namespace, to avoid conflicting with the
/stories route on the Ember side.
To get Ember to fetch data from the server instead of from the fixtures, just remove the fixtures, change the application adapter from a
FixtureAdapter to the built in
ActiveModelAdapter, and place the new adapter under the api namespace.
// app/adapters/application.js import DS from 'ember-data'; export default DS.ActiveModelAdapter.extend({ namespace: 'api' });
// app/models/story.js import DS from 'ember-data'; var Story = DS.Model.extend({ title: DS.attr('string'), url: DS.attr('string'), story_content: DS.attr('string') }); export default Story;
Now your stories will be loaded and displayed when you visit
localhost:4567! (Don't forget to restart
server.rb and re-run
ember build)
Don't Like Goodbyes - What to work on next
Congratulations, you have a working Ember/Sinatra application that can fetch and display news stories from any RSS feed in the World! The finished code for this tutorial is also available on Github. But its not very feature rich, and it certainly won't be winning any design awards. Below are some ideas of where to go next in developing the app. Some may form the basis of future blog posts, and hopefully others will inspire readers to build their own amazing news reading tools.
Improve the interface
Our interface is about as bare bones as could be. We could start by adding a route and template to display individual stories at the very least, but adding some basic design touches or starting to style our page with Bootstrap would be a good goal.
Read and unread stories
Once you've read a story, it'd be nice to mark it as read and not see it again. The first step would be adding the necessary fields to the Stories table in our database, and the Story model in our Ember application. After that, Ember Data's ActiveModelAdapter will automatically generate a standard RESTful PUT request when we save a story model on the client side, and all we'll have to do is add a
put "/stories" route to our Sinatra server.
Automatically fetch new stories
Having to run the
rake fetch_stories command every time we want to get new stories from an RSS feed. You could just hard code a list of feed urls and fetch them each time, but a more robust solution would be to create a new "Feed" model and database table, using it to remember which feeds to update. You could then regularly schedule updates using cron or the excellent whenever gem.
Pagination
Once you get more than a few stories in your database, sending them all from the server and rendering each one in the DOM will waste resources, especially if you're only there to view the first few stories. Although Ember doesn't have any built in pagination functions, the ember-cli-pagination addon is worth investigating. I've also had luck with simply managing a
page variable in the
ApplicationController, and using the will_paginate gem to handle things on the server side.
Switching to Postgres and deploying to Heroku
We used sqlite for this tutorial since its easy to setup and use from Sinatra, but as the number of stories in our database grows we'll want to switch to a more performant SQL software. We're big fans of PostgreSQL here at Bendyworks, and switching to it from sqlite isn't particularly difficult once you have it installed and configured. Switching to Postgres gives us another unexpected benefit however; its the last hoop to jump through before we can easily deploy our application to Heroku using the Ember-CLI buildpack. Then you can read your own personalized news feed from any internet connected device! (although you may want to start thinking about authentication before adding features that update the database)
Take the news into your own hands
Although building a news reader with Ember-CLI has been a great way to learn more Ember and experiment with serving up the assets and API through Sinatra, my original motivation was at least in part to start building a news reader that won't get shut down or bought and integrated into an app I like less. Beyond that, I wanted the ability to score and sort stories by a user controlled 'interestingness' metric. I've also always wanted to build a news reader and call it Bottomless Soup Bowl in honor of this ig nobel prize. It's still pretty experimental, but if you'd like to see a demo of what the app could look like after some of the features discussed here are added, check out bsb.herokuapp.com. Any input, suggestions, or Pull Requests are also very welcome on the wstrinz/bsb Github repository. | https://bendyworks.com/blog/frank-fire-sinatra-ember-cli/index | CC-MAIN-2018-51 | refinedweb | 2,831 | 62.68 |
Classnames JoinerClassnames Joiner
A fast, even simpler utility for conditionally joining class names together in Javascript and Typescript.
Inspired by classnames, this package was created as a simpler Javascript/Typescript alternative that supports only the array of string|null|undefined syntax. This was by design, and to slim down the API surface.
FeaturesFeatures
😊Simple API 📜Typescript / Javascript ⚡Fast. The whole operation only loops once, and relies on string concatenation. This is faster than pushing into an array, and then joining the array, which is actually 2 loops. Will look into adding some benchmark comparisons. 🧘Flexible. Use it in any Javascript framework, for any CSS-in-JS solution. I use it to join CSS Modules and utility classes.
LimitationsLimitations
- Does not support the object notation for input for simplicity.
- Does not do de-duplication of class names for speed reasons.
UsageUsage
import { classnames } from "classnames-joiner"; const a = "someClass"; const b = null; const c = "someOtherClass"; const d = undefined; const result = classnames([a, b, c, d]); console.log(result); // "someClass someOtherClass" | https://www.npmjs.com/package/classnames-joiner | CC-MAIN-2022-27 | refinedweb | 169 | 50.63 |
Feature #14927
Description
Just a proof concept I wanted to share. Maybe it could be useful?
Say you want to load all the .rb files in your lib directory:
Dir['lib/**/*.rb'].each { |file| load(file) }
This approach may not work if your files have dependencies like that:
# lib/foo.rb class Foo < Bar end
# lib/bar.rb class Bar end
Foo class needs Bar class. You will get a NameError (uninitialized constant Bar).
So in my personal projects, I use this algorithm to load all my files and to automatically take care of dependencies (class/include):
def boot(files) i = 0 while i < files.length begin load(files[i]) rescue NameError i += 1 else while i > 0 files.push(files.shift) i -= 1 end files.shift end end end
boot Dir['lib/**/*.rb'] # It works! foo.rb and bar.rb are properly loaded.
My point is: it would be cool if Kernel#load could receive an array of filenames (to load all these files in the proper order). So we could load all our libs with just a single line:
load Dir['{path1,path2}/**/*.rb']
History
Updated by shevegen (Robert A. Heiler) over 1 year ago
I wanted to propose a more sophisticated load-process in ruby some time ago, but I never got
around it. I am even thinking of being able to load files based on abbreviations/shortcuts,
without necessiting a hardcoded path (e. g. require needs the path, whereas with an
abbreviation we could only refer to that abbreviation, and an internal list keeps track of
where the actual file resides instead). But it's not so simple to suggest something that
has a real chance of inclusion. I am glad to see other people have somewhat similar ideas -
of course your suggestion is quite different from my idea, but you tap into a very similar
situation:
- Handling multiple files.
This is especially useful for larger ruby projects. For smaller projects it is not so important
perhaps but when you have like +50 .rb files and growing, making things easier in regards to
handling files, would be great.
To the suggestion - I think several ruby hackers may benefit from better handling of files.
I am not sure if there is a big chance to see load() and require() itself being changed, but
I also don't know. I think we should ask matz, but there may be a chance that they may not be
changed, possibly due to backwards compatibility (if there is a problem). In the long run we
may want to consider using alternative means. For example, the require-family, such as
require_relative(). I don't mean require_relative in itself, but something related to require_*.
require_relative also handles location to other files, just relative to the directory at hand.
By the way, I also understand this use case:
This approach may not work if your files have dependencies like that:
And it is related to another use case which isn't a lot of fun:
- Circular dependencies + warnings about this
I also thought about this with my never-written proposal... :D
Circular dependency warnings are not a lot of fun IMO.
I think Hiroshi Shibata also had a suggestion in regards to ... require, I think, some months
or a few years ago, but I don't remember what it was exactly.
Anyway, before I write way too much and digress from the suggestion,
I am in general in favour of your suggestion. I don't have any particular
opinion on your proposed solution - another API may be fine or perhaps
a new method... load_files() ? Hmm... may not be an ideal name either.
But I think the specific API may be a detail. The more important aspect
is whether ruby can provide easier means for ruby users to load or
require a batch of files. Perhaps load() and require() will remain as
they are, for simplicity and backwards compatibility, but in such a
case we could think about better ways to handle the task of "pulling
all necessary files" into a project. This may also help people when
they create gems.
In my own larger gems I do something very similar as to what Sébastien
Durand showed, e. g. I also do Dir['*.rb'] often on a per-directory
basis. That way I don't have to specify the names of the individual
.rb files.
Anyway, +1 from me.
Updated by ahorek (Pavel Rosický) over 1 year ago
Dir glob has to find all files, sort them, create objects.
Then require loads them again from the filesystem...
I think Dir.glob + require is a very common pattern and if we have a function like require_directory / require_tree? some of these unnecessary steps could be skipped and simplified.
Updated by shevegen (Robert A. Heiler) about 1 year ago
I thought about creating a new issue but then I remembered that
the issue here refers to a similar use case that I wanted to
show.
Take the following link as an example:"
As you can see, there are several require statements for the
subdirectory at fpm/package/.
I think this is a very common use case. I encounter it myself
a lot in (almost) daily writing of ruby code, where I have to
load code stored in .rb files spread out.
Of course there are workarounds over the above, e. g. the
Dir[] or Dir.glob example that was given here (and the former
I use a lot). But I think it may be nicer to have an official
API support this as well.
The name could be:
require_files
The first argument could be the path to the subdirectory at
hand; the second argument could be an options Hash that allows
more fine-tuning, such as traversing subdirectories, handling
.so files as well, or exclusively, and so on and so forth.
I believe it may fit into the "require" family, since that
already has e. g. require_relative.
In the long run it would be nice to even be able to refer to
.rb files without having to use any hardcoded path at all -
but for the time being, any support for requiring/loading
files helps a lot.
(To the issue of dependencies in said .rb files, I usually
batch-load the .rb files, and if I get some error about
an uninitialized constant, I add it into that .rb file at
hand. It's a bit cumbersome but I understand that this part
is not easy to change presently.)
I think require_directory() is a better name that require_tree()
but I also like require_files().
The more important part is to want to convince that this is
a common pattern, which is also why I added an example from
a quite popular ruby project (fpm presently has ~7.3 million
downloads on rubygems.org).
What I encounter myself doing is that, for my larger projects
in ruby, I end up creating a subdirectory called requires/ and
in that directory I put .rb files that handle loading of
require-related activities, including subdirectories and external
dependencies.
Updated by nobu (Nobuyoshi Nakada) about 1 year ago
shevegen (Robert A. Heiler) wrote:"
Doesn't the order matter?
Updated by marcandre (Marc-Andre Lafortune) about 1 year ago
nobu (Nobuyoshi Nakada) wrote:
Doesn't the order matter?
Very often, it does not. If it does, one can always require the one that's needed first, say, then require the whole directory; require won't load the same file twice so this works fine.
We wrote a small method for this in
deep-cover:
One thing I really like about it is that it makes it clear that the whole directory is loaded. If there's a missing
require "fpm/package/something" in the list above, it can take a while to notice.
My opinion on the feature request: not great for
load, but could be useful for multiple
require_relative.
Updated by marcandre (Marc-Andre Lafortune) about 1 year ago
- Assignee set to matz (Yukihiro Matsumoto)
Until we convince Matz, I pushed a gem
require_relative_dir which hopefully can be helpful to others.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/14927 | CC-MAIN-2019-51 | refinedweb | 1,352 | 65.22 |
I want to create a link list that has two record,
and insert another record in between the two.
However my program is not working properly in the sense that the linked list did not really re-link after I commanded it to.
Could you also please give me some pointers how on to reduce the size of this program?Could you also please give me some pointers how on to reduce the size of this program?Code:/* This program will attempt to insert a record in between two records */ #include <iostream> using namespace std; struct list{ char event[20]; int year; list *next; }* root, event1, event2; void insert(list*, char incident[20], int); int main() { char event[20]; int year; list event1 = { "World War I", 1914}; list event2 = { "World War II", 1939}; root = &event1; event1.next = &event2; event2.next = NULL; cout<<"Please enter an event and a date in between 1914 and 1939."<<endl; cin>>event>>year; insert(root, event, year); return 0; } void insert(list *head, char incident[20], int year) { list event_between; event_between.event[20] = incident[20]; event_between.year = year; //The pointer in event1 should point to the reference of event_between and relink the linked list event1.next =&event_between; event_between.next = &event2; event2.next=NULL; cout<<"The new list has the following entries: "<<endl; while(head!=NULL) { cout<<head->event<<' '<<head->year; cout<<endl; head=head->next; } }
I'm only beginning C++, thanks for all helps | http://cboard.cprogramming.com/cplusplus-programming/141054-basic-linked-list-program-problem.html | CC-MAIN-2016-40 | refinedweb | 239 | 65.12 |
OpenCV can be used to easily read an image file and display it in a window. The
core and
highgui modules are needed to achieve this. The image is read into a
cv::Mat object.
cv::Mat is the modern replacement for the old
IplImage object, which came from Intel.
#include <opencv2/highgui.hpp> // Read image from file cv::Mat img = cv::imread("foobar.jpg"); // Create a window cv::namedWindow("foobar"); // Display image in window cv::imshow("foobar", img); // Wait for user to press a key in window cv::waitKey(0);
To be able to compile this code, link with the
core and
highgui modules using
-lopencv_core -lopencv_highgui.
The legacy method to achieve the same using
IplImage is:
#include <opencv2/opencv.hpp> // Read image from file IplImage* img = cvLoadImage("foobar.jpg"); // Display image in window cvLoadImage("foobar", img); // Wait for user to press a key in window cvWaitKey(0);
Tried with: Ubuntu 12.04 LTS | https://codeyarns.com/2014/02/11/how-to-read-and-display-image-in-opencv/ | CC-MAIN-2020-34 | refinedweb | 155 | 64.81 |
Migrating a Word VBA Solution to Visual Basic.
Jan Fransen, OfficeZealot.com
Lori Turner, Microsoft Corporation
Published: August 2005
Updated: January 2006
Applies to: Microsoft Visual Studio 2005 Tools for the Microsoft Office System, , Microsoft Office Word 2003, Microsoft Office Outlook 2003, Microsoft Visual Basic, Microsoft Visual Basic for Applications
Summary: Learn about opportunities and challenges we encountered when we migrated a traditional Microsoft Office Visual Basic for Applications (VBA) solution to managed code, using Visual Studio 2005 Tools for Office. (32 printed pages)
Download OfficeVSTOMigratingWordVBAtoVBNET.msi.
Contents
Currently, developers show strong interest in using Microsoft Office 2003 Professional as a smart client development platform. The terminology may be new, but the interest is no surprise to one segment of the Microsoft development community: the long-time Microsoft Office and Microsoft Visual Basic for Applications (VBA) developer. The renewed interest in Microsoft Office development is mostly because of two innovations: the greatly expanded support of XML in Microsoft Office Word 2003 and Microsoft Office Excel 2003, and the introduction of Microsoft Visual Studio 2005 Tools for the Microsoft Office System (Visual Studio 2005 Tools for Office).
XML support means that you can separate the data stored in an Office document from the formatting of the document. End users still work with Word documents and Excel workbooks using the applications that they know, but data can be easily added to or extracted from these files for use in other applications.
Visual Studio 2005 Tools for Office allows you to use Microsoft Visual Basic or Microsoft Visual C# to create the same types of document-centric Office solutions traditionally written in VBA. Visual Studio 2005 Tools for Office opens up capabilities that are not available to Microsoft Office developers using VBA or that are difficult to implement.
If you develop Microsoft Office solutions using VBA, you may be curious about what you gain by using Microsoft Visual Studio 2005 and about how the development experience compares with VBA. You may also want to know, from a practical point of view, where to focus your education, if you want to fully use the capabilities of Visual Studio and the Microsoft .NET Framework. This article explores these issues from the perspective of a single Word-based solution. First we look at the original VBA solution, and then we look at how to migrate the solution to Visual Basic, using Visual Studio 2005 Tools for Office.
Software Requirements
To run the migration solution, you must have the following software installed:
Microsoft Windows 2000 or later
Microsoft Office Professional 2003 SP1, the complete installation or the standalone versions of Microsoft Office Word 2003 and Microsoft Office Outlook 2003 (including the primary interop assemblies)
Microsoft Visual Studio 2005 Tools for the Microsoft Office System
Microsoft SQL Server 2000, Microsoft SQL Server 2000 Desktop Engine (MSDE 2000), or Microsoft SQL Server 2005 Express (included with Visual Studio 2005 Tools for Office)
Copying the Sample
To download the sample application that accompanies this article, click OfficeVSTOMigratingWordVBAtoVBNET.msi at the top of this article. In the Microsoft Download Center, click Download, and then follow the instructions to run the installation.
By default, the sample files copy to the My Documents\Visual Studio 2005\Projects\Word VBA Migration Sample directory.
Creating the Database and Security Policies
Create the SQL Server database on your local computer and add the security policies required by the migration solution.
To create the database and security policies
Start a Visual Studio 2005 command prompt:
On the Start menu, click All Programs.
Point to Microsoft Visual Studio 2005, and then point to Visual Studio Tools.
Click Visual Studio 2005 Command Prompt.
A command window opens.
Change to the folder where the sample files reside:
To set up the security policies, execute the following batch procedure:
If you are prompted to add the security policies, press Y and then press ENTER for each prompt.
Change to the folder where the database files reside:
To install the database on your local computer running SQL Server, execute the following batch procedure:
-or-
To specify a server name, use:
The default name for SQL Server 2005 Express is .\SQLExpress.
Close the command window.
Setting up the VBA Sample
The VBA sample defaults to a SQL Server connection where the server name is (local).
To specify a different server name
Start Word.
Open the CustomerCommunicationVBA.dot template in the Customer Communications VBA folder.
Open the Visual Basic Editor, and then view the code of the frmBuildLetter form.
Navigate to the UserForm_Initialize method, and then replace (local) in the cnn variable with the name of your server.
Save your changes to the code, and then close the Word template.
Building the Visual Studio 2005 Tools for Office Sample
Open Visual Studio 2005 Tools for Office.
On the File menu, click Open Project/Solution.
Navigate to the CategoryLibrary folder, and then choose CategoryLibrary.sln.
The path to the folder is My Documents\Visual Studio 2005\Projects\Word VBA Migration Sample\CategoryLibrary. CategoryLibrary defaults to a SQL Server connection where the server name is (local).
To specify a different server name:
Right-click the CategoryLibrary project in Solution Explorer and choose Properties.
Click the Settings tab.
Replace (local) (in the value property of the NorthwindVSTO2005ConnectionString setting) with the name of your server.
On the Build menu, click Build Solution.
On the File menu, click Open Project.
Navigate to the Customer Communication VSTO 2005 folder, and then choose CustomerCommunication.sln.
The path to the folder is My Documents\Visual Studio 2005\Projects\Word VBA Migration Sample\Customer Communication VSTO 2005. CustomerCommunication defaults to a SQL Server connection where the server name is (local).
Follow the same procedure as for the CategoryLibrary project to change the server name, if necessary.
Right-click the LetterWizard project in Solution Explorer, and then select Add Reference.
On the Browse tab, open the Release folder or the Debug folder.
The path to the folder is My Documents\Visual Studio 2005\Projects\Word VBA Migration Sample\CategoryLibrary\bin\Release (or Debug).
Select CategoryLibrary.dll, and then click OK.
Press F5 to build and run the solution.
If you did not choose the installation option "Complete" when you installed Microsoft Office 2003, you may receive compilation errors related to not being able to find Microsoft.Office.Core or Microsoft.Office.Interop components. One way to correct this problem is to create empty Excel and Word applications, which install the components before running the new applications. Another way is to select Advanced during Microsoft Office installation and specify these three options:
.NET Programmability Support for Excel and Word
Forms 2.0 .NET Programmability Support
Microsoft Graph .NET Programmability Support for Visual Studio Tools for Office.
To run the application outside of the Visual Studio development environment, you must open the CustomerCommunication.dot file in the Customer Communication VSTO 2005\bin folder.
The Word solution we used for this case study cannot include every possible scenario you might encounter in your own solutions based on Microsoft Office. Instead, the solution focuses on a few techniques that are frequently found in VBA solutions:
The user interface (UI) is implemented through a form.
Data stored in a database is used to fill form combo boxes and list boxes.
The document's content can be changed using:
Entries made in the form.
A control placed directly on the document.
Data that is accessible through a custom class library.
The VBA code uses the Word object model to update the document.
The completed document can be used as the body of an e-mail message, created on-demand in Microsoft Office Outlook using the Outlook object model.
VBA Solution Overview
The marketing department at the fictitious Northwind Traders Company uses several boilerplate documents for customer communication that is produced on a recurring schedule. Each document is in the form of a letter with consistently formatted address and signature information, but with different body text. To keep the case study simple, only three letters are included.
Before any solution was available, Northwind used templates to create new documents with the boilerplate text and formatting. But information about the sender and the recipient, as well as customized information for the particular letter, had to be typed in by the user.
The VBA solution was developed to assist in the creation of these kinds of customer communication documents. The VBA solution consists of:
A Word template, CustomerCommunicationVBA.dot
The template contains a user form that prompts the user for information about which letter to use, sender and recipient information, and additional information that varies depending on the letter chosen.
Three supporting document files
These files must reside in a subfolder of the template's folder, named DocumentBodies. These documents contain the text for each of the customer communication documents that users can create by using the template.
A custom SQL Server database, NorthwindVSTO2005
This database is based on the Northwind Traders sample database, which is included with Microsoft SQL Server.
A custom class library, CategoryLibrary
This library returns either a list of features or a list of testimonials for a given product. Note that the class library was written with Visual Studio and registered for COM; this class library looks like a regular COM component to VBA.
Creating a Customer Communication Document
To run the customer communication VBA solution, the user creates a document based on the CustomerCommumicationVBA.dot template. When the document opens in Word, the template's form opens as well, as shown in Figure 1. The form is built to look like a standard, wizard with multiple pages.
The user steps through the wizard by selecting the tabs of the Multipage control or by clicking Next and Back.
The second page of the wizard prompts the user for information about which letter to create and about the final format (letter or e-mail message), as shown in Figure 2.
Depending on which letter type is chosen, the user sees different items on the Additional Information tab. The options for the Follow-Up to a Prospective Customer letter are shown in Figure 3. Note the features and testimonials that are listed, but disabled, on this page of the form. They are pulled from a database using the CategoryLibrary after the product line is selected.
The last page of the wizard prompts the user to specify a recipient. For the letter to prospective customers, the text boxes are enabled and the list is disabled so the user can type in recipient information. For other letters, the user chooses a recipient from the customer list. Figure 4 shows what the form looks like when the user must choose from the customer list.
After completing all steps in the wizard, the user clicks Finish. The appropriate letter body text is inserted and all text is updated with the specified sender, recipient, and additional information.
After the letter is created and the form is closed, the user can change the recipient by using a combo box placed on the document itself, as shown in Figure 5.
Behind the Scenes of the VBA Solution
To understand some of the issues involved in migrating the solution to Visual Studio 2005 Tools for Office, it is helpful to look at the code in the VBA solution. As we demonstrate later in this article, the following VBA features are of particular interest because the corresponding Visual Studio 2005 Tools for Office solution implements them differently than the VBA solution does:
Changing how the form looks based on the letter selected, by toggling the visibility of the three different Additional Information pages
Populating the combo boxes and list boxes with ActiveX Data Objects (ADO)
Getting the features and testimonials from the CategoryLibrary
Using document properties and the Word Range object to populate the document with information specified in the form
Implementing the User Interface
The form is built to work like a wizard, with Back, Next, Cancel, and Finish buttons. The easiest way to get this functionality with a VBA user form is to use a MultiPage control. The MultiPage control includes one page for each step in the wizard. In this case the MultiPage control also includes a page for each letter's additional information, as shown in Figure 6.
When the form is running, only one Additional Information page is visible at a time. When the user selects a letter on the Communication Piece page, code runs to change which Additional Information page is visible.
When the user clicks Next or Back, the MultiPage control shows the next or previous page. This works because the value of the MultiPage control is the index number of the current page. When the user clicks Next or Back, code runs to add to or subtract from the value of the MultiPage control. To account for the two invisible pages, the code loops until it finds the next visible page before attempting to select the page.
When the form opens, all combo boxes and list boxes are populated with code. The Communication Piece list box is the only list that is not based on data from a table; instead, a two-dimensional array is defined and used to set the List property of the list box.
The lists for most of the other combo boxes and list boxes are based on data in SQL Server tables. For these controls, data is retrieved using an ADO recordset. The rows are read into an array, which is then used to set the List property of the combo box or list box. For example, the following code fills the Current Product and New Product combo boxes on one of the Additional Information pages:
Public Sub FillProductList() Dim rstProduct As ADODB.Recordset Dim intCount As Integer Dim strSQL As String Dim arProduct() As String Set rstProduct = New ADODB.Recordset ' cnn is a public variable set in the Userform_Initialize procedure. rstProduct.ActiveConnection = cnn strSQL = "SELECT Products.ProductID, Products.ProductName " & _ "FROM Products ORDER BY Products.ProductName;" rstProduct.Source = strSQL rstProduct.Open CursorType:=adOpenKeyset, Options:=adCmdText ReDim arProduct(0 To rstProduct.RecordCount - 1, 0 To 1) For intCount = 0 To rstProduct.RecordCount - 1 arProduct(intCount, 0) = rstProduct!ProductID arProduct(intCount, 1) = rstProduct!ProductName rstProduct.MoveNext Next intCount Me.cboCurrentProduct.List = arProduct Me.cboNewProduct.List = arProduct rstProduct.Close Set rstProduct = Nothing End Sub
Two list boxes are not filled when the form is initialized: The boxes that list the features and testimonials for the selected product line are displayed for reference only — the user cannot change them. They are filled whenever the product line is changed by calling functions in our custom class library, CategoryLibrary, which return data in XML format. The Microsoft XML object model is used to read and parse the fragment:
Set objMarketing = CreateObject("CategoryLibrary.FeaturesAndTestimonials") Set returnedFeatureDoc = New MSXML2.DOMDocument returnedFeatureDoc.LoadXml _ (objMarketing.wsm_GetFeatures(CLng(Me.cboProductLine))) Set objFeatureRoot = returnedFeatureDoc.documentElement Me.lstFeatures.Clear ' Add each feature to the list (each child has only one node). For Each objFeatureElement In objFeatureRoot.ChildNodes Me.lstFeatures.AddItem objFeatureElement.Text Next objFeatureElement
Building the Document
When the user clicks Finish, the body document that the user requested inserts at a bookmarked location using the InsertFile method of the Range object:
After all of the boilerplate text is added, the document must be updated with the recipient, sender, and additional information that the user specified in the form. These updates are made using custom document properties and fields. As shown in Figure 7, the document includes a custom document property for each value that needs to change based on user selections.
The custom document properties display as part of the document's content by using fields. Figure 8 shows several fields based on document properties within a fragment of the document.
After the code changes the values of all the relevant custom document properties, all fields in the document update to display new values.
If the user chooses the Follow-Up to a Prospective Customer letter, the features and testimonials returned by the Web service methods are added to the document using the Word Range object and its methods. For example, the following code takes the testimonials from the form's testimonials list box and adds them to the document:
For intRow = 0 To Me.lstTestimonials.ListCount - 1 rng.InsertAfter Me.lstTestimonials.Column(0, intRow) rng.InsertParagraphAfter rng.Style = "Testimonial" rng.Collapse wdCollapseEnd rng.InsertAfter "--" & Me.lstTestimonials.Column(1, intRow) rng.InsertParagraphAfter rng.Style = "Name" rng.Collapse wdCollapseEnd Next intRow rng.InsertParagraphAfter
Creating an E-Mail Message
If the user requests an e-mail message instead of a letter, the letter is still created. After the letter's body text updates, the code uses Outlook objects to create a new e-mail message with a body consisting of the text in the letter, from the greeting line to the signature line.
Before beginning any migration project, you should think through your goals for the migration. For the Northwind Traders solution example, the developer wants to migrate to Visual Studio 2005 Tools for Office because the resulting solution is more secure, has a more flexible deployment model, and provides more options for the UI.
The VBA solution serves its purpose as it is currently architected, but its users desire enhancements to the solution to better handle their workflow and work habits:
The VBA user form is modal and does not allow users to view or edit their documents during letter creation. Users want a more dynamic approach to building letters.
Also, after the user form closes, users cannot use the wizard again to make changes to their documents if they made a mistake. Their only option is to discard their work and start over.
Migrating to Visual Studio 2005 Tools for Office requires a new UI because you cannot directly import a VBA user form into a Visual Studio solution and you cannot convert a VBA user form to a Microsoft Windows Form. In our example, while moving the solution to Microsoft .NET, the developer also provides a more fluid UI for users. The developer could replace the VBA user form with a Windows Form, but she believes that the Document Actions task pane is a better choice because the Document Actions task pane is unobtrusive and can better satisfy the user's workflow requirements.
The developer also uses the migration as an opportunity to try the XML features available in Word 2003 and the data binding capabilities of Visual Studio 2005. This architecture provides more flexibility, if the developer wants to enhance the solution later.
User Interface Options
Visual Studio 2005 Tools for Office provides two primary choices for a UI: a Windows Form or the actions pane.
A Windows Form is analogous to a VBA user form. Visual Studio provides a form designer that you can use to create the form and write code to show it to the user. When the Windows Form displays, it covers at least part of the active document.
The actions pane, which is new in Visual Studio 2005 Tools for Office, is an object model that you can use to customize the Document Actions task pane, which previously you could customize only by using the Office 2003 Smart Document Software Development Kit. You write code to add controls to the actions pane, and (like all task panes) the Document Actions task pane appears alongside the active document.
These two UI options can coexist in the same solution: You could include a Windows Form to gather information, and then provide an actions pane to allow the user to change the recipient or to send the letter as an e-mail message after it is created. In our example, we decided that we could satisfy our goals by designing the main UI of our Visual Studio 2005 Tools for Office sample solution as a collection of user controls that display in the Document Actions task pane.
Options for Populating Controls and the Document
Data binding is an important feature in Visual Studio 2005. Using Visual Studio 2005 and Visual Studio 2005 Tools for Office, you can add data binding to Windows Forms controls, Excel ranges, Excel lists, Excel charts, Word bookmarks, and Word XML nodes. Our sample solution uses data binding features in Visual Studio 2005 Tools for Office to populate the controls in the actions pane, and to fill the sender and recipient information in the letters. Using data binding eliminates the need to migrate a large section of the VBA code, including:
The ADO code used to populate the form's list box and combo box lists, and the document's combo box list
The custom document properties for sender and recipient
The way you insert and update the document body text also changes in the Visual Studio 2005 Tools for Office solution. In the VBA solution, the document bodies are Word document files containing custom properties. The VBA code inserts the requested document, and then updates the custom properties and the fields based on those properties. For the Visual Studio 2005 Tools for Office solution, the document bodies are XML documents containing both WordprocessingML markup and elements from a custom XML schema. The Visual Studio 2005 Tools for Office code reads the requested document as an XML document, uses XML code to select and change the values of nodes in the custom schema, and then inserts an XML fragment using the InsertXML method of the Range object.
For the Follow-up to a Prospective Customer letter, the migration also creates the opportunity to make a change to the way that we insert into the document the XML fragments that are returned by the custom class library. The VBA solution uses document object model (DOM) code and the Word object model to parse, insert, and format the features and testimonials. For the Visual Studio 2005 Tools for Office solution, the returned XML fragments are transformed to WordprocessingML as they insert into the document. We no longer need to apply styles and other formatting through the Word object model.
Although it is not noticeable with this case study's small documents, manipulating XML in memory before inserting it offers significant performance advantages over manipulating the document's content through the Word object model.
Remembering the migration goals and with the overall architecture determined, we played the role of the Northwind Traders developers and began researching and coding. We found that, as Microsoft Office and VBA developers, the steepest "learning curve" is with Windows Forms controls. However, Visual Studio provides a much greater range of controls when compared to VBA user form controls, and even controls with user form counterparts often have enhanced functionality. After working with controls in .NET, we found it difficult to go back to the more limited user forms.
In the following sections of this article, we walk through the process used to create the Visual Studio 2005 Tools for Office solution. We do not provide a detailed walkthrough of steps to create a Visual Studio 2005 Tools for Office project or to add a data source; several published walkthrough white papers on MSDN are specifically designed to lead you through those processes. See the Additional Resources section for more information and links.
Migrating the Document
To begin the migration process, we made a copy of the template, CustomerCommunicationVBA.dot, and named it CustomerCommunicationVS.dot. We opened the document in Word, replaced each Recipient and Sender field in the document with a text string, and then bookmarked each string. After creating a Visual Studio 2005 Tools for Office project for the document, we bound these bookmarks to fields in the data source.
The sender's name field required special treatment. It appears in two places: the sender address block at the top of the letter and the signature. Instead of creating and binding two different bookmarks to the same data, we bookmarked the occurrence in the sender address block, and then used a cross-reference field to display the bookmarked text in the signature.
Because we wanted to replace the combo box control with a Windows Form control in the actions pane, we deleted the combo box from the document. Next, we deleted all the custom properties used for recipient and sender information. Then, we opened the VBA editor and deleted the code modules and the form from the project. We could not remove the ThisDocument class module; instead, we opened it and deleted the code it contained.
Migrating the Document Bodies
In the VBA solution, the document bodies are in Word .doc format. The Visual Studio 2005 Tools for Office solution works with the document bodies as XML. It is easy to open a document in Word and save it as XML, but we wanted to mark up the document with our own schema so that the values of certain nodes change based on user selections.
To start this part of the migration, we created an XML schema for the document bodies that are used in the solution:
<?xml version="1.0"?> <xs:schema <xs:element <xs:complexType> <xs:choice> <_- Define sequences for each type of document --> <xs:sequence> <_- New product promotion --> <xs:element <xs:element </xs:sequence> <xs:sequence> <_- Prospective Customer --> <xs:element <xs:element </xs:sequence> <xs:sequence> <_- Ex-customer --> <xs:element <xs:element <xs:element </xs:sequence> </xs:choice> </xs:complexType> </xs:element> </xs:schema>
Next, we migrated the document bodies by opening each document in Word and attaching the XML schema. We replaced the fields used by the VBA solution with plain text and then used the XML structure pane to mark up the document with nodes from the custom schema. If we used a node's value more than once within the document, we used the same technique that we used earlier with the sender's name: We bookmarked the first occurrence, and replaced the second and subsequent occurrences with fields referencing the bookmark text. After preparing a document, we saved it in XML format. Figure 9 shows an example of a completed document body.
Building XSL Transforms for Web Services Data
To build the transforms required for the Testimonials and Features XML fragments that are returned by the custom CategoryLibrary, we wrote code to call each function and write out the returned XML as a file. Then, we opened each file in Word and formatted it as we wanted to see it in the document. Then, we used a tool that is available for download from Microsoft, the Office 2003 Tool: WordprocessingML Transform Inference, to create the necessary XSL transform files.
Creating a Visual Studio 2005 Tools for Office Solution
With all the documents prepared, we were ready to create the Visual Studio 2005 Tools for Office project itself. We created a Word Template project based on an existing document called CustomerCommunicationVS.dot, and named the project CustomerCommunication.
When we saved our project, Visual Studio created the directory structure for the solution. Then, we copied the WordprocessingML versions of the documents and the transforms to the project folder.
We avoided writing code to navigate the directory structure by ensuring that the supporting files were copied to the same output folder as the solution's DLL file. To make this happen, we added each of the support files to the project, and then changed the Copy To Output Folder property for each file to Copy if newer. When we built the project later, a copy of each file was placed in the output folder, along with a copy of the document template and the assembly itself. In our example, the output folder is CustomerCommunication\bin.
Adding a Data Source
After we created the project, we set up the data source necessary for the solution by adding a new connection to the SQL Server database, NorthwindVsto2005, as shown in Figure 11.
We saved the connection string with the name NorthwindVsto2005ConnectionString and included only those database tables that the solution actually uses. The connection string to the database is stored in the project settings.
Configuring Data-Bound Controls on the Document
With the data source available, we could set up all the data-bound controls on the document. When we created our project, Visual Studio automatically recognized all of the bookmark controls as host controls, enabling us to view the properties of each in the Control Properties window. For example, when we click in the FirstName bookmark on the document in the sender section, we can view and change properties for that bookmark in the Properties window.
To start binding bookmarks in the document to fields in the data source, all we had to do was drop items from the Data Sources window onto the target bookmarks. So, for example, we started with the FirstName bookmark in the senders area of the document. In the Data Sources window, we expanded the Employees table, clicked on the FirstName field, and dragged it onto the FirstName bookmark in our document. Our bookmark was now data bound.
After we bound the bookmark to a field in the Employees table, Visual Studio automatically added a dataset, a table adapter, and a binding source to the document (these are visible in the component tray). By examining the DataBindings property of the bookmark, we see that Visual Studio bound the FirstName field in the data source to our bookmark, as shown in Figure 12.
The remaining steps of our example walkthrough are simple. From the Data Sources window, we dragged onto our document those fields in the Employees and Customers tables that we wanted to bind to the bookmarks in the sender and recipient areas of our document. Visual Studio added additional table adapters and binding sources to the document, as required by our data-binding selections. When we wanted to bind the CustomerCSZ bookmark, we found that there is not a field in the data source that exactly matches what we required: a calculated field that builds the recipient's City + Region + Postal Code as one string. In the Data Sources window, we right-clicked the dataset and chose Edit DataSet in the Designer. Then, in the designer, we added a new column to the Customers table and named it CustomerCSZ. We set the Expression property for this new column to return the desired string, as shown in Figure 13.
We ran a first pass on our project, with just the bookmark data bindings, to see the results. When running the project, we found that the bookmarks on our document bound to the actual data in the data source, as shown in Figure 14. It happened just like that; we did not have to write any code.
Building the Actions Pane
With the document data bound, we had to create the solution's UI. After much thought, research, and experimentation, we changed the overall design of the solution's UI significantly.
As noted earlier, the VBA solution uses a MultiPage control to contain each separate set of controls (using the wizard analogy, to contain each step's controls). This architecture makes it easy to toggle which Additional Information page is visible, depending on the letter chosen. It was also easy to program the Next and Back buttons because changing the value of the MultiPage control changes which page is active. Although the user can use the control's tabs, instead of using the navigation buttons, to select pages in any order, this functionality was secondary, not a requirement.
Visual Studio 2005 Tools for Office includes a Windows Form MultiPage control that could be used in this solution. However, we wanted to make each step in the wizard independent of the other steps. Doing so ensures that controls are more reusable and able to be shared with other solutions that we might create later. Although the MultiPage control looks like its user form counterpart, it works very differently. Although we could have used a Windows Form MultiPage control to create a wizard-like interface, we wanted to try something new. In addition to the obvious usability requirements of any form, we had one driving requirement: to change portions of the UI — to hide some controls and display others — based on selections made by the user at run time.
We decided to work with user controls for this purpose, creating a user control for each step in the solution's UI.
As we started to plan the design for each step's controls, we soon realized that the controls should look the same and share some similarities in their function. This is necessary so that the individual controls act in combination and appear as one wizard in the solution. We created one user control to act as a basis for the others. One user control can determine the baseline appearance of the other user controls; this control can also expose common properties and methods to reduce code redundancy. This approach is called control inheritance.
As previously stated, one of our goals was to design controls that we could use with other projects we might develop later. To meet this goal, we required a Control Library. On the File menu, we pointed to Add, and then clicked New Project. We selected the Windows Control Library project type and named the new project LetterWizard. By default, Visual Studio automatically added one user control, UserControl1.vb, to the project. We renamed this control WizardBase.vb, and then designed the control surface. The base control appearance is simple. As illustrated in Figure 15, WizardBase required only three controls: one Label control to display a caption and two LinkLabel controls for handling navigation. These three controls had to be fixed, so that controls that inherit from WizardBase can only modify those properties of the controls that we allow. To accomplish this, we set the Modifiers property of each of the three controls to Private.
After we started designing the base control, we considered the inherited controls, which add the correct functionality to WizardBase. Table 1 summarizes the user controls that would inherit from WizardBase.
Next, we added the first inherited control to the LetterWizard project. Logically, we started with the SelectSender control because is the first control to appear in the letter wizard. Because SelectSender inherits from WizardBase, it already contains the Label for a caption and two LinkLabels for navigation. We added one ComboBox to hold the list of senders, and Label and TextBox controls to display the user's selection, as shown in Figure 16.
Although we still had to design the remaining controls, we wanted to see how this first control looks in the actions pane within our document. Adding a control to the actions pane is simple. First, we required a reference to the LetterWizard control library. We added a private member variable to the ThisDocument class to hold the reference to the instance of the SelectSender control used in the actions pane:
Then, to add the control to the ActionsPane, we added code at the beginning of the document's Startup event:
Then, we pressed F5 to start the solution in debug mode. After Word loaded the document, the first control appeared in the actions pane, as shown in Figure 17.
The control looks great in the action pane, but we still had to add some functionality to the control. To make the SelectSender control functional:
The control caption should describe the step.
The ComboBox control needs to bind to the Employees data source, to show the list of available senders. Additionally, the ComboBox should bind to the same Employees data source as the document, so that the user's selection in the task pane syncs with the sender information displayed in the document.
We had to add navigation functionality to the Previous and Next link labels.
These three tasks are common to most, if not all, of the step controls in the wizard. So, we added this functionality to WizardBase. Adding this functionality to the base class reduces the amount of code to write, provides consistency, and simplifies maintenance — it is easier to change code in one place instead of in many.
We added the control caption and data source properties to an initialization routine, which we named InitializeStep. InitializeStep also hides the Previous link label, if the control is designated as the first control in the wizard (step number = 1), and it changes the text of the Next link label to Finish, if the control is designated as the last control in the wizard (isLastStep = true):
Private stepNum As Integer Private bindSource As BindingSource Protected Event DoDataBind() Public Sub InitializeStep(ByVal stepNumber As Integer, ByVal isLastStep _ As Boolean, ByVal stepCaption As String, Optional ByVal bindSrc _ As BindingSource = Nothing) stepNum = stepNumber If stepNum = 1 Then Me.previousLink.Visible = False If isLastStep Then Me.nextLink.Text = "Finish" Me.stepLabel.Text = stepCaption If Not (bindSrc Is Nothing) Then StepBindingSource = bindSrc End Sub Public Property StepBindingSource() As BindingSource Get Return bindSource End Get Set(ByVal value As BindingSource) bindSource = value RaiseEvent DoDataBind() End Set End Property
In our design, we decided that inherited controls should be responsible for their own data binding. For this reason, we raise an event called DoDataBind, which inherited controls can handle to bind their controls to the data source.
Next, we had to call InitializeStep on the first wizard control. We added the call to InitializeStep in the document's Startup event:
We had to set three properties to data bind the ComboBox control: DataSource, ValueMember, and DisplayMember. As the name implies, DataSource represents the data source that the control uses for data binding. ValueMember is the underlying value of an item in the list. DisplayMember is the text that displays in the list. For the ComboBox on the SelectSender control, we wanted to display the employee's full name but associate it with the employee's ID for later use. For this reason, we assigned the ValueMember property to the EmployeeID field and the DisplayMember property to a calculated field named FullName. FullName uses an expression (LastName + ', ' + FirstName). We added the FullName calculated field to the Employees table using the approach previously described for the CustomerCSZ calculated field in the Customers table:
Private Sub SelectSender_DoDataBind() Handles Me.DoDataBind 'Bind controls on this user control to the binding source With Me.sender .DataSource = Me.StepBindingSource .ValueMember = "EmployeeID" .DisplayMember = "FullName" End With Me.title.DataBindings.Add("Text", Me.StepBindingSource, "Title") Me.phone.DataBindings.Add("Text", Me.StepBindingSource, "Phone") End Sub
At this point, we ran the project again, to see the changes. Word opened the document and displayed the actions pane, as shown in Figure 18. Just as expected, when we changed the selection in the combo box, the bookmarks on the document (bound to fields in the Employees table) updated to reflect the selection in the combo box.
After completing the control caption and data source functionality, the next task was to handle the navigation in WizardBase — the LinkClicked event for the Previous and Next link label controls. In the LinkClicked event for each, we raised an OnPrevious (or OnNext) event. If OnPrevious (or OnNext) succeed, we then raise an Action event to signal that it is time to go to the previous (or next) step in the wizard:
Protected Event OnNext(ByRef Success As Boolean) Protected Event OnPrevious(ByRef Success As Boolean) Public Event Action(ByVal currentStep As Integer, ByVal stepDirection _ As Integer) Private Sub previousLink_LinkClicked(ByVal sender As System.Object, _ ByVal e As System.Windows.Forms.LinkLabelLinkClickedEventArgs) _ Handles previousLink.LinkClicked 'Raise OnPrevious. If it returns true, raise the Action event. Dim success As Boolean = True RaiseEvent OnPrevious(success) If success Then RaiseEvent Action(Me.stepNum, -1) End Sub Private Sub nextLink_LinkClicked(ByVal sender As System.Object, _ ByVal e As System.Windows.Forms.LinkLabelLinkClickedEventArgs) _ Handles nextLink.LinkClicked 'Raise OnNext. If it returns true, raise the Action event. Dim success As Boolean = True RaiseEvent OnNext(success) If success Then RaiseEvent Action(Me.stepNum, 1) End Sub
At least two controls are required to test navigation. So, we proceeded to design the SelectLetterType control for the second step in the wizard. Again, in Solution Explorer, we right-clicked the LetterWizard project, pointed to Add, and then clicked New Item. We selected the Inherited User Control template, named the control SelectLetterType, clicked OK, and then chose WizardBase for the control to inherit from. SelectLetterType needs only one additional control: a ListBox control to contain our three letter types. We added the list box and the code that is required to populate the list box at run time; SelectLetterType does not need data binding:
Public Enum LetterTypes newProductAnnouncement prospectFollowup formerCustomerInputRequest End Enum Public Event LetterTypeChanged(ByVal letter As LetterTypes) Private Sub SelectLetterType_Load(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load ' Add the letter types to the letterList list box. With Me.letterList .Items.Add("Announcement of New Product to Retailers") .Items.Add("Follow-up to Prospective Customer") .Items.Add("Request for Input from Former Customer") .SelectedIndex = 0 End With End Sub Public ReadOnly Property SelectedLetter() Get Return Me.letterList.SelectedIndex End Get End Property Private Sub letterList_SelectedIndexChanged(ByVal sender As Object, _ ByVal e As System.EventArgs) Handles letterList.SelectedIndexChanged RaiseEvent LetterTypeChanged(Me.letterList.SelectedIndex) End Sub
The second wizard control was complete. We added a private member variable to the ThisDocument class to reference an instance of SelectLetterType for the actions pane:
As mentioned earlier, WizardBase raises an Action event when the user navigates in the wizard. Because we declared Step 1 using the WithEvents keyword, we could use the Class Name and Method Name drop-down lists in the Code Editor to set up an event handler for the Action event of Step 1:
We could have used the same process to set up the event handler for Step 2, but we wanted to use a single function to streamline the steps in the wizard. Given that we could use one function for handling the same event for multiple controls with the Handles keyword on the procedure declaration, we modified the Action event handler for Step 1 to include Step 2 — that is not possible in VBA:
Private Sub WizardAction(ByVal currentStep As Integer, _ ByVal nextStep As Integer) Handles step1.Action, step2.Action ' Clear the controls from the actions pane. Me.ActionsPane.Controls.Clear() Select Case currentStep + nextStep Case 1 ' Show Step 1: Create and intialize the control and fill its ' data source. If step1 Is Nothing Then step1 = New SelectSender() step1.InitializeStep(1, False, "Select a sender:", _ Me.EmployeesBindingSource) Me.EmployeesTableAdapter.Fill( _ Me.NorthwindVSTO2005DataSet.Employees) Me.Fields.Update() End If Me.ActionsPane.Controls.Add(step1) Case 2 ' Show Step 2: Create and initialize the control. If step2 Is Nothing Then step2 = New SelectLetterType() step2.InitializeStep(2, False, "Select a letter") End If Me.ActionsPane.Controls.Add(step2) End Select End Sub
The WizardAction procedure manages initialization of the controls for the actions pane and handles removing controls from and adding controls to the actions pane, based on the current step number. We modified the Startup event again, so that it calls the WizardAction procedure to initialize and display the first control in the wizard:
At this point, we ran the project again to test the navigation. Word opened the document and displayed the SelectSender control in the actions pane, as before. We selected a sender and clicked Next. As illustrated in Figure 19, the SelectSender control disappeared and the SelectLetterType control appeared in the actions pane, as expected. We clicked Previous to go back to the first step: the earlier selection in the ComboBox remained but we could change the sender and, again, the document updated to reflect our selection.
With the WizardBase functionality in place for initializing the wizard controls and handling navigation, it was then clear how to proceed with adding the remaining controls to the wizard. We used the same approach for the remaining actions pane controls that we used for the SelectSender and SelectLetterType controls. This article does not discuss all the details of the code because you can download the project files for this solution and review them. However, these two data-related items may not be obvious from reviewing the code alone:
When we dragged a field from the Data Sources window and dropped it onto the Word document, Visual Studio automatically created a BindingSource and a DataTableAdapter for the data binding. These components appear in the component tray. When we bound the document bookmarks to fields in the Employees and Customers tables, Visual Studio added these components. But the Visual Studio 2005 Tools for Office solution also requires binding actions pane controls to fields in the Categories and Products tables; we added a BindingSource and a DataTableAdapter for both tables by selecting a BindingSource component in the Toolbox and dropping it onto the document. After we set the binding source's DataSource and DataMember properties, the table adapter was created for us automatically.
In all but one instance, we used the default Fill method for the DataTableAdapter, which returns all records in a table. For the New Product letter type, we had to return only those customers that ordered a certain product. To handle this situation, we added a parameterized query to the CustomersTableAdapter and associated the query with a custom Fill method named FillByProductID.
Working with Word Bookmarks
Code used throughout the solution references bookmarks in the document, but not in the same way that we were used to referring to bookmarks in VBA. When we worked with a document that included bookmarks, Visual Studio 2005 Tools for Office promoted each bookmark to be a first-class control, generically called a host control. A Word bookmark used in Visual Studio 2005 Tools for Office code has all the functionality of Word bookmarks, plus additional events and features like data binding. Word 2003 also promotes XMLNode objects and XMLNodes collections to be host controls. In Excel 2003, host controls include ranges, list objects, and chart objects.
This means that if we want to refer to a bookmark in Visual Studio 2005 Tools for Office code, we can refer to it as a member of the document. Also, the host control bookmark merges the functionality of a Bookmark object with that of a Range object. For example, we could change the text in a Word bookmark object with code like this:
It is more concise and easier to type the host control syntax because the bookmark names show up in the Microsoft IntelliSense member list. There is an additional benefit from another feature of the Bookmark host control: Unlike its VBA counterpart, the host control bookmark is not deleted when its text value is set.
Working with XML Documents
To work with an XML document in Visual Basic, we used the System.Xml namespace. In our solution, controls used for the third step in the wizard all use XML to build the body of the document:
We loaded the XML document from a file.
We used XPath to select nodes.
We set the values of the selected nodes based on the user's selections in the actions pane.
We raised an event to insert the XML document at a given bookmark.
We updated fields in the newly inserted range.
Because several controls use this technique for inserting formatted text in the document, it makes sense to create a helper class to streamline the code. For this purpose, we added the XmlHelper class to the LetterWizard control library project:
Imports System.Xml Public Class XmlHelper Private xmlDoc As XmlDocument Private nsManager As XmlNamespaceManager Public Sub New(ByVal template As String) ' Load the XML fragment. xmlDoc = New XmlDocument xmlDoc.Load(template) ' northwind and wordml namespaces must be defined ' for XPath navigation. nsManager = New XmlNamespaceManager(xmlDoc.NameTable) nsManager.AddNamespace( _ "ns1", "") nsManager.AddNamespace( _ "w", "") End Sub Public Sub ModifyNode(ByVal xPath As String, ByVal text As String) xmlDoc.SelectSingleNode(xPath, nsManager).InnerText = text End Sub Public ReadOnly Property XmlDocument() Get Return xmlDoc End Get End Property End Class
Notice that the XmlHelper constructor (Sub New) expects one string parameter. When we created a new instance of the XmlHelper class, we had to provide the file name and full path to the XML that we wanted to load. In the constructor, XmlHelper creates a new XmlDocument and calls its Load method to load the XML from the path we provided.
Selecting a Node with XPath
To use XPath for selecting XML nodes in an XML document with multiple namespaces, XmlHelper initializes a namespace manager and adds the namespaces that we need for XPath expressions. With the namespace manager in place, we used the SelectSingleNode method of the XmlDocument object to select nodes. The values of selected nodes are then changed using the XmlNode object's InnerText property.
Inserting XML at a Bookmark Location
In WizardBase, we added an event, InsertXmlAtBookmark, and a protected method, SetXMLData. The InsertXmlAtBookmark event is raised when XML is ready for insertion into the document. SetXmlData is responsible for raising this event:
The controls in the third step of the wizard (SelectFormer, SelectNewProduct, and SelectVisitingSalesperon) each call SetXmlData when new XML is prepared and ready for insertion into the document. This then raises the InsertXmlAtBookmark event, which the ThisDocument class handles. Notice that, again, we used the Handles keyword to use one function for handling the same event on multiple controls:
Private Sub InsertXmlAtBookmark(ByVal bookmarkName As String, _ ByVal xml As String, ByVal xsl As String) Handles _ step3Former.InsertXmlAtBookmark, step3NewProduct.InsertXmlAtBookmark, _ step3Prospect.InsertXmlAtBookmark 'Insert the XML at the bookmark & update fields in the bookmark range. Dim targetRange As Word.Range = Me.Bookmarks(bookmarkName).Range targetRange.InsertXML(xml, xsl) targetRange.Fields.Update() ' Recreate the bookmark because it was removed by the call to InsertXML. Me.Bookmarks.Add(bookmarkName, targetRange) End Sub
Migrating the E-Mail Feature
If the user chooses the E-Mail option in the last step, the VBA solution uses the letter's content to automatically create an e-mail message. To move this e-mail feature to the Visual Studio 2005 Tools for Office solution, we had to add a reference to the Outlook object library to the project, and write code using the Outlook object model to create and preview the e-mail message.
To make it easier to read and write the code that uses the Outlook object model, we added an Imports statement to the form's class module just before the class declaration:
We assigned the name "Outlook" to the reference. That way, after the reference is set, the code to send an e-mail message is very similar to the VBA code for the same feature:
' Start Outlook using Automation and create a new mail message from ' the text in the document. Dim otlApp As Outlook.Application Dim otlEMail As Outlook.MailItem otlApp = New Outlook.Application otlEMail = otlApp.CreateItem(Outlook.OlItemType.olMailItem) With otlEMail .To = Me.CustomersBindingSource.Current("EmailAddress").ToString .Subject = emailSubject otlEMail.BodyFormat = Outlook.OlBodyFormat.olFormatHTML otlEMail.Body = Me.Range( _ Me.EMailBodyStart.Start, Me.YourTitle.End).Text .Display() End With
The sample code referred to in this article is intended for instructional purposes, and should not be used some examples of identified threats that you should take into consideration before expanding or deploying this solution:
Visual Studio 2005 Tools for Office Assemblies are replaced
If the Visual Studio 2005 Tools for Office assemblies are replaced with other assemblies, this can result in the application behavior being altered to compromise the system. You can mitigate this threat by signing the code, using strong name conventions, or providing hash evidence. This sample does not currently implement these precautions, but they are strongly recommended before deploying an application to a production environment.
SQL Server database is compromised and contains invalid data
This threat can affect the data binding in the document and the Windows Form. You can mitigate this threat by ensuring that only users with a valid user name and password have rights to read or change the SQL Server data.
For more information on code security, visit the Microsoft Security Developer Center.
Not every VBA solution benefits from a Visual Studio 2005 Tools for Office migration. If you are uncertain whether to move a particular solution, check out the Additional Resources and evaluate the benefits and concerns for your individual situation. But if you want a better security model, more flexible deployment, better memory management, and an easier way to write data-based solutions, Visual Studio 2005 Tools for Office is worth investigating.
No matter which development tool you choose, learning more about XML and how it is used in Microsoft Office, and about Web services, will help you create richer Microsoft Office solutions.
If you migrate to Visual Studio 2005 Tools for Office, you need to learn a new language. Visual Basic is different from VBA, but not dramatically different. If you choose to move to Visual C# instead, you may require more time to feel comfortable in the language. The areas you focus on within Visual Studio and the .NET Framework will vary depending on the types of applications you develop, but in general, learning about Windows Forms and controls, data binding, ADO.NET, and working with XML in .NET will help you develop richer Microsoft Office solutions.
Jan Fransen spends her days helping people find ways to use technology to do their jobs better. As a consultant, she creates solutions in Microsoft Office, VBA, and Visual Basic .NET. As a trainer and writer, she teaches people how to program effectively in the Visual Studio and VBA environments, and how to perform the occasional interactive Office trick.
Lori Turner is a developer consultant with Microsoft Services with many years of expertise in Office development. Lori has been with Microsoft for more than 12 years and still loves it! When not assisting Microsoft customers or writing cool Office solutions with .NET, she enjoys spending time with her husband Robbie and twin daughters Haley and Jordan. She's most happy when outdoors: either at the beach or in her garden in Charlotte, North Carolina.
This document was created in partnership with A23 Consulting.
What's New in Visual Studio 2005 Tools for Office
XML in Microsoft Office Word 2003
Office 2003 Tool: WordprocessingML Transform Inference Tool
New XML Features of the Microsoft Office Word 2003 Object Model
Microsoft Office Word 2003 XML Object Model Overview
Microsoft Office Word 2003 XML: Memo Styles Sample
The XML Files: XML in Microsoft Office Word 2003
VBA to Visual Basic
Convert VBA Code to Visual Basic When Migrating to Visual Studio 2005 Tools for Office
Don't Freak Out About Visual Studio
Office Developer Center
Code Security | http://msdn.microsoft.com/en-us/library/aa537162(v=office.11).aspx | CC-MAIN-2014-35 | refinedweb | 9,093 | 51.89 |
Data is beautiful. And with modern technologies it is crazy easy to visualize your data and create great experiences. In this quick how to, we cover how to interact with the npm 💘 API to get download statistics of a package and generate a chart from this data with Chart.js
âš¡ Quickstart
We will build npm-stats.org and will be using following tools:
- Vue.js with vue-router
- Chart.js
- vue-chartjs
- vue-cli
- axios
With Vue.js we will build the basic interface of the app and and routing with
vue-router. And we scaffold our project with
vue-cli which creates our basic project structure. For the chart generation we will be using Chart.js and as a wrapper for Vue,
vue-chartjs. As we need to interact with an API, we’re using
axios to make the http requests. However feel free to swap that one out with any other lib.
🔧 Install & Setup
At first we need to install
vue-cli to scaffold our project. I hope you have a current version of node and npm already installed! 🙠Even better if you have yarn installed! If not, you really should! If you don’t want, just swap out the yarn commands with the npm equivalents.
$ npm install -g vue-cli
Then we can scaffold our project with vue-cli. If you want to can enable the unit and e2e tests, however we will not cover them.🔥 But you need to check vue-router!
$ vue init webpack npm-stats
Then we cd in our project folder and install the dependencies with
cd npm-stats && yarn install. So our basic project dependencies are installed. Now we need to add the one for our app.
$ yarn add vue-chartjs chart.js axios
Just a quick check if everything is running with
yarn run dev. Now we should see the boilerplate page of vue.
Aaaand we’re done! ðŸ‘
💪 Time to build
Just a small disclaimer here, I will not focus on the styling. I guess you’re able to make the site look good by your own 💅 so we only cover the javascript related code.
And another disclaimer, this is rather a small MVP then super clean code right now. I will refactor some of it in later stages. Like in the real world.
Components
Let’s think about what components we need. As we’re looking at the screenshot we see an input field for the package name you’re looking for and a button. Maybe a header and footer and the chart itself.
You totally could make the button and input field a component however as we don’t build a complex app, why bother? Make it simple. Make it work!
So I ended up with following components:
- components/Footer.vue
- components/Header.vue
- components/LineChart.vue
- pages/Start.vue
I will skip the Header and Footer as they only contain the logo and some links. Nothing special here. The LineChart and Start page are the important ones.
LineChart
The LineChart component will be our chart.js instance which renders the chart. We need to import the Line component and extend it. We create two props for now. One for the data which is the number of downloads and the labels which are for example the days, weeks, years.
props: { chartData: { type: Array, required: false }, chartLabels: { type: Array, required: true } },
As we want all our charts to look the same, we define some of the Chart.js styling options in a data model which get passed as options to the renderChart() method.
And as we will have only one dataset for now, we can just build up the dataset array and bind the labels and data.
<script> import { Line } from 'vue-chartjs' export default Line.extend({ props: { chartData: { type: Array | Object, required: false }, chartLabels: { type: Array, required: true } }, data () { return { options: { scales: { yAxes: [{ ticks: { beginAtZero: true }, gridLines: { display: true } }], xAxes: [ { gridLines: { display: false } }] }, legend: { display: false }, responsive: true, maintainAspectRatio: false } } }, mounted () { this.renderChart({ labels: this.chartLabels, datasets: [ { label: 'downloads', borderColor: '#249EBF', pointBackgroundColor: 'white', borderWidth: 1, pointBorderColor: '#249EBF', backgroundColor: 'transparent', data: this.chartData } ] }, this.options) } }) </script>
📺 Our start page
As we have our LineChart component up and working. It’s time to build the rest. We need an input field and button to submit the package name. Then request the data and pass the data to our chart component.
So, let’s first think about what data we need and what states / data models. First of all we need a
package data model, which we will use with v-model in our input field. We also want to display the name of the package as a headline. So
packageName would be good. Then our two arrays for the requested data
downloads and
labels and as we’re requesting a time period we need to set the
period. But, maybe the request goes wrong so we need
errorMessage and
showError. And last but not least
loaded as we want to show the chart only after the request is made.
npm API
There are various endpoints to get the downloads of a package. One is for example
GET{period}[/{package}]
However this one gets only a point value. So the total downloads. But to draw our cool chart, we need more data. So we need the range endpoint.
GET{period}[/{package}]
The period can be defined as for example
last-day or
last-month or a specific date range
2017-01-01:2017-04-19 But to keep it simple we set the default value to
last-month. Later in Part II we can then add some date input fields so the user can set a date range.
So our data models are looking like this:
data () { return { package: null, packageName: ‘’, period: ‘last-month’, loaded: false, downloads: [], labels: [], showError: false, errorMessage: ‘Please enter a package name’ } },
💅 Template
Now it’s time to build up the template. We need 5 things:
- Input field
- Button to trigger the search
- Error message output
- Headline with the package name
- Our chart.
<input class=”Search__input” @keyup.enter=”requestData” placeholder=”npm package name” type=”search” name=”search” v-model=”package” > <button class=”Search__button” @click=”requestData”>Find</button> <div class="error-message" v- {{ errorMessage }} </div> <h1 class="title" v-{{ packageName }}</h1> <line-chart</line-chart>
Just ignore the css classes for now. We have our Input Field which has an keyup event on enter. So if you press enter you trigger the
requestData() method. And we bind
v-model to package
For the potential error we have a condition, only if
showError is true we show the message. There are two types or errors that can occur. The one is, someone try to search for a package without entering any name or he’s entering a name that does not exist.
For the first case, we have our default errorMessage, for the second case we will grab the error message that comes from the request.
So our full template will look like this:
<template> <div class="content"> <div class="container"> <div class="Search__container"> <input class="Search__input" @keyup. <button class="Search__button" @Find</button> </div> <div class="error-message" v- {{ errorMessage }} </div> <hr> <h1 class="title" v-{{ packageName }}</h1> <div class="Chart__container" v- <div class="Chart__title"> Downloads per Day <span>{{ period }}</span> <hr> </div> <div class="Chart__content"> <line-chart</line-chart> </div> </div> </div> </div> </template>
🤖 Javascript
Now it’s time for the coding. First we will do our requestData() method. It is rather simple. We need to make a request to our endpoint and then map the data that comes in. In our response.data we have some information about the package:
Like the start data, end date, the package name and then the downloads array. However the structure for the downloads array is something like this:
downloads: [ {day: ‘2017–03–20’, downloads: ‘3’}, {day: ‘2017–03–21’, downloads: ‘2’}, {day: ‘2017–03–22’, downloads: ‘10’}, ]
But we need to separate the downloads and days, because for chart.js we need one array only with the data (downloads) and one array with the labels (day). This is an easy job for map.
requestData () { axios.get(`{this.period}/${this.package}`) .then(response => { this.downloads = response.data.downloads.map(download => download.downloads) this.labels = response.data.downloads.map(download => download.day) this.packageName = response.data.package this.loaded = true }) .catch(err => { this.errorMessage = err.response.data.error this.showError = true }) }
Now if we enter a package name, like vue and hit enter, the request is made, the data mapped and the chart rendered! But, wait. You don’t see anything. Because we need to tell vue-router to set the index to our start page.
Under
router/index.js we import or page and tell the router to use it
import Vue from ‘vue’ import Router from ‘vue-router’ import StartPage from ‘@/pages/Start’ Vue.use(Router) export default new Router({ routes: [ { path: ‘/’, name: ‘Start’, component: StartPage }, ] })
💎 Polish
But, we are not done yet. We have some issues, right? First our app breaks if we don’t enter any name. And we have problems if you enter a new package and hit enter. And after an error the message does not disappear.
Well, it’s time to clean up a bit. First let’s create a new method to reset our state.
resetState () { this.loaded = false this.showError = false },
Which we call in our
requestData() method before the
axios api call. And we need a check for the package name.
if (this.package === null || this.package === ‘’ || this.package === ‘undefined’) { this.showError = true return }
Now if we try to search an empty package name, we get or default errorMessage.
I know, we covered a lot, but let’s add another small cool feature. We have
vue-router, but not really using it. At our root
/ we see the starting page with the input field. And after a search we stay at our root page. But it would be cool if we could share our link with the stats, wouldn’t it be?
So after a valid search, we add the package name to our url.
npm-stats.org/#/vue-chartjs
And if we click on that link, we need to grab the package name and use it to request our data.
Let’s create a new method to set our url
setURL () { history.pushState({ info: `npm-stats ${this.package}`}, this.package, `/#/${this.package}`) }
We need to call
this.setURL() in our response promise. Now after the request is made, we add the package name to our URL. But, if we open a new browser tab and call it, nothing happens. Because we need to tell
vue-router that everything after our
/ will also point to the start page and define the string as a query param. Which is super easy.
In our
router/index.js we just need to set another path in the routes array. We call the param package.
{ path: ‘/:package’, component: StartPage }
Now if you go to
localhost:8080/#/react-vr you will get the start page. But without a chart. Because we need to grab the param and do our request with it.
Back in our
Start.vue we grab the param in the mounted hook.
mounted () { if (this.$route.params.package) { this.package = this.$route.params.package this.requestData() } },
And thats it! Complete file:
import axios from 'axios' import LineChart from '@/components/LineChart' export default { components: { LineChart }, props: {}, data () { return { package: null, packageName: '', period: 'last-month', loaded: false, downloads: [], labels: [], showError: false, errorMessage: 'Please enter a package name' } }, mounted () { if (this.$route.params.package) { this.package = this.$route.params.package this.requestData() } }, methods: { resetState () { this.loaded = false this.showError = false }, requestData () { if (this.package === null || this.package === '' || this.package === 'undefined') { this.showError = true return } this.resetState() axios.get(`{this.period}/${this.package}`) .then(response => { console.log(response.data) this.downloads = response.data.downloads.map(download => download.downloads) this.labels = response.data.downloads.map(download => download.day) this.packageName = response.data.package this.setURL() this.loaded = true }) .catch(err => { this.errorMessage = err.response.data.error this.showError = true }) }, setURL () { history.pushState({ info: `npm-stats ${this.package}` }, this.package, `/#/${this.package}`) } } }
You can check out the full source at GitHub and view the demo page at 📺 npm-stats.org
Improvements
But hey, there is still room for improvements. We could add more charts. Like monthly statistics, yearly statistics and add date fields to set the period and many more things. I will cover some of them in Part II ! So stay tuned!
Posted on by:
Jakub Juszczak
Freelance Developer - #Blogger, #Student, #Tea addicted, likes #photography and #gamedev.
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/apertureless/lets-build-a-web-app-with-vue-chartjs-and-an-api | CC-MAIN-2020-34 | refinedweb | 2,132 | 67.45 |
I do not find the term item satisfying. It’s somewhat meaningless and is not even explained in the reference. What are these things called in other languages? Surely there must be something better out there?
Can we change the language term "item" to something else
Items are the “top level” of a Rust crate if name resolution is ignored. When compared to other languages, Rust has a relatively rich set of top-level things. I think C-family languages call these “declarations”.
There’s also the “are impl-items and trait-items items?” business
No, in C-like languages “items” would be either declarations or definitions, except that I’m not sure
use statements (or
using namespace) fit that classification (well, the reference calls
use statements declarations). I’m not aware of a general classification more specific than “item” (except maybe “line”).
Maybe simply replacing “items” with “declarations and definitions” is good enough?
Rust does not really have “declarations” in the C sense (except for externs, and these are used only for FFI), only definitions (I mixed these 2 up).
use statements are not items, in any case.
use statements are “items” according to the linked reference. I think the idea is that the crate can be chopped up into a series of items.
Rust does have “declarations”:
mod foo; is exactly a declaration according to the C sense. (You could, I suppose, argue that this is not a crate-level construct since from the point of view of compiling the crate (in the naive view at least)
mod foo; is replaced with the entire module:
mod foo { ... }, neverless
mod declarations do occur at the file level.)
As for
extern crate and
use ..., they are either declarations or instructions (or some word to that effect).
Then there are attributes and macro definitions, neither of which are listed in that section of the reference.
use and
extern crate are not really items (at least from an implementation standpoint), regardless of what the reference says - they merely reference other items. I think “directive” may be the word we want.
attributes and macro definitions are also not items - attributes are attached to something (which may be a proper outer item or some inner item - e.g. a struct field), while macro definitions, along with
mod foo;, are defined solely by their expansion.
Ah, I was thinking in terms of the contents of the file. You’re correct, those things are not really “items” from the point of view of the crate (the
use and
extern crate directives are more like attributes of the module and crate respectively).
Note that traits do use declarations in the C sense.
At the crate level though, are we agreed that “items” are either directives or definitions?
Traits are definitions in the same way C structs definitions are. Anyway, from an implementation standpoint, we only sometimes regard directives as items.
Don’t fix what ain’t broke. ‘Item’ is a perfectly descriptive name, given that the term is so generic.
“item” is a rather generic concept (especially if we also count impl-items and trait-items), and most places deal which more specific items (e.g. “struct”, “function”).
‘thing’ isn’t okay not because it’s too generic, but because it’s too casual. ‘item’ is okay because that’s all they are: items. There’s nothing else tying them together. The only alternative is a term like ‘declarations and definitions’ which isn’t a term, it’s a list of terms. You can call them ‘structs, functions, …’ but again that is just a list of things, it isn’t a name for that grouping as a whole.
Anything can be called an item, but in the case of Rust, an item is something specific. That’s why I think using item is not ideal. top-levels would be a better name, but there are likely better names.
There are lots of things that can be called “type” (or “value”), but Rust gives these terms a specific meaning. We could use better names (for item, outer-item, impl-item, trait-item, variant field). I prefer “item” rather than “definition” because cross-crate items are not really definitions.
Except that items don’t have to appear at the “top-level” you can define structs and functions inside a function body.
Item is fine in context and it’s not really a term that you need to know, it’s mostly an implementation and reference concern. | https://internals.rust-lang.org/t/can-we-change-the-language-term-item-to-something-else/2752 | CC-MAIN-2018-30 | refinedweb | 747 | 73.07 |
v2.2.2
notcurses_render - sync the physical display to the virtual ncplanes
#include <notcurses/notcurses.h>
int ncpile_render(struct ncplane* n);
int ncpile_rasterize(struct ncplane* n);
int notcurses_render(struct notcurses* nc);
char* notcurses_at_yx(struct notcurses* nc, int yoff, int xoff, uint16_t* styles, uint64_t* channels);
int notcurses_render_to_file(struct notcurses* nc, FILE* fp);
int notcurses_render_to_buffer(struct notcurses* nc, char** buf, size_t* buflen);
Rendering reduces a pile of ncplanes to a single plane, proceeding from the top to the bottom along a pile's z-axis. The result is a matrix of nccells (see notcurses_cell). Rasterizing takes this matrix, together with the current state of the visual area, and produces a stream of optimized control sequences and EGCs for the terminal. By writing this stream to the terminal, the physical display is synced to some pile's planes.
ncpile_render performs the first of these tasks for the pile of which n is a part. The output is maintained internally; calling ncpile_render again on the same pile will replace this state with a fresh render. Multiple piles can be concurrently rendered. ncpile_rasterize performs rasterization, and writes the result to the terminal. It is a blocking call, and only one rasterization operation may proceed at a time. It does not destroy the render output, and can be called multiple times on the same render. notcurses_render calls ncpile_render and ncpile_rasterize on the standard plane, for backwards compatibility. It is an exclusive blocking call.
It is necessary to call ncpile_rasterize or notcurses_render to generate any visible output; the various notcurses_output(3) calls only draw to the virtual ncplanes. Most of the notcurses statistics are updated as a result of a render (see notcurses_stats(3)), and screen geometry is refreshed (similarly to notcurses_refresh(3)) following the render.
While notcurses_render is called, you must not call any other functions modifying the same pile. Other piles may be freely accessed and modified. The pile being rendered may be accessed, but not modified.
notcurses_render_to_buffer performs the render and raster processes of notcurses_render, but does not write the resulting buffer to the terminal. The user is responsible for writing the buffer to the terminal in its entirety. If there is an error, subsequent frames will be out of sync, and notcurses_refresh(3) must be called.
A render operation consists of two logical phases: generation of the rendered scene, and blitting this scene to the terminal (these two phases might actually be interleaved, streaming the output as it is rendered). Frame generation requires determining an extended grapheme cluster, foreground color, background color, and style for each cell of the physical terminal. Writing the scene requires synthesizing a set of UTF-8-encoded characters and escape codes appropriate for the terminal (relying on terminfo(5)), and writing this sequence to the output FILE. If the renderfp value was not NULL in the original call to notcurses_init, the frame will be written to that FILE as well. This write does not affect statistics.:
At each plane P, we consider a cell C. This cell is the intersecting cell, unless that cell has no EGC. In that case, C is the plane's default cell.
If the algorithm concludes without an EGC, the cell is rendered with no glyph and a default background. If the algorithm concludes without a color locked in, the color as computed thus far is used.
notcurses_at_yx retrieves a call as rendered. The EGC in that cell is copied and returned; it must be free(3)d by the caller.
On success, 0 is returned. On failure, a non-zero value is returned. A success will result in the renders stat being increased by 1. A failure will result in the failed_renders stat being increased by 1.
notcurses_at_yx returns a heap-allocated copy of the cell's EGC on success, and NULL on failure.
In addition to the RGB colors, it is possible to use the "default foreground color" and "default background color" inherited from the terminal. Since notcurses doesn't know what these colors are, they are not considered for purposes of color blending.
notcurses(3), notcurses_cell(3), notcurses_input(3), notcurses_output(3), notcurses_plane(3), notcurses_refresh(3), notcurses_stats(3), notcurses_visual(3), console_codes(4), utf-8(7) | https://notcurses.com/notcurses_render.3.html | CC-MAIN-2021-10 | refinedweb | 694 | 55.24 |
First, I wanted to thank those of you who replied to my last message on or about 12/01 on a problem I was having with a program to tell the day of the week from any date. One of you figured out it was suppose to use the Zeller formula, which was right (I'm surprised you could tell, it was such a clumzy attempt!). Anyway, with your help I managed to figure out what I was doing wrong. Thank again!
As I am just learning programming, so I am trying to digest a little at a time. This project is really stumping me! I have got it to compile, but it goes into an infinite loop. If I try a different type of loop (for, do. . .while, etc.) it gives a bush of errors.
P.S. Oh, by the way, I am using Code Warrior. I'm not sure if that makes any difference, but just in case. Thanks again in advance.
//Project 8 - Reads amount of a loan, annual interest rate, &
//monthly payment. Then displays the payment number, interest for that
//month, the balance remaining after that payment, & total internest
//paid to date in a table with appropriate headings
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std; //introduces namespace std
void LoanCalc(double, double, double);//prototype
double main( double, double, double )
{
double a, b, c;
char YorN;
do
{
cout << "What is the loan amount?\n";
cin >> a; //loan amount
cout << "What is the is annnual interest rate as decimal?\n";
cin >> b; //annual interest rate
cout << "What is the amount of monthly payment?\n";
cin >> c; //monthly payment
LoanCalc(a, b, c);
cout << "\n\nEnter Y to do another, N to stop.\n\t\t";
cin >> YorN;
}
while (YorN=='Y'||YorN=='y');
return 0;
}
void LoanCalc(double a, double b, double c)//function definition
{
cout << "\n\tPymt #\t\tMon's Int\tBalance\t\tTot Int Pd\n";
cout << "\t==========================================\ n";
cout << fixed << showpoint << right << setprecision(2);
double Balance = a;
int count = 0;
for(;
{
count ++;
double MonIntRate = b / 12;
double MonInt = a * MonIntRate;
double PrinPymt = c - MonInt;
double TotIntPd = TotIntPd + MonInt;
double Balance = Balance - PrinPymt;
cout << count << "\t" << MonInt << "\t" << Balance << "\t" << TotIntPd << "\n";
if(Balance <= c) cout << "\tLast Pymt\t" << Balance + (Balance * MonIntRate) << "\t"
<< (Balance + (Balance * MonIntRate)) + TotIntPd;
else break;
}
} | http://cboard.cprogramming.com/cplusplus-programming/6914-infinite-loop.html | CC-MAIN-2015-32 | refinedweb | 384 | 67.18 |
This document describes the details of the
Model API. It builds on the
material presented in the model and database
query guides, so you’ll probably want to read and
understand those documents before reading this one.
Throughout this reference we’ll use the example Weblog models presented in the database query guide.
To create a new instance of a model, just instantiate it like any other Python class:
Model(**kwargs)[source]¶
The keyword arguments are simply the names of the fields you’ve defined on your
model. Note that instantiating a model in no way touches your database; for
that, you need to
save().
Note
You may be tempted to customize the model by overriding the
__init__
method. If you do so, however, take care not to change the calling
signature as any change may prevent the model instance from being saved.
Rather than overriding
__init__, try using one of these approaches:
Add a classmethod on the model class:
from django.db import models class Book(models.Model): title = models.CharField(max_length=100) @classmethod def create(cls, title): book = cls(title=title) # do something with the book return book book = Book.create("Pride and Prejudice")
Add a method on a custom manager (usually preferred):
class BookManager(models.Manager): def create_book(self, title): book = self.create(title=title) # do something with the book return book class Book(models.Model): title = models.CharField(max_length=100) objects = BookManager() book = Book.objects.create_book("Pride and Prejudice")
Model.
from_db(db, field_names, values)[source]¶ ] new =.
In older versions, you could check if all fields were loaded by consulting
cls._deferred. This attribute is removed and
django.db.models.DEFERRED is new.
If you delete a field from a model instance, accessing it again reloads the value from the database:
>>> obj = MyModel.objects.first() >>> del obj.field >>> obj.field # Loads the field from the database
In older versions, accessing a deleted field raised
AttributeError
instead of reloading it.
Model.
refresh_from_db(using=None, fields=None)[source]¶
If you need to reload a model’s values from the database, you can use the
refresh_from_db() method. When this method is called without arguments the
following is done:)
Model.
get_deferred_fields()[source]¶
A helper method that returns a set containing the attribute names of all those fields that are currently deferred for this model.
There are three steps involved in validating a model:
Model.clean_fields()
Model.clean()
Model.validate_unique()
All three steps are performed when you call a model’s
full_clean() method.
When you use a
ModelForm, the call to
is_valid() will perform these validation steps for
all the fields that are included on the form. See the ModelForm
documentation for more information. You should only
need to call a model’s
full_clean() method if you plan to handle
validation errors yourself, or if you have excluded fields from the
ModelForm that require validation.
Model.
full_clean(exclude=None, validate_unique=True)[source]¶
This method calls
Model.clean_fields(),
Model.clean(), and
Model.validate_unique() (if
validate_unique is
True), in that
order and raises a
ValidationError that has a
message_dict attribute containing errors from all three stages.
The optional
exclude argument can be used to provide a list of field names
that can be excluded from validation and cleaning.
ModelForm uses this argument to exclude fields that
aren’t present on your form from being validated since any errors raised could
not be corrected by the user.
Note that
full_clean() will not be called automatically when you call
your model’s
save() method. You’ll need to call it manually
when you want to run one-step model validation for your own manually created
models. For example:
from django.core.exceptions import ValidationError try: article.full_clean() except ValidationError as e: # Do something based on the errors contained in e.message_dict. # Display them to a user, or handle them programmatically. pass
The first step
full_clean() performs is to clean each individual field.
Model.
clean_fields(exclude=None)[source]¶
This method will validate all fields on your model. The optional
exclude
argument lets you provide a list of field names to exclude from validation. It
will raise a
ValidationError if any fields fail
validation.
The second step
full_clean() performs is to call
Model.clean().
This method should be overridden to perform custom validation on your model.
Model.
clean()[source]¶
This method should be used to provide custom model validation, and to modify attributes on your model if desired. For instance, you could use it to automatically provide a value for a field, or to do validation that requires access to more than a single field:
import datetime from django.core.exceptions import ValidationError from django.db import models from django.utils.translation import ugettext_lazy as _ class Article(models.Model): ... def clean(self): # Don't allow draft entries to have a pub_date. if self.status == 'draft' and self.pub_date is not None: raise ValidationError(_('Draft entries may not have a publication date.')) # Set the pub_date for published items if it hasn't been set already. if self.status == 'published' and self.pub_date is None: self.pub_date = datetime.date.today().
Model.
validate_unique(exclude=None)[source]¶
This method is similar to
clean_fields(), but validates all
uniqueness constraints on your model instead of individual field values. The
optional
exclude argument allows you to provide a list of field names to
exclude from validation. It will raise a
ValidationError if any fields fail validation.
Note that if you provide an
exclude argument to
validate_unique(), any
unique_together constraint involving one of
the fields you provided will not be checked.
To save an object back to the database, call
save():
Model.
save(force_insert=False, force_update=False, using=DEFAULT_DB_ALIAS, update_fields=None)[source]¶
If you want customized saving behavior, you can override this
save()
method. See Overriding predefined model methods for more details.
The model save process also has some subtleties; see the sections below.
If a model has an
AutoField — an auto-incrementing
primary key — then that auto-incremented value will be calculated and saved as
an attribute on your object the first time you call
save():
>>>
in your model. See the documentation for
AutoField
for more details.
pkproperty¶
Model.
pk¶
Regardless of whether you define a primary key field yourself, or let Django
supply one for you, each model will have a property called
pk. It behaves
like a normal attribute on the model, but is actually an alias for whichever
attribute is the primary key field for the model. You can read and set this
value, just as you would for any other attribute, and it will update the
correct field in the model.
If a model has an
AutoField but you want to define a
new object’s ID explicitly when saving, just define it explicitly before
saving, rather than relying on the auto-assignment of the ID:
>>>.
When you save an object, Django performs the following steps:
Emit a pre-save signal. The signal
django.db.models.signals.pre_save with fields use a Python
datetime object to store data. Databases don’t store
datetime
objects, is sent, allowing
any functions listening for that signal to take some customized
action.
You may have noticed Django database objects use the same
save() method
for creating and changing objects. Django abstracts the need to use
INSERT
or
UPDATE SQL statements. Specifically, when you call
save(), Django
follows this algorithm:
True(i.e., a value other than
Noneor the empty string), Django executes an
UPDATE.
UPDATEdidn’t update anything, and Forcing an INSERT or UPDATE below.
In Django 1.5 and earlier, Django did a
SELECT when the primary key
attribute was set. If the
SELECT found a row, then Django did an
UPDATE,
otherwise it did an
INSERT. The old algorithm results in one more query in
the
UPDATE case. There are some rare cases where the database doesn’t
report that a row was updated even if the database contains a row for the
object’s primary key value. An example is the PostgreSQL
ON UPDATE trigger
which returns
NULL. In such cases it is possible to revert to the old
algorithm by setting the
select_on_save
option to
True.
In some rare circumstances, it’s necessary to be able to force the
save() method to perform an SQL
INSERT and not fall back to
doing an
UPDATE. Or vice-versa: update, if possible, but not insert a new
row. In these cases you can pass the
force_insert=True or
force_update=True parameters to the
save() method.
Obviously, passing both parameters is an error: you cannot both insert and
update at the same time!
It should be very rare that you’ll need to use these parameters. Django will almost always do the right thing and trying to override that will lead to errors that are difficult to track down. This feature is for advanced use only.
Using
update_fields will force an update similarly to
force_update.
Sometimes you’ll need to perform a simple arithmetic task on a field, such as incrementing or decrementing the current value. The obvious way to achieve this is to do something like:
>>> product = Product.objects.get(name='Venezuelan Beaver Cheese') >>> product.number_sold += 1 >>> product.save()
If the old
number_sold value retrieved from the database was 10, then
the value of 11 will be written back to the database.
The process can be made robust, avoiding a race condition, as well as slightly faster by expressing
the update relative to the original field value, rather than as an explicit
assignment of a new value. Django provides
F expressions for performing this kind of relative update. Using
F expressions, the previous example is expressed
as:
>>> from django.db.models import F >>> product = Product.objects.get(name='Venezuelan Beaver Cheese') >>> product.number_sold = F('number_sold') + 1 >>> product.save()
For more details, see the documentation on
F expressions and their use in update queries.
If
save() is passed a list of field names in keyword argument
update_fields, only the fields named in that list will be updated.
This may be desirable if you want to update just one or a few fields on
an object. There will be a slight performance benefit from preventing
all of the model fields from being updated in the database. For example:
product.name = 'Name changed again' product.save(update_fields=['name'])
The
update_fields argument can be any iterable containing strings. An
empty
update_fields iterable will skip the save. A value of None will
perform an update on all fields.
Specifying
update_fields will force an update.
When saving a model fetched through deferred model loading
(
only() or
defer()) only the fields loaded
from the DB will get updated. In effect there is an automatic
update_fields in this case. If you assign or change any deferred field
value, the field will be added to the updated fields.
Model.
delete(using=DEFAULT_DB_ALIAS, keep_parents=False)[source]¶
Issues an SQL
DELETE for the object. This only deletes the object in the
database; the Python instance will still exist and will still have data in
its fields..
When you
pickle a model, its current state is pickled. When you unpickle
it, it’ll contain the model instance at the moment it was pickled, rather than
the data that’s currently in the database.
A few object methods have special purposes.
__str__()¶
Model.
__str__()[source
Model.
__eq__()[source
Model.
__hash__()[source
Model.
get_absolute_url()¶
Define a
get_absolute_url() method to tell Django how to calculate the
canonical URL for an object. To callers, this method should appear to return a
string that can be used to refer to the object over HTTP.
For example:
def get_absolute_url(self): return "/people/%i/" % self.id
While this code is correct and simple, it may not be the most portable way to
to write this kind of method. The
reverse() function is
usually the best approach.
For example:
def get_absolute_url(self): from django.urls import reverse return reverse('people.views.details', args=[str(self.id)])
One place Django uses
get_absolute_url() is in the admin app. If an object
defines this method, the object-editing page will have a “View on site” link
that will jump you directly to the object’s public view, as given by
get_absolute_url().
Similarly, a couple of other bits of Django, such as the syndication feed
framework, use
get_absolute_url() when it is
defined. If it makes sense for your model’s instances to each have a unique
URL, you should define
get_absolute_url()./'.
It’s good practice to use
get_absolute_url() in templates, instead of
hard-coding your objects’ URLs. For example, this template code is bad:
<!-- BAD template code. Avoid! --> <a href="/people/{{ object.id }}/">{{ object.name }}</a>
This template code is much better:
<a href="{{ object.get_absolute_url }}">{{ object.name }}</a>
The logic here is that if you change the URL structure of your objects, even
for something simple such as correcting a spelling error, you don’t want to
have to track down every place that the URL might be created. Specify it once,
in
get_absolute_url() and have all your other code call that one place.
Note
The string you return from
get_absolute_url() must contain only
ASCII characters (required by the URI specification, RFC 2396) and be
URL-encoded, if necessary.
Code and templates calling
get_absolute_url() should be able to use the
result directly without any further processing. You may wish to use the
django.utils.encoding.iri_to_uri() function to help with this if you
are using unicode strings containing characters outside the ASCII range at
all.
In addition to
save(),
delete(), a model object
might have some of the following methods:
Model.
get_FOO_display()¶
For every field that has
choices set, the
object will have a
get_FOO_display() method, where
FOO is the name of
the field. This method returns the “human-readable” value of the field.
For example:
from django.db import models class Person(models.Model): SHIRT_SIZES = ( ('S', 'Small'), ('M', 'Medium'), ('L', 'Large'), ) name = models.CharField(max_length=60) shirt_size = models.CharField(max_length=2, choices=SHIRT_SIZES)
>>> p = Person(name="Fred Flintstone", shirt_size="L") >>> p.save() >>> p.shirt_size 'L' >>> p.get_shirt_size_display() 'Large'
Model.
get_next_by_FOO(**kwargs)¶
Model.
get_previous_by_FOO(**kwargs)¶
a
DoesNotExist exception when appropriate.
Both of these methods will perform their queries using the default manager for the model. If you need to emulate filtering used by a custom manager, or want to perform one-off custom filtering, both methods also accept optional keyword arguments, which should be in the format described in Field lookups.
Note that in the case of identical date values, these methods will use the primary key as a tie-breaker. This guarantees that no records are skipped or duplicated. That also means you cannot use those methods on unsaved objects.
DoesNotExist¶
Model.
DoesNotExist¶
This exception is raised by the ORM in a couple places, for example by
QuerySet.get() when an object
is not found for the given query parameters.
Django provides a
DoesNotExist exception as an attribute of each model
class to identify the class of object that could not be found and to allow
you to catch a particular model class with
try/except. The exception is
a subclass of
django.core.exceptions.ObjectDoesNotExist. | http://doc.bccnsoft.com/docs/django-docs-1.10-en/ref/models/instances.html | CC-MAIN-2019-13 | refinedweb | 2,503 | 57.67 |
Type: Posts; User: darwen
Actually it sounds like this is a windows message loop problem.
By calling show dialog the dialog is setting up its one message loop which pumps the window's messages around.
The hardware...
Or in your case :
wchar_t ch = ' '
array<wchar_t>^ returnstr = gcnew array<wchar_t>(10);
int pos = str->IndexOf(ch);
str->CopyTo(0, returnstr, 0, pos);
return gcnew String(returnstr);
Dead easy :
const wchar_t *test = L"hello there";
String ^xx = gcnew String(test);
Darwen.
To fix your original problem :
foreach (HtmlAgilityPack.HtmlNode node in rowNodes)
{
string url_1 = node.InnerText;
urlpool0 = new ArrayList(); // problem is here ! creating a new...
C++ is a great starting point in my opinion but beware ! The learning curve is very steep (especially when you start moving into templates etc).
For instance if you know (and I really do mean know...
Try not to mix .NET and native C++ constructs like this.
Vector is a native C++ container and in .NET you should be using the .NET equivalents : System.Collections.Generic.List for instance.
...
You can do this using delegates :
e.g.
// have to fill in the types with '...' yourself, don't know what they are
IEnumerable<...> GetQuery(... checkData, Guid guidToFind, Func<Guid, ...>...
You've not shown any code which is the slow part - you mentioned TickImporter.ImportDataToStore can you post that ?
If this is loading 20 files from disc at the same time then this'll be causing...
What you're doing isn't C++ - it's C++/CLI which is a completely different language although it looks very similar. Google for "native C++" and "managed C++" and see the differences.
Some of your...
One thing I've done to speed up this sort of thing before is to add all elements to a list (unordered) first, then sort the list.
You can then easily remove duplicates by checking adjacent...
Just about any application will use data structures. Try writing one without them !
Darwen.
I think it is essential for any programmer to know standard data structures nomatter which language they use.
They should also know basic algorithms like search algorithms.
Darwen
You can't return structs from PInvoke methods.
You'll have to do it like this :
// C++
struct GetPluginData
{
int data[22];
Sounds like a homework question to be, but I'll answer it regardless.
The behaviour of standard data structures is something everyone should know in any language, not just C#.
In order to...
Why is the c# copying the exe into it's local folder ? Are you adding the game exe as a reference ? It won't do this by default. You shouldn't be adding the exe as a reference anyway if you are doing...
This really isn't the way to do things. DoEvents or anything like it should be avoided at all costs because it can cause all sorts of problems (re-entrancy of code etc etc).
What you really should...
Don't worry about creating a new instance every time. .NET will clean up automatically for you (that's what working in a managed language is all about).
You could always make the instance of the...
Doing this in WPF is really easy. There's lots of examples on the net as well - see here..
Darwen.
You should have a member called 'cmbDocumentClasses'. Windows forms puts a member variable named the same as the 'Name' property for each control into the form.
e.g.
public void Example()...
The only way to do this is to use generics. Otherwise you can't define the type of the return value.
Darwen.
Have you looked at string.Replace method ?
e.g.
string x = "AT52156123156\r\nNL648312315";
string result = x.Replace("AT", string.Empty);
result = result.Replace("NL", string.Empty);
I personally wouldn't use a regex for this.
Try this :
using System;
using System.Linq;
// ...
In answer - yes, you can do this with reflection.
However you shouldn't - it leads to runtime errors rather than compile-time errors (so they tend to happen infront of users which makes you look...
Another consideration is memory.
List (unless you set its capacity) continually doubles the memory it uses when its internal buffer becomes full.
This can lead to large sections of unused...
Or server-side developement. Or linux.
And that's if they write a UI at all.
Oooh I love 10,000 lines of xml config files and console apps.
Reminds me of when I started by University degree... | http://forums.codeguru.com/search.php?s=341a3b9fd916f3eb57459fedc33c0183&searchid=6446505 | CC-MAIN-2015-11 | refinedweb | 742 | 69.58 |
We are pleased to release version 0.8.0 of the Snowplow Java Tracker. This release introduces several performance upgrades and a complete rework of the API. Many thanks to David Stendardi from Viadeo for his contributions!
In the rest of this post we will cover:
- API updates
- Emitter changes
- Performance
- Changing the Subject
- Other improvements
- Upgrading
- Documentation
- Getting help
1. API updates
This release introduces a host of API changes to make the Tracker more modular and easier to use. Primary amongst these is the introduction of the builder pattern for almost every object in the Tracker. This pattern lets us:
- Set default values for almost everything without the need for overloaded functions
- Add features without breaking the API in the future
- Add new events for Tracking without changing the API
Please read the technical documentation for notes on setting up the Tracker.
Tracker setup
To setup a basic Tracker under the new API:
OkHttpClient client = new OkHttpClient<span class="o">(); HttpClientAdapter adapter = OkHttpClientAdapter.builder() .url("") .httpClient(client) .build<span class="o">(); Emitter emitter = BatchEmitter.builder() .httpClientAdapter(adapter) .build<span class="o">(); Tracker tracker = new Tracker.TrackerBuilder(emitter, "namespace", "appid") .base64(true) .platform(DevicePlatform.Desktop) .build<span class="o">();
Event tracking: old approach
We have also updated how you track events. In place of many different types of
trackXXX functions, we now have a single
track function which can take different types of
Events as its argument. These events are also built using the builder pattern.
Let’s look at how we were tracking a page view event before, in version 0.7.0:
tracker.trackPageView("", "example", ""<span class="o">);
For events like an Ecommerce Transaction it quickly becomes difficult to understand:">);
Event tracking: new approach
By contrast, here is a page view in version 0.8 number of items here! .build());
The new builder pattern is slightly more verbose but the readbility is greatly improved. You also no longer have to pass in
null entries for fields that you don’t want to populate.
2. Emitter changes
The Emitter has also undergone a major overhaul in this release to allow for greater modularity and asynchronous capability.
Emitter setup
Firstly, we have removed the need to define whether you would like to send your events via
GET or
POST by introducing two different types of Emitters instead. You now use the
SimpleEmitter for
GET requests and the
BatchEmitter for
POST requests.
You can build the emitters like so:
Emiter simple = SimpleEmitter.builder() .httpClientAdapter( ... ) .threadCount(20) // Default is 50 .requestCallback( ... ) // Default is Null .build<span class="o">(); Emiter batch = BatchEmitter.builder() .httpClientAdapter( ... ) .bufferSize(20) // Default is 50 .threadCount(20) // Default is 50 .requestCallback( ... ) // Default is Null .build<span class="o">();
Builder functions explained:
httpClientAdapteradds an
HttpClientAdapterobject for the emitter to use
threadCountsets the size of the Thread Pool which can be used for sending events
requestCallbackis an optional callback function which is run after each sending attempt; it will return failed event Payloads for further processing
bufferSizeis only available for the
BatchEmitter</code>; it allows you to set how many events go into a
POSTrequest
HttpClient setup
Secondly, we now offer more than one
HttpClient for sending events. On top of the
ApacheHttpClient we have now added an
OkHttpClient. The following objects are what we would embed in the
httpClientAdapter( ... ) builder functions above:
CloseableHttpClient apacheClient = HttpClients.createDefault<span class="o">(); HttpClientAdapter apacheClientAdapter = ApacheHttpClientAdapter.builder() .url("") .httpClient(apacheClient) .build<span class="o">(); OkHttpClient okHttpClient = new OkHttpClient<span class="o">(); HttpClientAdapter okHttpClientAdapter = OkHttpClientAdapter.builder() .url("") .httpClient(okHttpClient) .build<span class="o">();
Thus you now have control over the actual client used for sending and can define your own custom settings for it.
Builder functions explained:
urlis the collector URL where events are going to be sent
httpClientis the
HttpClientto use
Many thanks to David Stendardi from Viadeo for this contribution in making the Tracker so modular!
3. Performance
This release also fixes a major performance issue experienced around sending events. The Tracker was, up until now, sending all events using a synchronous blocking model. To fix this we are now sending all of our events using a pool of background threads; the pool size is configurable in the emitter creation step. As a result:
- All event sending is now non-blocking and fully asynchronous
- You control the amount of events that can be sent asychronously to directly control the load on your tracker’s host system
To emphasise the speed changes we performed some stress testing on the Tracker with the previous model and the new model:
- 1000
PageViewevents were sent into the Tracker
- Request type was
- Buffer size was 10
Reported Times:
- Version 0.7.0 took ~40 seconds to finish sending, blocking execution
- Version 0.8.0 took ~2-3 seconds to finish sending, non-blocking execution
That is more than a 1300% speed increase! This increase could potentially get even bigger when running the Tracker on more powerful systems and increasing the Thread Pool accordingly.
We also spent some time exploring the most efficient buffer-size for the Tracker on our system. To test this we sent 10k events from the Tracker and recorded the time taken to successfully send all of them. As you would imagine the larger the buffer-size the lower the latency in getting the events to the collector:
If you are expecting large event volumes, do adjust your buffer size and thread count to allow the Tracker to handle this. However please be aware of the 52000 byte limit per request, if you set the buffer too high it is likely you won’t be able to successfully send anything!
4. Changing the Subject
In an environment where many different Subjects are involved (e.g. a web server or a RabbitMQ bridge), having a single Subject associated with a Tracker is very restrictive.
This release lets you pass a Subject along with your event, to be used in place of the Tracker’s Subject. In this way, you can rapidly switch Subject information between different events:
// Make multiple Subjects Subject s1 = new Subject.SubjectBuilder() .userId("subject-1-uid") .build<span class="o">(); Subject s2 = new Subject.SubjectBuilder() .userId("subject-2-uid") .build<span class="o">(); // Track event with Subject s1 tracker.track(PageView.builder() .pageUrl("pageUrl") .pageTitle("pageTitle") .referrer("pageReferrer") .subject(s1) .build()); // Track event with Subject s2 tracker.track(PageView.builder() .pageUrl("pageUrl") .pageTitle("pageTitle") .referrer("pageReferrer") .subject(s2) .build());
5. Other improvements
Other changes worth highlighting:
- Added several new key-value pairs to the Subject class with new
setXXXfunctions (#125, #124, #88, #87)
- Made the
TrackerPayloadmuch more typesafe by only allowing String values (#127)
- Added a fail-fast check for an invalid collector URL (#131)
6. Upgrading
The new version of the Snowplow Java Tracker is 0.8.0. The Java Setup Guide on our wiki has been updated to the latest version.
Please note this releae breaks compatibility with Java 6; from now on we will only be supporting Java 7+.**
7. Documentation
You can find the updated Java Tracker usage manual on our wiki.
You can find the full release notes on GitHub as Snowplow Java Tracker v0.8.0 release.
8. Getting help
Despite its version number the Java Tracker is still relatively immature and we will be working hard with the community to improve it over the coming weeks and months; in the meantime, do please share any user feedback, feature requests or possible bugs.
Feel free to get in touch or raise an issue Java Tracker issues on GitHub! | https://snowplowanalytics.com/blog/2015/09/14/snowplow-java-tracker-0-8-0-released/ | CC-MAIN-2022-27 | refinedweb | 1,252 | 53.1 |
IntroductionGenerally, when you write your game, very little thought will initially be given to system specifications. The usual train of thought might be "well it runs on my system, so i'll publish this as the minimum or recommended specs in a readme file, or on my website". However, what will your game do if the player's PC fails to meet your expectations? This article will outline what sort of things you should be checking for, and when. There is a decidedly windows and DirectX slant to this article as these are my platforms of choice. The concepts are transferable to other platforms and frameworks, however, the source code i give here is not.
Why should i even attempt to detect system specifications?
It gives a good impressionChecking for the user's system specification will ensure that all users who can run your game will run it without issue, and those who cannot run it will be presented with something useful. A game which crashes or even worse, takes down the player's system when they try to start it, with no reason or rhyme as to why will discourage them from trying again, and what's worse they might even go onto Twitter and disparage your game's name. One player spreading bad news about your game is enough to deter many other potential players.
It cuts down on support issuesIf the player receives a well thought out and instructive error message in the event of a failure, they will know who to contact, and how. The error message could even advise them on what they need to do next before they call you, e.g. to purchase a better graphics card, or delete some programs to free up space, or even to change their display resolution. If they aren't told this beforehand, they will have to contact someone. That someone might be you, and this is your time they will take up which is better spend producing more games.
It helps with system stabilityChecking for the correct capaibilities beforehand will cut down on the amount of crashes that a player might encounter if their machine isn't quite up to par. As outlined above, a game which crashes is a bad thing, but worse than that, a complete system crash (e.g. by switching to full screen mode with no easy way out) might risk other data on the user's machine, causing damage as well as annoyance.
How and when should i detect system specifications?You should attempt to detect system specifications whenever your game starts. This should preferably be done before any systems are properly initialised, so that windows is still in a state where the user can properly click any error messages away and get back to what they were doing before trying to run your game. In my own game, I have split system specifications detection into several classes, each of which is responsibile for detecting the state of one subsystem. Each is called in turn, with the simplest checks done first as some depend on others for success. It is best to leave the code which checks for system specifications till last in your game, as you won't know what specifications your game needs until this point and are likely to go back and change it repeatedly, otherwise. Important subsystems to check are:
- System RAM - is there enough to run the game?
- CPU speed - is the CPU fast enough? Is it multi-core?
- Hard disk space - is there enough for save game files and other data you might put there?
- Hard disk speed - will your game fall over when streaming assets from disk?
- GPU speed and video RAM - Is the graphical performance of the machine sufficient?
- Network connectivity - Is there a network connection? Is the internet reachable, e.g. to look for updates?
- Windows version - Is the version of windows up to date enough to do what you need to do?
Checking system RAM sizeYou can check the system RAM size on windows using the GlobalMemoryStatusEx() function, which will tell you amongst other things the amount of free and total RAM, and the amount of free and total pagefile:
const ONE_GIG = 1073741824; MEMORYSTATUSEX status; ZeroMemory(&status); status.dwLength = sizeof(status); GlobalMemoryStatusEx(&status); if (status.ullTotalPhys < ONE_GIG) { MessageBox(0, "You don't have enough RAM, 1GB is needed", "Epic Fail", MB_OK); exit(0); }
Checking video RAM sizeYou can check the video RAM size using DXGI, and then based upon this you could load lower resolution textures to cut down on memory usage, or you could outright tell the player to get a new graphics card. I prefer the first of these two options wherever possible, as it gives a more friendly experience. Only once you have exhausted all possibilities should you give up. The code to detect video RAM is relatively simple:
#includeThe FeatureLevel variable is also useful here, as it will show you which of the graphics card features the PC actually supports.
#include #include int main() { HRESULT hr; D3D_FEATURE_LEVEL FeatureLevel; // Create DXGI factory to enumerate adapters CComPtr DXGIFactory; hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&DXGIFactory); if(SUCCEEDED(hr)) { CComPtr Adapter; hr = DXGIFactory->EnumAdapters1(0, &Adapter); if(SUCCEEDED(hr)) { CComPtr Device; CComPtr Context; hr = D3D11CreateDevice(Adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0, D3D11_SDK_VERSION, &Device, &FeatureLevel, &Context); if(SUCCEEDED(hr)) { DXGI_ADAPTER_DESC adapterDesc; Adapter->GetDesc(&adapterDesc); std::wstring Description = adapterDesc.Description; INT64 VideoRam = adapterDesc.DedicatedVideoMemory; INT64 SystemRam = adapterDesc.DedicatedSystemMemory; INT64 SharedRam = adapterDesc.SharedSystemMemory; std::wcout << L"***************** GRAPHICS ADAPTER DETAILS ***********************"; std::wcout << L"Adapter Description: " << Description; std::wcout << L"Dedicated Video RAM: " << VideoRam; std::wcout << L"Dedicated System RAM: " << SystemRam; std::wcout << L"Shared System RAM: " << SharedRam; std::wcout << L"PCI ID: " << Description; std::wcout << L"Feature Level: " << FeatureLevel; } } } }
Detecting the windows versionDetecting the windows version may be important if you only wish to support certain types of installation. For example, you might not want the user to run your game on a server, or you might want to ensure, before your game even tries to access DirectX, that they are not running windows XP or earlier if this will have an impact on your game. Detecting the version information of windows is very simple and should be done using the GetVersionEx win32 function:
WindowsVersion::WindowsVersion() : Checkable("windows/version") { OSVERSIONINFOEX vi; ZeroMemory(&vi, sizeof(OSVERSIONINFOEX)); vi.dwOSVersionInfoSize = sizeof(vi); GetVersionEx((LPOSVERSIONINFO)&vi); vMajor = vi.dwMajorVersion; vMinor = vi.dwMinorVersion; spMajor = vi.wServicePackMajor; spMinor = vi.wServicePackMinor; Build = vi.dwBuildNumber; Platform = vi.dwPlatformId; ProductType = vi.wProductType; } bool WindowsVersion::IsServer() { return (ProductType == VER_NT_DOMAIN_CONTROLLER || ProductType == VER_NT_SERVER); } bool WindowsVersion::IsGreaterThanXP() { return (Platform == VER_PLATFORM_WIN32_NT && vMajor >= 6); }Please note, however, that there is an important gotcha to this function call. You cannot use it to detect if the user is running windows 8.1, only version 8.0. This is because the call will only return the newer version number if your executable embeds the correct manifest. If you want to detect this, you should use the newer Version helper API from the Windows 8.1 SDK instead. If all you want to do is detect XP, or anything older than windows 8.0, then GetVersionEx will do fine.
Detecting hard disk spaceDetecting hard disk space is relatively simple, and can be done via the GetDiskFreeSpaceEx function. You should always avoid the simpler GetDiskFreeSpace function, which operates in number of clusters rather than number of free bytes, taking more work to get a simple answer rather than just returning a simple 64 bit value you can check. Using this function is very simple:
INT64 userbytes; INT64 totalbytes; INT64 systembytes; BOOL res = GetDiskFreeSpaceEx(L".", (PULARGE_INTEGER)&userbytes, (PULARGE_INTEGER)&totalbytes, (PULARGE_INTEGER)&systembytes); std::cout << "Your disk has " << userbytes << " bytes available for use.";Note the difference between userbytes and systembytes in the example above. The userbytes value is the amount of disk space available to the current user, as the disk might be limited by a quota. The systembytes is the total space ignoring quotas, available to all users. Therefore, you should usually check the first result field.
Detecting CPU speedThere are many ways to detect the CPU speed. Some of the more common ones are:
- Using WMI to read the Win32_Processor information - my personally preferred method
- Using the machine code CPUID instruction via inline assembly - less portable, but accurate
- Using a busy loop to calculate CPU - mostly deprecated as this is extremely hard to get right on multi-tasking operating systems, and not recommended outside of kernel level code
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now | https://www.gamedev.net/articles/programming/general-and-gameplay-programming/how-to-check-that-a-players-pc-meets-your-requirements-r4014/?tab=comments | CC-MAIN-2019-39 | refinedweb | 1,441 | 53.1 |
Hide Forgot
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20020830
Description of problem:
I was advised by RH Support to file this defect report...
As root...
Running 'up2date --configure' yields:
Traceback (most recent call last):
File "/usr/sbin/up2date", line 11, in ?
import rpm
ImportError: No module named rpm
Running 'up2date --register' yields:
Traceback (most recent call last):
File "/usr/sbin/up2date", line 11, in ?
import rpm
ImportError: No module named rpm
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
See description.
Actual Results: See description.
Expected Results: No error should be reported
Additional info:
What is
rpm -q rpm-python
returning?
Also, what version of Red Hat Linux are you using?
See RH Support ticket # 225471. Since I have no RHN, I need a way to get these
fixes without RHN.
stevewi@sw722142:~ $ rpm -q rpm-python
rpm-python-4.0.4-7x.18
RH 8.0 Personal
Whats the ouput of:
rpm -V rpm-python
and:
rpm -V up2date
also:
rpm -q rpm
rpm -V rpm
Haven't seen this error before, and since it looks pretty
serious right off the bat, I'm thinking there is something
wrong with your install (possibly rpm or rpm-python didn't
get installed somehow...). So going down that path first...
stevewi@sw722142:~ $ rpm -V rpm-python
Unsatisfied dependencies for rpm-python-4.0.4-7x.18: rpm = 4.0.4, popt = 1.6.4
stevewi@sw722142:~ $ rpm -V up2date
SM?....T c /etc/sysconfig/rhn/up2date
..?..... c /etc/sysconfig/rhn/up2date-keyring.gpg
SM5....T /usr/share/rhn/up2date_client/config.pyc
stevewi@sw722142:~ $ rpm -q rpm
rpm-4.1-1.06
stevewi@sw722142:~ $ rpm -V rpm
....L... /usr/lib/rpm/athlon-linux
....L... /usr/lib/rpm/i386-linux
S.5..... /usr/lib/rpm/i386-linux/macros
....L... /usr/lib/rpm/i486-linux
....L... /usr/lib/rpm/i586-linux
....L... /usr/lib/rpm/i686-linux
....L... /usr/lib/rpm/noarch-linux
hmm:
stevewi@sw722142:~ $ rpm -q rpm-python
rpm-python-4.0.4-7x.18
Thats the wrong version of rpm-python. Thats a version
for the 7.x series. I missed that before.
Not sure how that version got on there, but you need
rpm-python-4.1-1.06.i386.rpm
You can get it from:
Was this system an upgrade from a 7.x box? Something odd happened
at some point, and you got an older version of rpm-python installed.
This was an upgrade from RH 7.3. I installed the package you mention and all
seems to be right with up2date. Thanks a lot. BTW, I like the Bluecurve stuff
a lot...
Fixed, closing. | https://bugzilla.redhat.com/show_bug.cgi?id=82483 | CC-MAIN-2020-50 | refinedweb | 456 | 69.38 |
J
... the
diagrams. Using the same canvas class we are going to draw a box around the text
in our show text MIDlet Example. We have created a class called CanvasBoxText
Check Box Midlet Example
J2ME CheckBox ChoiceGroup MIDlet
...;,
"J2ME", "J2EE", "JSF"). if user select a check
box... that is:
getText()
setText(String text)
Constructor StringItem Syntax
|
Text
Field MIDlet J2ME |
J2ME Contact List |
Date Field MIDlet J2ME... URL | Text
MIDlet J2ME | Arc
MIDlet J2ME |
Simple Line Canvas J2ME...
| Line Canvas MIDlet
| Align Text
MIDlet | Text
in J2ME | J2ME
Canvas
calculator midlet
calculator midlet give me code calculator midlet in bluetooth application with j2me
j2me
j2me why we extends class MIDlet in j2me application
Align Text MIDlet Example
to the text.
In this J2ME Midlet we are going to set the text at different locations...
Align Text MIDlet Example
... in this small j2me
example..
int width = getWidth();
j2me
j2me i want code in which when we input same data in midlet page ... than on ok button it will show in servlet page.. code that pass information to mobile application page to servlet page
Phone Book Midlet Example
button then
number text box will open, now user enters phone number. In case...() method will be called. And both
"Enter Name" text box and the "...
J2ME Contact List
J2ME
J2ME What is the source code for Mortgage Calculator. using text fields are home price,loan amount,down payments,down payment percent, Annual tax... in J2ME language for Symbian developing.
so please help me
Creating Midlet Application For Login in J2ME
Creating MIDlet Application For Login in J2ME
This example show to create the MIDlet application for user login . All
MIDlet applications for the MIDP ( Mobile Information
Text MIDlet Example
Text MIDlet Example
With the help of text midlet example, we are going to show text using...*;
public class TextExample extends MIDlet{
select box and text box validations
select box and text box validations hi,
any one please tell me how to set validations for select box and text boxes using bean classes?
thank you
Please visit the following link:
J2ME Tutorial
the text
in our show text MIDlet Example.
J2ME Canvas... of
timer class for drawing the canvas.
J2ME Text Box...;
Creating MIDlet Application For Login in J2ME
This example show
enable text box and label on selection
enable text box and label on selection hello,
Please tell me how to enable label and text box on selection of drop down list box.
in drop down list box all values come from database.
please reply
Text box control--keypress event
Text box control--keypress event In my form have a text box...My requirement is... when is type characters in that box... before typing the 3rd character the space should come automatically or programmatically...the sturucture
J2me - MobileApplications
J2me Hi, I would like to know how to send orders linux to a servlet which renvoit answers to a midlet. thank you
J2ME Servlet Example
J2ME Servlet Example
... you, how to
create the servlet and implement it with the midlet. In this servlet... steps.
For Details follow this link: J2ME Cookies Example
To display suggestions in a text box - Ajax
, to get the suggestions i mean when i enter the alphabet in a text box(For ex:'A'), the names that starts from 'A' have to display in the text box... enter the character in A in the text box,
The names that starts from A have
J2ME... text fields on my mobile phone i.e.nokia -n79.please Help
import... javax.microedition.midlet.MIDlet;
public class MoneyL extends MIDlet
implements CommandListener
J2ME Books
J2ME Books
Free
J2ME Books
J2ME programming camp...;
The
Enterprise J2ME
This book helps experienced Java
J2ME Cookies Example
J2ME Cookies Example
.... In this
example we are creating a MIDlet ( CookieMIDlet )
for access... MIDlet.
The Application is as follows:
CookieMIDlet.java
J2ME Icon MIDlet Example
J2ME Icon MIDlet Example
... element and an array of image element.
In the Icon MIDlet class we are creating...;javax.microedition.midlet.*;
public class SlideImage extends MIDlet{
Text Field Midlet Example
Text Field MIDlet Example
This example illustrates how to insert text field in your form... maxSize)
setString(String text)
size()
Application
J2ME question
J2ME question Lets say i have 2 screens. One for new user, another for existing user. Currently, the midlet contains radio boxes that allows users... the user chooses "new user". IM stuck at if command part
J2ME Tutorials
how to create a text box using awt
how to create a text box using awt give an example how creat multi buttons & text boxes
J2ME Timer MIDlet Example
J2ME Timer MIDlet Example
This Example shows how to use of timer class. In this example we are using
the Timer class to create the time of execution of application
j2me database question
j2me database question **Is there any possibility to install a database into the mobile.
If possible how can i connect it through midlet(j2me)**
pls help me
J2ME Read File
of this file by the help of j2me midlet.
...
J2ME Read File
In this J2ME application, we are going to read the specified file.
This example
J2ME Item State Listener Example
the
ItemStateListener interface in the j2me midlet. The ItemStateListener interface...
J2ME Item State Listener Example
...;class ItemStateListenerMIDlet extends MIDlet{
j2me - MobileApplications
j2me Hi,
I have developed a midlet application in j2me now i want...,
For more information on J2me visit to :
Thanks
j2me - MobileApplications
j2me i am trying to load one image in j2me program..but get... class Midlet extends MIDlet
implements CommandListener
{
private Display... ImageItem imageItem;
public Midlet()
{
display = Display.getDisplay
Get Help MIDlet Example
Get Help MIDlet Example
This example illustrates how to take help from any other text file which is
stored in res folder in your midlet. In this example we are creating
j2me code - MobileApplications
j2me code Hi Roseindiamembers,
I want immediate help from you for "how to extract picture taken and date and time in j2me midlet?"
Thanks in advance
Regards
susmitha
Change background color of text box - Java Beginners
Change background color of text box Hi how can i change the background color to red of Javascript text box when ever user enters an incorrect value while on entering a correct value its background color should change green
J2ME Form Class
J2ME Form Class
In this J2ME Extends Form example, we are going to discuss
about form... type of items such as images, text and text fields to get or show
values to users
Rectangle Canvas MIDlet Example
Rectangle Canvas MIDlet Example
... of rectangle in J2ME.
We have created CanvasRectangle class in this example...;extends MIDlet{
private Display display;
image application
j2me image application i can not get the image in my MIDlet .........please tell me the detailed process for creating immutable image without Canvas
j2me project - Java Beginners
j2me project HOW TO CREATE MIDLET WHICH IS A GPRS BASED SOLUTION... SALES DATA FROM THE SERVER.
THIS MIDLET IS FOR THE PROJECT MEDICAL CENTRAL...://
Thanks
Radio Button in J2ME
Radio Button in J2ME
In this tutorial you will see the MIDlet Example that is going to
demonstrate, how to create the radio button in J2ME using MIDlet. The radio button
J2ME Draw String
J2ME Draw String
... on the
screen. Here in this example, we are going to show the string in J2ME. For that
we... of the text on the screen.
setColor()
fillRect()
drawString()
getHeight
J2ME Canvas Repaint
J2ME Canvas Repaint
In J2ME repaint is the method of the canvas class, and is used to repaint the
entire canvas class. To define the repaint method in you midlet follow
J2ME Record Store MIDlet Example
J2ME Record Store MIDlet Example
This is a simple program to record data and print it on the console. In this
example we are using the following code to open, close Event Handling Example
J2ME Event Handling Example
In J2ME programming language, Event Handling are used to handle certain type
of events that are generated at the time of loading MIDlet on the mobile
J2ME Command Class
J2ME Command Class
In the given J2ME Command Class example, we have set the various...;exit
and item
to the Midlet. And also set the priority for it such as 1, 2
List the names of classes used to create button and text box in Java.
List the names of classes used to create button and text box in Java. List the names of classes used to create button and text box in Java
radio
how can i store text box values as it is in database table
how can i store text box values as it is in database table CUSTOMER DESCRIPTION
Text box size varying in IE 7 but ok in firefox - Java Beginners
Text box size varying in IE 7 but ok in firefox I have problem with the size of text field which varies in IE 7 only but its fixed in Firefox( I... box size and placed in td with same width ,STill the first text box grows
JDialog to create a customized dialog box for entering text data
JDialog to create a customized dialog box for entering text data ... dialog box for entering text data.
The dialog can be split into panes, left pane can contain an image file and the right pane
will contain multiple text fields
retrieve value from db in text box + calendar implementation.
retrieve value from db in text box + calendar implementation. I... there is already a text box..Now i want to get the value retrieved from database in that text field how can i do it...
Please help...
Please visit
J2ME Record Store Example
J2ME Record Store Example
In this Midlet, we are going to read string data and write.... In J2ME a record store consists of a collection of records
and that records remain
form text box connection with mysql database feild - JDBC
form text box connection with mysql database feild Respected Sir,
What is the coding to connect a form text box field with mysql database table field
will you explain me with simple example..
thanking you..
J2ME Kxml Example
and how to parse the xml file in the midlet
J2ME...
J2ME Kxml Example
J2ME Kxml Example
This is the simple
j2me how to compile and run j2me program at command prompt
j2me
j2me i need more points about j2me
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/91052 | CC-MAIN-2013-20 | refinedweb | 1,761 | 61.56 |
When it comes to protecting your users' information, it can be a hassle to figure out the best course of action. Conveniently enough, Twilio offers solutions to not only bring your project to life, but to help protect your users and their information.
You can use Twilio Verify to generate one-time passcodes for your user to verify their identity and access your app with the peace of mind knowing that they have a less chance of being hacked. Secure authentication on a site helps reduce the chance of fraud and data loss.
In this article, you will learn how to develop a functional website to authenticate your users and protect their identity and access to the site.
Tutorial requirements
- Python 3.6 or newer. If your operating system does not provide a Python interpreter, you can go to python.org to download an installer.
Set up the environment
Create a project directory in your terminal called twilioverify to follow along:
$ mkdir twilioverify $ cd twilioverify $ python3 -m venv venv $ source venv/bin/activate (venv) $ pip install flask twilio python-dotenv
If you are on a Windows machine, enter the following commands in a prompt window:
$ md twilioverify $ cd twilioverify $ python -m venv venv $ venv\bin\activate (venv) $ pip install flask twilio python-dotenv
NOTE: Depending on what distribution of Python you are on, you might have to specify python3.
If you are curious to learn more about the packages installed in the command above, you can check them out here:
- The Flask framework, to create the web application that will receive message notifications from Twilio
- The python-twilio package, to send messages through the Twilio service
Create your first Twilio Verify
In order to use Twilio Verify, an API key must be generated. Head to the Twilio Verify Dashboard - you should be on a page that says Services.
Click on the red plus (+) button to create a new service. Give the service a friendly name of "site-verify". The friendly name will actually show up on the text message that is sent to people's phones so if you have another specific name you would like to use, such as "<YOUR_NAME> website verify" feel free to do so.
Click on the red Create button to confirm.
Creating the Twilio Verify service will lead you to the General Settings page where you can see the properties associated with your new Twilio Verify service.
Open your favorite code editor and create an .env file. Inside this file, create a new environment variable called
VERIFY_SERVICE_SID. Copy and paste the SERVICE SID on the web page as the value for this new variable. .
To complete the .env file, create two additional environment variables:
TWILIO_ACCOUNT_SID and
TWILIO_AUTH_TOKEN. You can find the values for these variables on the Twilio Console as seen below:
Set up a development Flask server
Make sure that you are currently in the virtual environment of your project’s directory in the terminal or command prompt. Since we will be utilizing Flask throughout the project, we will need to set up the development server. Add a .flaskenv file (make sure you have the leading dot) to your project with the following lines:
FLASK_APP=app.py FLASK_ENV=development
These incredibly helpful lines will save you time when it comes to testing and debugging your project.
FLASK_APPtells the Flask framework where our application is located.
FLASK_ENVconfigures Flask to run in debug mode.
These lines are convenient because every time you save the source file, the server will reload and reflect the changes.
Then, run the command
flask run in your terminal to start the Flask framework.
The screenshot above displays what your console will look like after running the command
flask run. The service is running privately on your computer’s port
5000 and will wait for incoming connections there. You will also notice that debugging mode is active. When in this mode, the Flask server will automatically restart to incorporate any further changes you make to the source code.
However, since you don't have an app.py file yet, nothing will happen. Though, this is a great indicator that everything is installed properly.
Feel free to have Flask running in the background as you explore the code. We will be testing the entire project at the end so that we don't make too many calls to the Twilio Verify API when we don't need to generate a new verification code.
Create a database file of eligible users
For the purposes of this tutorial, we will be hardcoding a list of accounts that are allowed to enter the website, along with their phone numbers. In a production setting, you would have to use your chosen database instead.
Keep in mind that if you were to use your own database, you would have to avoid storing passwords as plaintext. There are plenty of libraries that help developers manage passwords such as Flask Security.
Create a file in your working directory named settings.py and copy the code below into the file:
KNOWN_PARTICIPANTS = { 'herooftime@hyrule.com': '<YOUR_PHONE_NUMBER> ', 'zelda@hyrule.com': '+15552211986', 'tetra@hyrule.com': '+15553242003' }
The dictionary can be modified to include different emails and phone numbers as you please. Make sure the phone numbers are in E.164 format as seen in the settings.py example above. Be sure to add your phone number to an existing item in the dictionary, or create a new item with your information. Each username is a unique key which is helpful in our case because we want to look up the usernames quickly in the login step.
Plan the logic of the project
The flow of logic for the project goes as follows:
- A user from
KNOWN_PARTICIPANTSwill enter their email on the website homepage.
- The Flask application sends a one time passcode to the user's phone number.
- The user is prompted to enter the verification code they received from their phone to verify their identity to their account.
With that said, let's start coding!
In your working directory, create a file named app.py and copy and paste the following code:
import os from dotenv import load_dotenv from twilio.rest import Client from flask import Flask, request, render_template, redirect, session, url_for from twilio.rest import Client from twilio.base.exceptions import TwilioRestException load_dotenv() app = Flask(__name__) app.secret_key = 'secretkeyfordungeon' app.config.from_object('settings') TWILIO_ACCOUNT_SID = os.environ.get('TWILIO_ACCOUNT_SID') TWILIO_AUTH_TOKEN= os.environ.get('TWILIO_AUTH_TOKEN') VERIFY_SERVICE_SID= os.environ.get('VERIFY_SERVICE_SID') client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN) KNOWN_PARTICIPANTS = app.config['KNOWN_PARTICIPANTS']
At the top of the file, we imported the necessary Python modules and libraries so that the project can load the environment variables, the list of participants from settings.py, and start the Flask app.
The Flask app will also have a
secret_key for some level of encryption. Any random string can replace "secretkeyfordungeon". This is also required in our project since we need to store the users' account information and pass it along to other routes on the site using Flask's session.
Create the template folder for HTML pages
To build the UI for this project, you’ll be using Flask templates. Create a folder in the working directory named templates and create the following files inside of the folder:
- index.html - the landing page for the user to enter their email and request a verification token.
- verifypage.html - for the user to enter the verification code when prompted.
- success.html - page indicating the success of protection for the user's account!
Build the user login page
For this project, the user will go to the website and enter their username, which is an email in this case. Copy and paste the following code at the bottom of your app.py file:
@app.route('/', methods=['GET', 'POST']) def login(): error = None if request.method == 'POST': username = request.form['username'] if username in KNOWN_PARTICIPANTS: session['username'] = username send_verification(username) return redirect(url_for('verify_passcode_input')) error = "User not found. Please try again." return render_template('index.html', error = error) return render_template('index.html')
A
POST request is made to allow the participant's username to be stored in the Flask session. If the username is in the database, in this case the
KNOWN_PARTICIPANTS dictionary, then the username is stored in the current Flask session and the verification token is sent to the corresponding phone number. The participant is redirected to another route where they will see another form allowing them to submit the verification code.
However, if the user enters an unknown username, then the page will be refreshed with an error message.
In order to retrieve the text from the participant, a proper HTML form must be created for the participant to interact with. Create a form that takes in a
username input, as well as a button to submit. Feel free to copy and paste this barebones HTML form into the index.html file:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <h1>Login</h1> {% if error %} <p class=error><strong>Error:</strong> {{ error }} {% endif %} </head> <body> <form method="POST"> <div class="field"> <label class="label">Username</label> <input class="input" type="text" name="username" placeholder="Username"> </div> <div class="field"> <p class="control"> <button type="submit" class="button is-success"> Request verification code </button> </p> </div> </form> </body> </html>
With the form set up, it’s now time to build the `send_verification` function that will fire after the user submits the form.
Generate a verification code with Twilio Verify
Time for the fun part - calling the Twilio Verify API!
We want to send the verification token after the user enters a valid email in our database. Add the following code to the app.py file under the same route as the `login` function:
@app.route('/', methods=['GET', 'POST']) # ... def send_verification(username): phone = KNOWN_PARTICIPANTS.get(username) client.verify \ .services(VERIFY_SERVICE_SID) \ .verifications \ .create(to=phone, channel='sms')
The Twilio Client sends a verification token to the phone number associated with the username stored in the current Flask session. The specified channel in this case is SMS but it can be sent as a call if you prefer.
Keep in mind that this is a simple function that sends a verification passcode and does not yet account for error handling.
Time to test it out. On the webpage, enter the first username in settings.py that corresponds to your phone number. You should get an SMS with a passcode shortly.
Check your phone to see the notification for the verification code provided by Twilio Verify. In my case the passcode was 864831.
Verify the user's phone number
In this route, we will be taking the user input from a new form and making sure it is the same exact verification code that Twilio texted via SMS to the phone.
Let's wrap it up by creating the form on the HTML side. Copy and paste the HTML into the body of verifypage.html:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Verify your account</title> </head> <body> <h1 class="title"> Please verify your account {{username}} </h1> {% if error %} <p class=error><strong>Error:</strong> {{ error }} {% endif %} <form method="POST"> <div class="field"> <label class="label">Enter the code sent to your phone number.</label> <input class="input" type="password" name= "verificationcode" placeholder="verificationcode"> </p> </div> <div class="field"> <p class="control"> <button type="submit" class="is-success", Submit Verification Code </button> </p> </div> </form> </body> </html>
Awesome! Now the user is able verify their identity with the 6 digit code that was sent to their SMS enabled device.
But wait - how can we verify the 6 digit code if Twilio is the one that sends out the code? We need to define the
/verifyme route and define the appropriate functions so that the user can verify the passcode.
Copy and paste the following code to the bottom of the app.py file:
@app.route('/verifyme', methods=['GET', 'POST']) def verify_passcode_input(): username = session['username'] phone = KNOWN_PARTICIPANTS.get(username) error = None if request.method == 'POST': verification_code = request.form['verificationcode'] if check_verification_token(phonenumber, verification_code): return render_template('success.html', username = username) else: error = "Invalid verification code. Please try again." return render_template('verifypage.html', error = error) return render_template('verifypage.html', username = username)
We need to define the
check_verification_token() function beneath the
verify_passcode_input() code so that this function can be called within this route:
def check_verification_token(phone, token): check = client.verify \ .services(VERIFY_SERVICE_SID) \ .verification_checks \ .create(to=phone, code=token) return check.status == 'approved'
The
check_verification_token() function takes in the Flask session's phone number and the
verification_code that the user typed into the textbox and calls the Verify API to make sure they entered the one time passcode correctly.
So once the user submits the form, which then makes the
POST request to the
/verifyme route, the
verify_passcode_input() function is called. If the passcode was correct, the success page is rendered. Similar to the logic for the login page, if the participant enters an incorrect verification code, the page will refresh and show an error message. The page will also let the user enter the verification code again.
Here's an example of what you would see:
Display a success message
At this point, the user has entered their credentials and verification code correctly. You can now redirect them somewhere else as you please, but in this tutorial, you’ll redirect them to a success page, as coded in the
verify_passcode_input() function.
Copy and paste the HTML into the success.html file within the templates directory:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Successful Login!</title> </head> <body> <div class="container"> <h1 class="title"> {{username}}'s Profile </h1> <h2 class="subtitle"> Thanks for protecting your account! </h2> </div> </body> </html>
Authenticate your account with Twilio Verify
It's time to test out the app. Feel free to look at the completed code on GitHub.
Make sure that Flask is running on your terminal with
flask run. Visit and enter any username from the defined dictionary in settings.py.
I'll use Link's username which is the key for my own phone number for testing:
Check your phone to see the notification for the verification code provided by Twilio Verify. Seems like the code for my case was 864831.
After entering the code correctly, you'll see this page:
Whew, hopefully the journey to protecting your account and identity is not too difficult!
What’s next for authenticating users with Twilio Verify?
Congratulations on implementing safe practices and incorporating security measures into your project!
Another way you can use Verify for authentication is to send a verification code over email using Twilio Verify and SendGrid. In that case, you would use the username in the database or any email address on the user's profile. You also don't need to ask for the user's phone number if it's already registered in the database.
You can also build a one-time passcode protected conference line with Twilio Verify and Python or add two-factor authentication to a blog.
Let me know if you used Twilio Verify API to protect your users in your project by reaching out to me over email!
Diane Phan is a Developer on the Developer Voices team. She loves to help beginner programmers get started on creative projects that involve fun pop culture references. She can be reached at dphan [at] twilio.com or LinkedIn.
Special thanks to Amy Yee for the phone wallpaper art. | https://www.twilio.com/blog/basic-flask-python-twilio-verify | CC-MAIN-2021-10 | refinedweb | 2,600 | 55.95 |
C++ Tutorial
- Pointers III - 2017
Not long time ago, before references came out, the only option programmers had for returning objects from functions was using pointers. References provide a cleaner syntax we may still need to return objects through pointers.
The example below demonstrates returning pointers. Through the returned pointers, the code displays and modifies the values of a vector that holds the name of composers.
#include <iostream> #include <vector> #include <string> using namespace std; string *getComposer(vector
*const pVec, int i) { return &((*pVec)[i]); } int main() { vector composers; composers.push_back("Rachmaninov"); composers.push_back("Debussy"); composers.push_back("Wagner"); cout << *(getComposer(&composers;,0)) << endl; string *pStr = getComposer(&composers;,1); cout << *pStr << endl; string str = *(getComposer(&composers;,2)); cout << str << endl; *pStr = "Ravel"; cout << composers[1] << endl; return 0; }
Output is:
Rachmaninov Debussy Wagner Ravel
Let's take a look at the function, getComposer().
string *getComposer(vector
*const pVec, int i) {
The string* at the start indicates that the function getComposer() is returning a pointer to a string object which is obviously not the object itself. Now, let's look at the body of the function:
return &((*pVec)[i]);
(*pVec)[i] represents the i-th element of the vector pointed to by pVec. So, &((*pVec)[i]) becomes the address of the i-th element of the vector pointed by pVec.
How about the lines in the main().
cout << *(getComposer(&composers;,0)) << endl;
The code calls getComposer(), which returns a pointer to composers[0]. Then, the line sends the string object pointed by the pointer to cout, and as a result, "Rachmaninov" is displayed.
The next output:
string *pStr = getComposer(&composers;,1); cout << *pStr << endl;
Here, we assign a returned pointer to another pointer, pStr. Then we send *pStr to cout to print out "Debussy".
The third output:
string str = *(getComposer(&composers;,2)); cout << str << endl;
We assign the value pointed to by a returned pointer to a variable. The call to getComposer() returns a pointer to composer[2]. But it can't assign the pointer returned to str because str is a string object not a pointer. Instead, the compiler makes a copy of the string object to which the pointer points and assigns that object to str. An assignment like this one, where an object is copied, is more expensive that the assignment of one pointer to another.
Now, the final out of the code:
*pStr = "Ravel"; cout << composers[1] << endl;
We can modify the object to which a returned pointer points. Because pStr points to the element in position 1 of composer, this code alters composer[1]. As a result, composer[1] is now changed to "Ravel" from "Debussy".
When a pointer is pointing to an valid data, dereferencing such a pointer is a runtime error. Any pointer set to 0 is called a null pointer, and since there is no memory location 0, it is an invalid pointer. We should check whether a pointer is null before dereferencing it. Pointers are often set to 0 to indicate that they are not currently valid. Dereferencing pointers to data that has been erased from memory also usually causes runtime errors as shown below:
int *f() { int a = 100; return &a; }
a is deallocated when f() exits, so the pointer the function returns is invalid. As with any other variable, the value of a pointer is undefined until it is initialized, so it may be invalid.
Pointers can point to functions even though its use is less common that for pointers that points to variable or objects.
As a start, let's think about what the functions are.
Functions, like data items, have addresses. A function's address is the memory address at which the stored machine language code for the function begins. Normally, it's neither important nor useful for us, but it can be useful to a program. For example, it's possible to write a function that takes the address of another function as an argument. That enables the first function to find the second function and run it.
That approach is more awkward than simply having the first function call the second one directly, but it leaves open the possibility of passing different function addresses at different times. That means the first function can use different functions at different times.
Functions are not objects, and there is no way to copy or assign them, or to pass them as arguments directly. In particular, there is no way for a program to create or modify a function. Only the compiler can do that. All that a program can ever do with a function is call it or take its address.
Function pointers allow referencing functions with a particular signature. For example, to store the address of the standard function abs in the variable my_abs:
int (*my_abs)(int) = abs;
Nevertheless, we can call a function with another function as an argument. What really happens is that the compiler secretly translates such calls so as to use pointers to functions instead of using functions directly.
In fact, C++ allows operations with pointers to functions. The typical use of this is for passing a function as an argument to another function, since these cannot be passed dereferenced.
In other words, pointers to functions behave similarly to any other pointers. Once we have dereferenced such a pointer, however, all we can do with the resulting function is call it - or take the function's address yet again.
Declaration of pointers to functions resembles other declarations. For example, just as we write
int *p;
to say that *p has type int and implying that p is a pointer, we may write
int (*pf)(int);
to say that if we dereference pf, and call it with an int argument, the result has type int. By implication, pf is a pointer to a function that takes an int argument and returns an int result.
A pointer to a function is similar to a pointer to data but it should always be enclosed in parenthesis when using the dereference operator (*) to avoid an compilation error. It then be followed by another parenthesis containing any arguments to be passed to the function when using the *.
Because all that we can do with a function is to take its address or call it, any use of a function that is not a call is assumed to be taking its address, even without an explicit &.
Suppose we have a function with the following prototype:
int f(int); //prototype
Note that it look just like our declaration of pointer to a function:
int (*pf)(int);
Because f is a function, so is (*pf). And if (*pf) is a function, then pf must be a pointer to a function.
The declaration requires the parentheses around *pf to provide the proper operator precedence. Parentheses have a higher precedence than the * operator, so *pf(int) means pf() is a function that returns a pointer while (*pf)(int) means pf is a pointer to a function.
int (*pf)(int); // pf points to a function that returns int int *pf(int); // pf() is a function that returns a pointer to int
Once we declare pf properly, we can assign to it the address of a matching function:
int f(int); int (*pf)(int); pf = f; // pf now points to the f() function
As we discussed in the previous section, (*pf) serves as a name of a function. In other words, function pointers are invoked by name just like normal function calls.
int f(int); int (*pf)(int); pf = f; // pf now points to the f() function int i = f(4); // call f() using the function name int j = (*pf)(4); // call f() using the pointer pf int k = pf(5) // also call f() using the pointer pf
The code below shows very simple usage of function pointer:
#include <iostream> using namespace std; int square(int n) { return n*n; } int main() { int (*pf)(int); pf = square; cout << "square(7) = " << pf(7) << endl; return 0; }
Here is another example using a pointer to a function.
#include <iostream> using namespace std; int operation (int i, int j, int (*pf)(int,int)) { int k; k = (*pf)(i,j); /* This works, too. k = pf(i,j) */ return k; } int addition (int a, int b) { return a+b; } int division (int a, int b) { return a/b; } int main() { int m = operation (10, 5, addition); int n = operation (10, 5, division); cout << "m = " << m << endl; cout << "n = " << n << endl; return 0; }
Output is:
m = 15 n = 2
Here is an example of an array of pointers to functions or function pointers:
#include <iostream> using namespace std; int pass(int n) { return n; } int square(int n) { return n*n; } int add(int n) { return n+n; } int main() { int num = 100; cout << "a function pointer:" << endl; int (*pf)(int); pf = pass; cout << pf(num) << endl; cout << endl; cout << "an array of function pointers:" << endl; int (*apf[3])(int); apf[0] = pass; apf[1] = square; apf[2] = add; cout << apf[0](num) << endl; cout << apf[1](num) << endl; cout << apf[2](num) << endl; return 0; }
Output:
a function pointer: 100 an array of function pointers: 100 10000 200
Do you see an error in the following code?
#include <iostream> using namespace std; int foo(int x, int y) { return x*y;} double foo(double x, double y) { return x*y;} int main() { double (*ptr)(int, int); ptr = foo; cout << ptr(5, 4) << endl; return 0; }
It appears it's an issue related to the function signature. The two foo()s clearly have different function signatures not because of the return types but because of parameter type. In the code, ptr is a pointer to a function that's returning double. However, when we try to assign foo() to the pointer, the return type should be a int not double. So, the 2nd line in main() is wrong or ill-formed.
The constant NULL is a special pointer value indicating that the pointer points to nothing. It turns out to be convenient to have a well defined pointer value which represents the idea that a pointer does not have an object it is pointing to. It is a runtime error to dereference a NULL pointer.
The C language uses the symbol NULL for this purpose. NULL is equal to the integer constant 0, so NULL can play the role of a boolean false. Official C++ no longer uses the NULL symbolic constant. Instead it uses the integer constant 0 directly and calls it null pointer. C++ guarantees that the null pointer never points to valid data, so it is often used to indicate failure for operators or functions that otherwise return valid pointers.
int * p; p = 0; if(p != 0) // Is p valid? if(p) // Is p valid?
Deleting the null pointer doesn't do anything, and it's harmless:
int *p = 0; delete p; // OK: no action ... delete p; // OK: no action
Unlike deleting a null pointer twice, deleting a regular pointer twice is a bad mistake:
int *p = new int(20); delete p; // OK: p points to an object created by new delete p; // error: p points to memory owned by the memory manager
We have two issues regarding the second delete:
- We no longer own the object pointed to. So, the free-store manager may have changed its internal data structure in such a way that it can't correctly execute delete p again.
- The free-store manager may have recycled the memory pointed to by p so that p now points to another object; deleting that other object will cause error in our program.
- null pointer
- NULL is defined as (void *)0.
- A null pointer is a value that any pointer may take to represent that it is pointing to "nowhere".
- A null pointer refers to the value stored in the pointer itself .
- void pointer
- A void pointer is a special type of pointer that can point to somewhere without a specific type.
- A void pointer refers to the type of data it points to.
There is no such thing as a null reference. A reference must always refer to some object. As a result, if we have a variable whose purpose is to refer to another object, but it is possible that there might not be an object to refer to, we should make the variable a pointer, because then we can set it to null. On the other hand, if the variable must always refer to an object, i.e., if our design does not allow for the possibility that the variable is null, we should make the variable a reference.
Before we look at the void*, let's briefly review the memory allocation with respect to the new operator:
- The new operator returns a pointer to the allocated memory.
- A pointer value is the address of the first byte of the memory.
- A pointer points to an object of a specified type.
- A pointer does not know how many elements it points to. }
For more info on void*, goto Pointers II- void Pointers.
When a pointer is first allocated, it does not have an object it is pointing to. The pointer is said to be uninitialized or simply bad.
A dereference operation on a bad pointer is a serious runtime error. If we are lucky, the dereference operation will crash or halt immediately. If we are unlucky, however, the bad pointer dereference will corrupt a random area of memory, slightly altering the operation of the program so that it goes wrong some indefinite time later.
Each pointer must be assigned an object before it can support dereference operations. Before that, the pointer is bad and must not be used..
Bad pointers are very common, and actually every pointer starts out with a bad value. Correct code overwrites the bad value with a correct reference to an object, and thereafter the pointer works fine.
There is nothing automatic that gives a pointer a valid object. Instead, it is very easy to omit this crucial step. We just have to code carefully. If our code is crashing, a bad pointer should be our first suspicion.
Why is it so often the case that programmers will allocate a pointer, but forget to set it to
refer to an object?
The rules for pointers don't seem that complex, yet every programmer makes this error repeatedly. Why? The problem is that we are trained by the tools we use. Simple variables don't require any extra setup. We can allocate a simple variable, such as int, and use it immediately. The built-in variable may be used once it is declared.
However, pointers look like simple variables but they require the extra initialization before we use them. It's unfortunate, in a way, that pointers happen look like other variables, since it makes it easy to forget that the rules for their use are very different.
Pointers in dynamic languages such as LISP and Java work a little differently. The run-time system sets each pointer to NULL when it is allocated and checks it each time it is dereferenced so that/C++.
For simple pointers and arrays, go to Pointers and Arrays.
C does not have true multidimensional arrays. However, we can have arrays of arrays. In other words, a two-dimensional array is actually a one-dimensional array, each of whose element is an array. It should be written as:
array[i][j] /* [row][[col] */
not
array[i,j] /* error */
The elements are stored by rows, so the columns varies fastest as elements are accessed in storage order. For example, array[2][3] is stored like this:
array[0][0], array[0][1], array[0][2], array[1][0], array[1][1], array[1][2]
To access a value in a multidimensional array, the computer treats each subscript as accessing another subarray within the multidimensional array. For example, in the two-by-three grid, the expression array[0] actually refers to the subarray highlighted in the picture above. When we add a second subscript, such as array[0][2], the computer is able to access the correct element by looking up the second subscript within the subarray
For the following 2-D array:
int a[][6] = { {2,5,7,9,14,16}, {3,6,8,10,15,21}, {4,7,9,15,22,35}, {7,8,9,22,40,58} };
We can get several info on that array:
# of elements size: sizeof(a)/sizeof(int) = 24 # of columns: sizeof(*a)/sizeof(int) = 6 # of columns: sizeof(a[0])/sizeof(int) = 6 # of rows: sizeof(a)/sizeof(a[0]) = 4
If we need to determine the dimensions of a multidimensional array at run time, we can use a heap_based array. Just as a single-dimensional dynamically allocated array is accessed through a pointer, a multidimensional dynamically allocated array is also accessed through a pointer.
int** allocateArray(int iDim, int jDim) { int** pArray = new int*[iDim]; // Allocate first dimension for (int i = 0; i < iDim; i++) { pArray[i] = new int[jDim]; // Allocate ith subarray } return pArray; } void releaseArray(int** pArray, int iDim) { for (int i = 0; i < iDim; i++) { delete[] pArray[i]; // Delete ith subarray } delete[] pArray; // Delete first dimension } int main(int argc, char** argv) { int iSize = 5, jSize = 2; int **ptrArray = allocateArray(iSize,jSize); releaseArray(ptrArray, iSize); }
We must start by allocating a single contiguous array for the first subscript dimension of a heap-based array. Each element of that array is actually a pointer to another array that stores the elements for the second subscript dimension. This layout for a five-by-two dynamically allocated array is shown in the picture below.
We can allocate the first dimension array just like a single-dimensional heap-based array. Because the compiler doesn't allocate memory for the subarrays for us, the individual subarrays must be explicitly allocated as shown in the code above.
When a two-dimensional array is passed as an argument, the base address of the caller's array is again sent to the function. However, we cannot leave off the sizes of both of the array dimensions. we can omit the size of the first dimension (the number of rows), but not the second (the number of columns).
This is because what we're passing is a pointer to an array of rows. In other words, in the computer's memory, C++ stores two-dimensional arrays in row order. Thinking of memory as one long line of memory cells, the first row of the array is followed by the second row, which is followed by the third, and so on.
To locate array[1][0] in the picture above, a function that receives array's base address must be able to determine that there are three elements in each row. In other words, function needs to know that the array has three columns. Therefore, the declaration of a parameter must always state the number of columns:
void f(int array[][3]) { ... }
For example, if we pass an array a[5][2] to a function as in the picture below, we do this:
caller
f(a);
The function declaration, then looks like this:
f(int a[5][2]){...}
or since we are passing a pointer to objects that are arrays of 2 integers, we can use the following:
f(int a[][2]){...}
Since a[] is equivalent to (*a), we can also use:
f(int (*a)[2]){...}
By the same argument that applies in the single-dimension case, the function does not have to know how big the array a is. However, it does need to know what a is an array of. It is not enough to know that a is an array of some arrays. The function must know that a is an array of arrays of 2 integers.
So, in multi-dimensional array, only the first dimension may be missing.
Let's look at the array subscription of the following array:
int a[3][5][7];
It declares an array of integers, with 3x5x7 rank. Actually, a is an array of three things: each thing is an array of five arrays; each of the latter arrays is an array of seven integers.
The following expressions are all legal:
- a
- a[i] - an array of 5 arrays of 7 integers
- a[i][j] - an array of 7 integers
- a[i][j][k]
The array subscripting operation is defined so that E1[E2] is identical to *(E1+E2). So, despite its asymmetric appearance, subscripting is a commutative operation. Because of the conversion rules that apply to + and to arrays, if E1 is an array and E2 an integer, the E1[E2] refers to the E2-th member of E. (from K&R;)
So, a[i][j][k] is equivalent to *(a[i][j]+k). The first subexpression a[i][j] is converted to type pointer to array of integers. The addition (k) involves multiplication by the size of an integer. It follows from the rules that arrays are stored by rows and that the first subscript in the declaration helps determine the amount of storage consumed by an array, buy plays no role in subscript calculation. (also from K&R;).
#include <iostream> using namespace std; const int N = 3; void matmul(int a[][N], int b[][N], int c[][N]) { for(int i = 0; i < N; i++) { for(int j = 0; j < N; j++) { for(int k = 0; k < N; k++) { c[i][j] = c[i][j] + a[i][k]*b[k][j]; } } } } int main() { int a[][N] = {{1,2,3}, {4,5,6}, {7,8,9}}; int b[][N] = {{9,8,7}, {6,5,4}, {3,2,1}}; int c[N][N] = {0}; matmul(a, b, c); /* C = {{30,24,18}, {84,69,54}, {138,114,90}} */ return 0; }
Though a multi-dimensional array is actually one-dimensional, we can treat it as a real multi-dimensional. The following example shows how we pass the array to a function:
void matrix_multiplication(const int a[][3], int row_a, const int (*b)[4], int row_b, int c[2][4]) { for(int i = 0; i < row_a; i++) for(int j = 0; j < 4; j++) for(int k = 0; k < row_b; k++) c[i][j] += a[i][k]*b[k][j]; } int main() { int a[2][3]={{6,5,-2},{-1,-4,0}}; int b[][4]={{6,1,7,3},{2,4,-1,5},{8,9,10,11}}; int c[2][4] = {0}; int row_a = 2; int row_b = 3; matrix_multiplication(a, row_a, b, row_b, c); /* c = { 30 8 17 21 -14 -17 -3 -23 } */ return 0; }
Here, we have the resulting array from Visual Studio by typing in ((int*)c),8 into the watch tab window, where c is the pointer and 8 is number of elements. If we only want to see the last 4 elements, we do ((int*)c+4),4.
Just for reference, the following example is using vector instead of array:
; }
- char **argv
argv: a pointer to pointer to char
- int (*ptr)[10]
ptr: a pointer to an array of 10 integers
- int *ptr[10]
ptr: an array of 10 pointers to integer
- void *comp()
comp: a function returning pointer to void
- void (*comp)()
comp: a pointer to function returning void
#include <iostream> #include <vector> #define N 5 using namespace std; int main() { // array of integers int *a = (int*)malloc(sizeof(int)*N); for(int i = 0; i < N; i++) a[i] = i; // array of pointers to integer int arr[] = {0,1,2,3,4}; int **ap = (int**)malloc(sizeof(int*)*N); for(int i = 0; i < N; i++) ap[i] = &arr;[i]; // vector of pointers to integer vector<int *> apv = vector<int *>(N); for(int i = 0; i < N; i++) apv[i] = &arr;[i]; return 0; }
| http://www.bogotobogo.com/cplusplus/pointers3_function_multidimensional_arrays.php | CC-MAIN-2017-34 | refinedweb | 3,962 | 57.81 |
In this section we're going to look at just one method, which should tell you immediately that this is a complicated method. This method is called
createEnemy(), and is responsible for launching either a penguin or a bomb into the air for the player to swipe. That's it – that's all it does. And yet it's going to take quite a lot of code because it takes quite a lot of functionality in order to make the game complete:
It should be obvious that 3) relies on 2) – if you create something on the left edge of the screen, having it move to the left would make the game impossible for players!
An additional complexity is that in the early stages of the game we sometimes want to force a bomb, and sometimes force a penguin, in order to build a smooth learning curve. For example, it wouldn't be fair to make the very first enemy a bomb, because the player would swipe it and lose immediately.
We're going to specify what kind of enemy we want using an enum. You've used enums already (not least in project 2), but you've never created one before. To make
createEnemy() work, we need to declare a new enum that tracks what kind of enemy should be created: should we force a bomb always, should we force a bomb never, or use the default randomization?
Add this above your class definition in GameScene.swift:
enum ForceBomb { case never, always, random }
You can now use those values in your code, for example like this:
if forceBomb == .never { enemyType = 1 } else if forceBomb == .always { enemyType = 0 }
OK, it's time to start looking at the
createEnemy() method. I say "start" because we're going to look at it in three passes: the code required to create bombs, the code to position enemies and set up their physics, and the code required to do everything else. Your code probably won't run until all three parts are in place, so don't worry!
We're going to need to track enemies that are currently active in the scene, so please add this array as a property of your class:
var activeEnemies = [SKSpriteNode]()
And now let's look at the core of the
createEnemy() method. It needs to:
activeEnemiesarray.
That's it. Not too much, I hope. To decide whether to create a bomb or a player, I'll choose a random number from 0 to 6, and consider 0 to mean "bomb". Here's the code:
func createEnemy(forceBomb: ForceBomb = .random) { let enemy: SKSpriteNode var enemyType = Int.random(in: 0...6) if forceBomb == .never { enemyType = 1 } else if forceBomb == .always { enemyType = 0 } if enemyType == 0 { // bomb code goes here } else { enemy = SKSpriteNode(imageNamed: "penguin") run(SKAction.playSoundFileNamed("launch.caf", waitForCompletion: false)) enemy.name = "enemy" } // position code goes here addChild(enemy) activeEnemies.append(enemy) }
Note: Xcode will show you a compiler error for now, but don’t worry – we’re going to fix it.
There's nothing complicated in there, but I have taken out two fairly meaty chunks of code. That
// position code goes here comment masks a lot of missing functionality that really makes the game come alive, so we're going to fill that in now.
I'm going to use numbered comments again so you can see exactly how this code matches up with what it should do. So, here is what that missing position code needs to do:
collisionBitMaskis set to 0 so they don't collide.
The only thing that might catch you out in the actual code is my use of magic numbers, which is what programmers call seemingly random (but actually important) numbers appearing in code. Ideally you don't want these, because it's better to make them constants with names, but then how would I be able to give you any homework?
Turning those five points into code is easy enough – just replace the
// position code goes here with this:
// 1 let randomPosition = CGPoint(x: Int.random(in: 64...960), y: -128) enemy.position = randomPosition // 2 let randomAngularVelocity = CGFloat.random(in: -3...3 ) let randomXVelocity: Int // 3 if randomPosition.x < 256 { randomXVelocity = Int.random(in: 8...15) } else if randomPosition.x < 512 { randomXVelocity = Int.random(in: 3...5) } else if randomPosition.x < 768 { randomXVelocity = -Int.random(in: 3...5) } else { randomXVelocity = -Int.random(in: 8...15) } // 4 let randomYVelocity = Int.random(in: 24...32) // 5 enemy.physicsBody = SKPhysicsBody(circleOfRadius: 64) enemy.physicsBody?.velocity = CGVector(dx: randomXVelocity * 40, dy: randomYVelocity * 40) enemy.physicsBody?.angularVelocity = randomAngularVelocity enemy.physicsBody?.collisionBitMask = 0
The last missing part of the
createEnemy() method is about creating bombs, and I've left it separate because it requires some thinking. A "bomb" node in our game is actually going to be made up of three parts: the bomb image, a bomb fuse particle emitter, and a container that puts the two together so we can move and spin them around together.
The reason we need to keep the bomb image and bomb fuse separate is because tapping on a bomb is a fatal move that causes the player to lose all their lives immediately. If the fuse particle emitter were inside the bomb image, then the user could accidentally tap a stray fuse particle and lose unfairly.
As a reminder, we're going to force the Z position of bombs to be 1, which is higher than the default value of 0. This is so that bombs always appear in front of penguins, because hours of play testing has made it clear to me that it's awful if you don't realize there's a bomb lurking behind something when you swipe it!
Creating a bomb also needs to play a fuse sound, but that has its own complexity. You've already seen that
SKAction has a very simple way to play sounds, but it's so simple that it's not useful here because we want to be able to stop the sound and
SKAction sounds don't let you do that. It would be confusing for the fuse sound to be playing when no bombs are visible, so we need a better solution.
That solution is called
AVAudioPlayer, and it's not a SpriteKit class – it's available to use in your UIKit apps too if you want. We're going to have an
AVAudioPlayer property for our class that will store a sound just for bomb fuses so that we can stop it as needed.
Let's put numbers to the tasks this chunk of code needs to perform:
SKSpriteNodethat will hold the fuse and the bomb image as children, setting its Z position to be 1.
That's all you need to know in order to continue. We need to start by importing the AVFoundation framework, so add this line now next to
import SpriteKit:
import AVFoundation
You'll also need to declare the
bombSoundEffect property, so put this just after the declaration of
isSwooshSoundActive:
var bombSoundEffect: AVAudioPlayer?
Now for the real work. Please replace the
// bomb code goes here comment with this, watching out for my numbered comments to help you match code against meaning:
// 1 enemy = SKSpriteNode() enemy.zPosition = 1 enemy.name = "bombContainer" // 2 let bombImage = SKSpriteNode(imageNamed: "sliceBomb") bombImage.name = "bomb" enemy.addChild(bombImage) // 3 if bombSoundEffect != nil { bombSoundEffect?.stop() bombSoundEffect = nil } // 4 if let path = Bundle.main.url(forResource: "sliceBombFuse", withExtension: "caf") { if let sound = try? AVAudioPlayer(contentsOf: path) { bombSoundEffect = sound sound.play() } } // 5 if let emitter = SKEmitterNode(fileNamed: "sliceFuse") { emitter.position = CGPoint(x: 76, y: 64) enemy.addChild(emitter) }
After all that work, you're almost done with bombs. But there's one small bug that we can either fix now or fix when you can see it, but we might as well fix it now because your brain is thinking about all that bomb code.
The bug is this: we're using
AVAudioPlayer so that we can stop the bomb fuse when bombs are no longer on the screen. But where do we actually stop the sound? Well, we don't yet – but we need to.
To fix the bug, we need to modify the
update() method, which is something we haven't touched before – in fact, so far we’ve just been deleting it! This method is called every frame before it's drawn, and gives you a chance to update your game state as you want. We're going to use this method to count the number of bomb containers that exist in our game, and stop the fuse sound if the answer is 0.
Change your
update() method to this:
override func update(_ currentTime: TimeInterval) { var bombCount = 0 for node in activeEnemies { if node.name == "bombContainer" { bombCount += 1 break } } if bombCount == 0 { // no bombs – stop the fuse sound! bombSoundEffect?.stop() bombSoundEffect = nil } }
Sponsor Hacking with Swift and reach the world's largest Swift community!
Link copied to your pasteboard. | https://www.hackingwithswift.com/read/23/4/enemy-or-bomb-avaudioplayer | CC-MAIN-2020-29 | refinedweb | 1,495 | 63.49 |
Software Architecture 101: What Makes it Good?
Introduction
So what is software architecture and why should you care? In this article, I hope to explore this idea and show you the benefits of good software structure and design. This article is intended for programming students or professionals with experience with game programming. I will be using C# as the demonstration language and Unity will be our reference Game Engine. Strictly speaking, this article will be as agnostic as possible to both—the main objective here is to explain what makes good architecture and what having good architecture can do for you and your projects. Perhaps after learning more about software architecture can even help you transition to becoming a software developer. So let’s get started.
What is Software Architecture
Unity is a fantastic game engine, however, the approach that new developers are encouraged to take does not lend itself well to writing large, flexible, or scalable code bases. This can apply to nearly all the major Game Engines. In particular, the default way that Unity manages dependencies between different game components can often be awkward and error prone. What we can do to prevent this outlines our project scope early on and using what we know from this stage, plan out a software design that will conform to our client and project needs.
One of the best truths I have learned from software development has to be that not even the client will know what they want. Generally, I find I could be given a list of must haves one week and by the following week, half of these might be the latest cuts from a project. Using a software design pattern can help mitigate the effects of drastic code base changes provided you are thinking about the client’s needs and have some grasp on the domain in which you are working.
(Watch: Best Practices in iOS Game Development & Architecture)
What is Good Software
Before we begin worrying about design principles, it would be good to start here and define what it is we are looking for. Software Architecture is pointless if we are not leveraging it to support our goals. And before we can leverage it, we need to know what is good software.
- Good software is functional. If any piece of software isn’t able to execute its core functionality then it’s useless.
- Good software is robust.. Usually, the best measures are how the software can facilitate the business needs. A good measure for a UI is how long does it take to load or react to an interaction.
- Good software is debuggable. This doesn’t mean being able to log everything for the heck of it but being able to bulk dump debug on demand can be very handy.
- Good software is maintainable. A software can be easy to maintain if it has consistent styling, good comments, is modular, etc. In fact, there is a lot of literature on good software design that just focuses on design principles that make it easy to make changes to parts of the software without breaking its functionality.
- Good software is reusable. Generalizing a solution can be hard and time-consuming. Obviously, we are all on deadlines so unless you are absolutely sure that you are going to reuse this piece of functionality elsewhere, you can time-bound the effort of making it reusable.
- Good software is extensible. Usually, the conversation starts with – “but suppose that tomorrow somebody wants to add X here…” software should be written with extension in mind, these extensions should be thought of in the most general of fashion. Like the general copy command on all OSs, it doesn’t care where you’re copying to or from these are extensions to the original program, making it immeasurably more valuable.
Good Class Structure
A. Single Responsibility
A responsibility is a reason to change. A class should have one, and only one reason to change. As an example, consider a class that compiles and prints a report. Imagine such a class. It would be a bad design to couple two things that change for different reasons at different times.
The reason it is important to keep a class focused on a single concern is that it makes the class more robust. Continuing with that example, if there is a change to the report compilation process, there is a greater danger that the printing code will break if it is part of the same class.
B. Interfaces
An interface declares a contract. Implementing an interface enforces that your class will be bound to the contract (by providing the appropriate members). Consequently, everything that relies on that contract can work with your object, too. In other words, if you “Program to an Interface, not an Implementation” then you can inject different objects which share the same interface(type) into the method as an argument. This way, your method code is not coupled with any implementation of another class, which means it’s always open to working with newly created objects of the same interface. This—you will learn later—has major benefits such as conforming to the (Open/Closed) principle.
A common misconception that people new to Interfaces have is extracting interfaces from every class and using those interfaces everywhere instead of using the class directly. The goal is to make the code more loosely coupled, so it’s reasonable to think that being bound to an interface is better than being bound to a concrete class. However, in most cases the various responsibilities of an application have single, specific classes implementing them, so using interfaces in these cases just adds unnecessary maintenance overhead. Also, concrete classes already have an interface defined by their public members. A good rule of thumb instead is to only create interfaces when the class has more than one implementation.
What is Dependency Injection?
Dependency Inject is part of SOLID principles. In basic terms, it means resolving a class’s dependencies as late as possible. So what does that mean? DI isn’t the easiest principal to grasp but it is definitely a big step up in software design once you can understand it. I’m going to try to give a general example then we can look at specific implementations.
Let’s imagine you have a Game class. Let’s also imagine in this Game class you want to send an Email or any such service. A common approach would be to new up an EmailService directly inside the Game class.
public class Game { public void Main() { var emailService = new EmailService(); emailService.DoSomething(); } }
What are the issues here? Are there issues here? The answers to these questions can vary wildly. It all depends on the context in which you are creating your game. If you are rapid prototyping this approach, this is perfectly valid as it does exactly what is needed. But would this suit a long term project, with a full development team? No. The reason this wouldn’t suit is the concreteness with which we have implemented our EmailService into our Game class. Later on during development, if one of the Team members needs to edit the EmailService, they could break the functionality inside the Game class without ever knowing until compile time. We would say the EmailService is tightly coupled to the Game class. I’m going to try and generalize what we’ve discussed here with an implementation example.
When writing an individual class to achieve some functionality, it will likely need to interact with other classes in the system to achieve its goals. One way to do this is to have the class itself create its dependencies, by calling concrete constructors.
public class Foo { IService = _service; public Foo() { _service = new Service(); } public void DoSomething() { _service.DoSomething(); } }
This works fine for small projects, but as your project grows, it starts to get unwieldy. The class Foo is tightly coupled to class ‘Service’. If we decide later that we want to use a different concrete implementation, then we have to go back into the Foo class to change it. After thinking about this, often you come to the realization that ultimately, Foo shouldn’t bother itself with the details of choosing the specific implementation of the service. All Foo should care about is fulfilling its own specific responsibilities. As long as the service fulfills the abstract interface required by Foo, Foo is happy. Our class then becomes:
public class Foo { IService = _service; public Foo(IService service) { _service = service; } public void DoSomething() { _service.DoSomething(); } }
This is better, but now whatever class is creating Foo (let’s call it Bar) has the problem of filling in Foo’s extra dependencies:
public class Bar { public void DoSomething() { var foo = new Foo(new Service()); foo.DoSomething(); } }
And class Bar probably also doesn’t really care about what specific implementation of Service Foo uses. Therefore, we push the dependency up again:
public class Bar { IService _service; public Bar(IService service) { _service = service; } public void DoSomething() { var foo = new Foo(_service); foo.DoSomething(); } }
So we find that it is useful to push the responsibility of deciding which specific implementations of which classes to use further and further up in the ‘object graph’ of the application. Taking this to an extreme, we arrive at the entry point of the application, at which point all dependencies must be satisfied before things start. The dependency injection lingo for this part of the application is called the ‘composition root’. It would normally look like this:
var service = new Service(); var foo = new Foo(service); var bar = new Bar(foo);
Benefits of Dependency Injection
There are many misconceptions about DI, due to the fact that it can be tricky to fully wrap your head around at first. I found it can take time and experience before it fully sinks in. As shown in the example above, DI can be used to easily swap different implementations of a given interface (in the example, this was ISomeService). However, this is only one of the many benefits that DI offers.
- Testability: Writing automated unit tests or user-driven tests becomes very easy because it is just a matter of writing a different ‘composition root’ which wires up the dependencies in a different way. Want to only test one subsystem? Simply create a new composition root.
- Refactorability: When the code is loosely coupled, as is the case when using DI properly, the entire code base is much more resilient to changes. You can completely change parts of the code base without having those changes wreak havoc on other parts.
- Encourages modular code: When using DI, you will naturally follow better design practices because it forces you to think about the contract between classes.
Drawbacks
For small enough projects, I would agree with you that using a global singleton might be easier and less complicated. But as your project grows in size, using global singletons will make your code unwieldy. Good code is basically synonymous with loosely coupled code, and to write loosely coupled code you need to:
(A) actually be aware of the dependencies between classes and
(B) code to interfaces (however, I don’t literally mean to use interfaces everywhere)
In terms of (A), when using global singletons, it’s not obvious at all what depends on what. And over time, your code will become really convoluted, as everything will tend towards depending on everything. There could always be some method somewhere deep in a call stack that does some hail mary request to some other class anywhere in your code base. In terms of (B), you can’t really code to interfaces with global singletons because you’re always referring to a concrete class.
With Dependency Injection, in terms of (A), it’s a bit more work to declare the dependencies you need up-front in your constructor, but this can be a good thing too because it forces you to be aware of the dependencies between classes.
And in terms of (B), it also forces you to code to interfaces. By declaring all your dependencies as constructor parameters, you are basically saying “in order for me to do X, I need these contracts fulfilled”. These constructor parameters might not actually be interfaces or abstract classes, but it doesn’t matter—in an abstract sense, they are still contracts, which isn’t the case when you are creating them within the class or using global singletons.
Then the result will be more loosely coupled code, which will make it 100x easier to refactor, maintain, test, understand, reuse, etc.
Let’s take an Example
Let us suppose we want to take user input into our game, we are currently developing on Desktop/Laptop machine but our game is intended for mobile. As you can probably already see there’s going to be some difficulty here when it comes to testing locally on the development machine and deploying to the mobile build as both are going to have independent ways to getting the user input. On the development machine, we will have a keyboard and mouse but on the mobile device, we might only have the touch screen for user input. How do we get around having two different user inputs and what’s the best way to manage this dual input in code? We might begin our input class like so.
public class UserInput { public float GetHorizontalAxis() { return Input.GetAxis(“Horizontal”); } }
So everything is perfect so far. We have our class that can manage user input that we can inject into our other classes as a dependency if need be. What would happen now if we want to mobile input?
Well from a pragmatic point of view the best option here would be to implement an interface first. We have multiple instances of user input both the Desktop Development case and the mobile case. Our interface could look something like this.
public interface IUserInput { float GetHorizontalAxis(); }
But how do we handle this in the dependent class? Rather than writing to a specific concrete class we can now write to an interface as the functionality is going to be guaranteed by that interface. So we can now change the dependent class to use the interface. So for example.
public class MyGame { private UserInput _userInput; public MyGame(UserInput userInput) { _userInput = userInput; } }
Would become:
public class MyGame { private IUserInput _userInput; public MyGame(IUserInput userInput) { _userInput = userInput; } }
Our previous UserInput class can now be updated like so.
public class DevUserInput : IUserInput { public float GetHorizontalAxis() { return Input.GetAxis(“Horizontal”); } }
Now when it comes to writing our extended functionality for our mobile case, we will have no problem in the implementation. Now that everything is conforming to our IUserInput, contract all we have to do now is let our new mobile user input class implement IUserInput as an interface.
public class MobileUserInput : IUserInput { public float GetHorizontalAxis() { (Specific functionality here...) } }
This is where we can see the true power of polymorphism at work now. We have two independent classes that will handle two very different use cases of the same problem. We have let both these concrete classes implement a common interface, this will now let us change between them without any hassle to the rest of the program. For example, in our game setup, we could do the following:
var myGame = MyGame(MobileUserInput); // For our mobile implementation. var myGame = MyGame(DevUserInput); // For our development implementation.
So what have we learned
Hopefully, this has given you a taste for software design patterns and good principals. And when correctly used, these can help support a large, complex, and collaborative code bases. We looked at how a lot of software developers and programmers using Unity work, we talked about what can go wrong with some of the drawbacks—but also the benefits. As with any system or way of working, you will always encounter trade-offs, it is helpful to fully understand all possible implications before making a move.
Before we could look into architecture, though, we had to talk about good software, we looked at the characteristics of good software and explained how and why these are important. This then gave us a solid basis from which to build our architecture, once we’d established what we are trying to achieve. This fed us into Single Responsibility, the first principle of SOLID. We discussed single responsibility, trying to make it absolutely clear what a single function is and how we can recognize that. We looked at how it can be easy to confuse what we would instinctively see as a single object is actually a group of functionalities. We then looked at why it is better to break classes up by functionality, so change can only impact on a single functional basis. As I showed, this narrows down any single points of failure.
I explained what an Interface is, basically a glorified abstract class. We looked at the contract view of implementing an Interface as well as the benefits this can deliver. We also looked at over interfacing code. If a requirement doesn’t have more than one implementation, writing an interface for that class will only serve to add a code overhead—and this isn’t a pragmatic solution. This led us into Dependency Injection, were first, we looked at the common problems faced by large and complex code bases, most importantly, coupled classes. I explained how by using a DI approach, we can minimize coupling and code rigidity—the benefits this offers but also again the drawbacks. We saw how our code base would become flexible, testable, and refactorable. We also saw how in certain situations; this is actually a drawback.
The key thing is to analyze what your main goals are and to find the best project architecture that will support this. You have to think through the possible routes a project could take during development. If you can correctly identify this, picking a supportive architecture becomes a lot easier. This then, in turn, becomes the underlying structure which will support the convention, code norms, and give direction to any new members of the project. I hope looking at some of the most common approaches has helped you. But mostly, I hope it has helped shape your view of architecture and made it more relevant for your next project.
I hope this tutorial will help you to become a better software developer. You should also consider looking into software development tools and other tutorials to help you in your journey.
Author’s Bio
Stephen is a Software Developer with GameSparks, he has a passion for game development having completed his studies with Pulse College Dublin. He has previously worked to develop solutions for enterprise and is currently studying Information Systems with Trinity College Dublin.
Or Become a Codementor!
Codementor is your live 1:1 expert mentor helping you in real time. | https://www.codementor.io/learn-development/what-makes-good-software-architecture-101 | CC-MAIN-2017-39 | refinedweb | 3,144 | 52.8 |
Time zones¶
Overview¶ user’s wall clock.
Even if your website is available in only one time zone, it’s still good practice to store data in UTC in your database. The main reason is Daylight Saving Time (DST). Many countries have a system of DST, where clocks are moved forward in spring and backward in autumn. If you’re working in local time, you’re likely to encounter errors twice a year, when the transitions happen. (The pytz documentation discusses these issues in greater detail.) This probably doesn’t matter for your blog, but it’s a problem if you over-bill or under-bill your customers by one hour, twice a year, every year. The solution to this problem is to use UTC in the code and use local time only when interacting with end users.
Time zone support is disabled by default. To enable it, set
USE_TZ =
True in your settings file. Time zone support uses pytz, which is
installed when you install Django.
Older versions don’t require
pytz or install it automatically.
Note
The default
settings.py file created by
django-admin
startproject includes
USE_TZ = True
for convenience.
Note
There is also an independent but related
USE_L10N setting that
controls whether Django should activate format localization. See
Format localization for more details.
If you’re wrestling with a particular problem, start with the time zone FAQ.
is_aware() and
is_naive() to determine whether datetimes are
aware or naive.
When time zone support is disabled, Django uses naive datetime objects in local time. This is simple and sufficient for many use cases. In this mode, to obtain the current time, you would write:
import datetime now = datetime.datetime.now()
When time zone support is enabled (
USE_TZ=True), Django uses
time-zone-aware datetime objects. If your code creates datetime objects, they
should be aware too. In this mode, the example above becomes:
from django.utils import timezone now = timezone.now()
Warning
Dealing with aware datetime objects isn’t always intuitive. For instance,
the
tzinfo argument of the standard datetime constructor doesn’t work
reliably for time zones with DST. Using UTC is generally safe; if you’re
using other time zones, you should review the pytz documentation
carefully.
Note
Python’s
datetime.time objects also feature a
tzinfo
attribute, and PostgreSQL has a matching
time with time zone type.
However, as PostgreSQL’s docs put it, this type “exhibits properties which
lead to questionable usefulness”.
Django only supports naive time objects and will raise an exception if you attempt to save an aware time object, as a timezone for a time with no associated date does not make sense.
Interpretation of naive datetime objects¶
When
USE_TZ is
True, Django still accepts naive datetime
objects, in order to preserve backwards-compatibility. When the database layer
receives one, it attempts to make it aware by interpreting it in the
default time zone and raises a warning.
Unfortunately, during DST transitions, some datetimes don’t exist or are ambiguous. In such situations, pytz raises an exception. That’s why you should always create aware datetime objects when time zone support is enabled.
In practice, this is rarely an issue. Django gives you aware datetime objects
in the models and forms, and most often, new datetime objects are created from
existing ones through
timedelta arithmetic. The only
datetime that’s often created in application code is the current time, and
timezone.now() automatically does the
right thing.
Default time zone and current time zone¶
The default time zone is the time zone defined by the
TIME_ZONE
setting.
The current time zone is the time zone that’s used for rendering.
You should set the current time zone to the end user’s actual time zone with
activate(). Otherwise, the default time zone is
used.
Note
As explained in the documentation of
TIME_ZONE, Django sets
environment variables so that its process runs in the default time zone.
This happens regardless of the value of
USE_TZ and of the
current time zone.
When
USE_TZ is
True, this is useful to preserve
backwards-compatibility with applications that still rely on local time.
However, as explained above, this isn’t
entirely reliable, and you should always work with aware datetimes in UTC
in your own code. For instance, use
fromtimestamp()
and set the
tz parameter to
utc.
Selecting the current time zone¶
The current time zone is the equivalent of the current locale for translations. However, there’s no equivalent of the
Accept-Language HTTP header that Django could use to determine the user’s
time zone automatically. Instead, Django provides time zone selection
functions. Use them to build the time zone
selection logic that makes sense for you.
Most websites that care about time zones just ask users in which time zone they live and store this information in the user’s profile. For anonymous users, they use the time zone of their primary audience or UTC. pytz provides helpers, like a list of time zones per country, that you can use to pre-select the most likely choices.
Here’s an example that stores the current timezone in the session. (It skips error handling entirely for the sake of simplicity.)
Add the following middleware to
MIDDLEWARE:
import pytz from django.utils import timezone from django.utils.deprecation import MiddlewareMixin class TimezoneMiddleware(MiddlewareMixin): def process_request(self, request): tzname = request.session.get('django_timezone') if tzname: timezone.activate(pytz.timezone(tzname)) else: timezone.deactivate()
Create a view that can set the current timezone:
from django.shortcuts import redirect, render def set_timezone(request): if request.method == 'POST': request.session['django_timezone'] = request.POST['timezone'] return redirect('/') else: return render(request, 'template.html', {'timezones': pytz.common_timezones})
Include a form in
template.html that will
POST to this view:
{% load tz %} {% get_current_timezone as TIME_ZONE %} <form action="{% url 'set_timezone' %}" method="POST"> {% csrf_token %} <label for="timezone">Time zone:</label> <select name="timezone"> {% for tz in timezones %} <option value="{{ tz }}"{% if tz == TIME_ZONE %} selected{% endif %}>{{ tz }}</option> {% endfor %} </select> <input type="submit" value="Set" /> </form>
Time zone aware input in forms¶
When you enable time zone support, Django interprets datetimes entered in
forms in the current time zone and returns
aware datetime objects in
cleaned_data.
If the current time zone raises an exception for datetimes that don’t exist or are ambiguous because they fall in a DST transition (the timezones provided by pytz do this), such datetimes will be reported as invalid values.
Time zone aware output in templates¶
When you enable time zone support, Django converts aware datetime objects to the current time zone when they’re rendered in templates. This behaves very much like format localization.
Warning
Django doesn’t convert naive datetime objects, because they could be ambiguous, and because your code should never produce naive datetimes when time zone support is enabled. However, you can force conversion with the template filters described below.
Conversion to local time isn’t always appropriate – you may be generating
output for computers rather than for humans. The following filters and tags,
provided by the
tz template tag library, allow you to control the time zone
conversions.
Template filters¶
These filters accept both aware and naive datetimes. For conversion purposes, they assume that naive datetimes are in the default time zone. They always return aware datetimes.
localtime¶
Forces conversion of a single value to the current time zone.
For example:
{% load tz %} {{ value|localtime }}
Migration guide¶
Here’s how to migrate a project that was started before Django supported time zones.
Database¶
PostgreSQL¶
The PostgreSQL backend stores datetimes as
timestamp with time zone. In
practice, this means it converts datetimes from the connection’s time zone to
UTC on storage, and from UTC to the connection’s time zone on retrieval.
As a consequence, if you’re using PostgreSQL, you can switch between
USE_TZ
= False and
USE_TZ = True freely. The database connection’s time zone
will be set to
TIME_ZONE or
UTC respectively, so that Django
obtains correct datetimes in all cases. You don’t need to perform any data
conversions.
Code¶
The first step is to add
USE_TZ = True to your settings
file. At this point, things should mostly work. If you create naive datetime
objects in your code, Django makes them aware when necessary.
However, these conversions may fail around DST transitions, which means you aren’t getting the full benefits of time zone support yet. Also, you’re likely to run into a few problems because it’s impossible to compare a naive datetime with an aware datetime. Since Django now gives you aware datetimes, you’ll get exceptions wherever you compare a datetime that comes from a model or a form with a naive datetime that you’ve created in your code.
So the second step is to refactor your code wherever you instantiate datetime
objects to make them aware. This can be done incrementally.
django.utils.timezone defines some handy helpers for compatibility
code:
now(),
is_aware(),
is_naive(),
make_aware(), and
make_naive().
Finally, in order to help you locate code that needs upgrading, Django raises a warning when you attempt to save a naive datetime to the database:
RuntimeWarning: DateTimeField ModelName.field_name received a naive datetime (2012-01-01 00:00:00) while time zone support is active.
During development, you can turn such warnings into exceptions and get a traceback by adding the following to your settings file:
import warnings warnings.filterwarnings( 'error', r"DateTimeField .* received a naive datetime", RuntimeWarning, r'django\.db\.models\.fields', )
Fixtures¶
When serializing an aware datetime, the UTC offset is included, like this:
"2011-09-01T13:20:30+03:00"
For a naive datetime, it obviously isn’t:
"2011-09-01T13:20:30"
For models with
DateTimeFields, this difference
makes it impossible to write a fixture that works both with and without time
zone support.
Fixtures generated with
USE_TZ = False, or before Django 1.4, use the
“naive” format. If your project contains such fixtures, after you enable time
zone support, you’ll see
RuntimeWarnings when you load them. To get
rid of the warnings, you must convert your fixtures to the “aware” format.
You can regenerate fixtures with
loaddata then
dumpdata.
Or, if they’re small enough, you can simply edit them to add the UTC offset
that matches your
TIME_ZONE to each serialized datetime.
Setup¶
I don’t need multiple time zones. Should I enable time zone support?
Yes. When time zone support is enabled, Django uses a more accurate model of local time. This shields you from subtle and unreproducible bugs around Daylight Saving Time (DST) transitions.
In this regard, time zones are comparable to
unicodein Python. At first it’s hard. You get encoding and decoding errors. Then you learn the rules. And some problems disappear – you never get mangled output again when your application receives non-ASCII input.
When you enable time zone support, you’ll encounter some errors because you’re using naive datetimes where Django expects aware datetimes. Such errors show up when running tests and they’re easy to fix. You’ll quickly learn how to avoid invalid operations.
On the other hand, bugs caused by the lack of time zone support are much harder to prevent, diagnose and fix. Anything that involves scheduled tasks or datetime arithmetic is a candidate for subtle bugs that will bite you only once or twice a year.
For these reasons, time zone support is enabled by default in new projects, and you should keep it unless you have a very good reason not to.
I’ve enabled time zone support. Am I safe?
Maybe. You’re better protected from DST-related bugs, but you can still shoot yourself in the foot by carelessly turning naive datetimes into aware datetimes, and vice-versa.
If your application connects to other systems – for instance, if it queries a Web service – make sure datetimes are properly specified. To transmit datetimes safely, their representation should include the UTC offset, or their values should be in UTC (or both!).
Finally, our calendar system contains interesting traps for computers:
>>> import datetime >>> def one_year_before(value): # DON'T DO THAT! ... return value.replace(year=value.year - 1) >>> one_year_before(datetime.datetime(2012, 3, 1, 10, 0)) datetime.datetime(2011, 3, 1, 10, 0) >>> one_year_before(datetime.datetime(2012, 2, 29, 10, 0)) Traceback (most recent call last): ... ValueError: day is out of range for month
(To implement this function, you must decide whether 2012-02-29 minus one year is 2011-02-28 or 2011-03-01, which depends on your business requirements.)
How do I interact with a database that stores datetimes in local time?
Set the
TIME_ZONEoption to the appropriate time zone for this database in the
DATABASESsetting.
This is useful for connecting to a database that doesn’t support time zones and that isn’t managed by Django when
USE_TZis
True.
Troubleshooting¶
My application crashes with
TypeError: can't compare offset-naive
and offset-aware datetimes– what’s wrong?
Let’s reproduce this error by comparing a naive and an aware datetime:
>>> import datetime >>> from django.utils import timezone >>> naive = datetime.datetime.utcnow() >>> aware = timezone.now() >>> naive == aware Traceback (most recent call last): ... TypeError: can't compare offset-naive and offset-aware datetimes
If you encounter this error, most likely your code is comparing these two things:
- a datetime provided by Django – for instance, a value read from a form or a model field. Since you enabled time zone support, it’s aware.
- a datetime generated by your code, which is naive (or you wouldn’t be reading this).
Generally, the correct solution is to change your code to use an aware datetime instead.
If you’re writing a pluggable application that’s expected to work independently of the value of
USE_TZ, you may find
django.utils.timezone.now()useful. This function returns the current date and time as a naive datetime when
USE_TZ = Falseand as an aware datetime when
USE_TZ = True. You can add or subtract
datetime.timedeltaas needed.
I see lots of
RuntimeWarning: DateTimeField received a naive datetime
(YYYY-MM-DD HH:MM:SS)
while time zone support is active– is that bad?
When time zone support is enabled, the database layer expects to receive only aware datetimes from your code. This warning occurs when it receives a naive datetime. This indicates that you haven’t finished porting your code for time zone support. Please refer to the migration guide for tips on this process.
In the meantime, for backwards compatibility, the datetime is considered to be in the default time zone, which is generally what you expect.
now.date()is yesterday! (or tomorrow)
If you’ve always used naive datetimes, you probably believe that you can convert a datetime to a date by calling its
date()method. You also consider that a
dateis a lot like a
datetime, except that it’s less accurate.
None of this is true in a time zone aware environment:
>>> import datetime >>> import pytz >>> paris_tz = pytz.timezone("Europe/Paris") >>> new_york_tz = pytz.timezone("America/New_York") >>> paris = paris_tz.localize(datetime.datetime(2012, 3, 3, 1, 30)) # This is the correct way to convert between time zones with pytz. >>> new_york = new_york_tz.normalize(paris.astimezone(new_york_tz)) >>> paris == new_york, paris.date() == new_york.date() (True, False) >>> paris - new_york, paris.date() - new_york.date() (datetime.timedelta(0), datetime.timedelta(1)) >>> paris datetime.datetime(2012, 3, 3, 1, 30, tzinfo=<DstTzInfo 'Europe/Paris' CET+1:00:00 STD>) >>> new_york datetime.datetime(2012, 3, 2, 19, 30, tzinfo=<DstTzInfo 'America/New_York' EST-1 day, 19:00:00 STD>)
As this example shows, the same datetime has a different date, depending on the time zone in which it is represented. But the real problem is more fundamental.
A datetime represents a point in time. It’s absolute: it doesn’t depend on anything. On the contrary, a date is a calendaring concept. It’s a period of time whose bounds depend on the time zone in which the date is considered. As you can see, these two concepts are fundamentally different, and converting a datetime to a date isn’t a deterministic operation.
What does this mean in practice?
Generally, you should avoid converting a
datetimeto
date. For instance, you can use the
datetemplate filter to only show the date part of a datetime. This filter will convert the datetime into the current time zone before formatting it, ensuring the results appear correctly.
If you really need to do the conversion yourself, you must ensure the datetime is converted to the appropriate time zone first. Usually, this will be the current timezone:
>>> from django.utils import timezone >>> timezone.activate(pytz.timezone("Asia/Singapore")) # For this example, we just set the time zone to Singapore, but here's how # you would obtain the current time zone in the general case. >>> current_tz = timezone.get_current_timezone() # Again, this is the correct way to convert between time zones with pytz. >>> local = current_tz.normalize(paris.astimezone(current_tz)) >>> local datetime.datetime(2012, 3, 3, 8, 30, tzinfo=<DstTzInfo 'Asia/Singapore' SGT+8:00:00 STD>) >>> local.date() datetime.date(2012, 3, 3)
I get an error “
Are time zone definitions for your database installed?“
If you are using MySQL, see the Time zone definitions section of the MySQL notes for instructions on loading time zone definitions.
Usage¶
I have a string
"2012-02-21 10:28:45"and I know it’s in the
"Europe/Helsinki"time zone. How do I turn that into an aware datetime?
This is exactly what pytz is for.
>>> from django.utils.dateparse import parse_datetime >>> naive = parse_datetime("2012-02-21 10:28:45") >>> import pytz >>> pytz.timezone("Europe/Helsinki").localize(naive, is_dst=None) datetime.datetime(2012, 2, 21, 10, 28, 45, tzinfo=<DstTzInfo 'Europe/Helsinki' EET+2:00:00 STD>)
Note that
localizeis a pytz extension to the
tzinfoAPI. Also, you may want to catch
pytz.InvalidTimeError. The documentation of pytz contains more examples. You should review it before attempting to manipulate aware datetimes.
How can I obtain the local time in the current time zone?
Well, the first question is, do you really need to?
You should only use local time when you’re interacting with humans, and the template layer provides filters and tags to convert datetimes to the time zone of your choice.
Furthermore, Python knows how to compare aware datetimes, taking into account UTC offsets when necessary. It’s much easier (and possibly faster) to write all your model and view code in UTC. So, in most circumstances, the datetime in UTC returned by
django.utils.timezone.now()will be sufficient.
For the sake of completeness, though, if you really want the local time in the current time zone, here’s how you can obtain it:
>>> from django.utils import timezone >>> timezone.localtime(timezone.now()) datetime.datetime(2012, 3, 3, 20, 10, 53, 873365, tzinfo=<DstTzInfo 'Europe/Paris' CET+1:00:00 STD>)
In this example, the current time zone is
"Europe/Paris".
How can I see all available time zones?
pytz provides helpers, including a list of current time zones and a list of all available time zones – some of which are only of historical interest. | https://docs.djangoproject.com/en/dev/topics/i18n/timezones/ | CC-MAIN-2017-04 | refinedweb | 3,179 | 57.27 |
Opened 7 years ago
Closed 7 years ago
#1515 closed defect (fixed)
[with patch] ParametricSurface bug
Description
def f(x,y): return cos(x)*sin(y), sin(x)*sin(y), cos(y)+log(tan(y/2))+0.2*x show(ParametricSurface(f, (srange(0,12.4,0.1), srange(0.1,2,0.1))))
doesn't render. Also
[08:48am] williamstein: This should work but doesn't: [08:48am] williamstein: S = ParametricSurface(lambda (x,y):(cos(x), sin(x), y), domain=(range(10),range(10)))
Attachments (1)
Change History (3)
Changed 7 years ago by robertwb
comment:1 Changed 7 years ago by robertwb
- Owner changed from was to robertwb
- Status changed from new to assigned
- Summary changed from ParametricSurface bug to [with patch] ParametricSurface bug
comment:2 Changed 7 years ago by mabshoff
- Resolution set to fixed
- Status changed from assigned to closed
Merged in 2.9.rc0.
Note: See TracTickets for help on using tickets.
Now the first example works. Also, the second example almost does
(Note the missing ()'s, it expects to arguments, not a tuple). | http://trac.sagemath.org/ticket/1515 | CC-MAIN-2015-11 | refinedweb | 179 | 52.09 |
I have a C# Unity project, and I'm using UnityVS for Visual Studio integration.
The following files were generated by UnityVS:
UnityVS.EoR.sln
UnityVS.EoR.CSharp.csproj
UnityVS.EoR.CSharp.Editor.csproj
I believe ReSharper 8.1 created DotSettings files for the solution and one of the projects:
UnityVS.EoR.sln.DotSettings
UnityVS.EoR.CSharp.csproj.DotSettings
The projects share a directory structure on disk, which means that a folder in the first project points to the same filesystem location as a folder of the same name in the second project. While in Visual Studio, when I view the properties of a folder under either project, and set the "Namespace Provider" to false, the same project DotSettings file is updated. For example, I have an Assets folder that I do not wish to include in the namespace. In both projects, I can view the properties of the Assets folder and see that the "Namespace Provider" setting is false. I can also change the value from either project, and see the "UnityVS.EoR.sln.DotSettings" file get updated. Should there be two seperate DotSettings files, one for each project?
I'm not sure if this is related to the two projects sharing the same DotSettings file, but I have noticed that when I attempt to move a class declaration in the EDITOR project to the appropriate namespace using ReSharper, the Assets folder is included in the namespace path. This does not occur with the non-EDITOR project. How can I configure ReSharper to also apply the modified "Namespace Provider" setting to the EDITOR project so that the Assets folder is skipped?
I found a related issue in the ReSharper bug tracker.
An effective workaround involves duplicating the generated project DotSettings file and renaming it for the second project. Once I created a "UnityVS.EoR.CSharp.Editor.csproj.DotSettings" file, the class declaration in the EDITOR project is now moved to the proper namespace.
Visual Studio doesn't seem to pay attention to the second project DotSettings file, and modifying the folder properties in either project only updates the original file. That said, ReSharper does recognize the second file when determining the proper namespace path. | https://devnet.jetbrains.com/thread/453351 | CC-MAIN-2015-27 | refinedweb | 365 | 54.42 |
An exercise that Stroustrup wants the reader to do is understand the output of the following program.
#include "../../std_lib_facilities.h" struct X { int val; void out(const string& s, int nv) { cerr << this << "->" << s << ":" << val << "(" << nv << ")\n"; } X() { out("X()", 0); val = 0; } X(int v) { out("X(int)", v); val = v; } X(const X& x) { out("X(X&)", x.val); val = x.val; } X& operator = (const X&a) { out("X::operator = ()", a.val); val = a.val; return *this; } ~X() { out("~X()", 0); } }; X glob(2); X copy(X a) {return a;} X copy2(X a) { X aa = a; return aa; } X& ref_to(X& a) {return a;} X* make(int i) { X a(i); return new X(a); } struct XX { X a; X b; }; int main () { X loc(4); X loc2 = loc; loc = X(5); loc2 = copy(loc); loc2 = copy2(loc); X loc3(6); X& r = ref_to(loc); delete make(7); delete make(8); vector<X> v(4); XX loc4; X* p = new X(9); delete p; X* pp = new X[5]; delete[] pp; keep_window_open(); return 0; }
I don't understand this program. It is supposed to have different types of constructors and destructors, variables created and variables destroyed. It prints out whenever a variable is created or destroyed. I don't understand the output. It is a series of memory addresses with an X or an ~X after it. I understand the allocation of memory and the freeing of memory. I assume the net use of memory is zero, representing no memory leak. I can't follow what is going on. I can't cut and paste the output because it is in the console window, but here is a bit of it.
004241A0->X(int):0(2)
0012FF54->X(int):-858993460(4)
0012FF48->X(X&):-858993460(4)
0012FF54->X::operator = ():5(4)
0012FD30->X(X&):969884377(5)
0012FD30->~X():(5)
What does all this mean? Are there tutorials that explain this? Stroustrup has sort of thrown us in the water and left us on our own to find a way to swim. | https://www.daniweb.com/programming/software-development/threads/322386/constructors-and-destructors | CC-MAIN-2017-17 | refinedweb | 344 | 71.55 |
This action might not be possible to undo. Are you sure you want to continue?
Obtaining the publications referred to in this brochure is as easy as dialing a tollfree number or accessing a website. Internal Revenue Service (IRS) IRS Tax Forms 1-800-TAX-FORM (800-829-3676) IRS Tax Information 1-800-TAX-1040 (800-829-1040) You may be able to get free tax help. Volunteer Income Tax Assistance (VITA) provides volunteers in local areas who prepare simple tax returns Dial 1-800-TAX-1040 from February through April 15 to find the nearest VITA site.
A series of informational publications designed to educate taxpayers about the tax impact of significant life events.
Your Money Matters
Tax Information for Survivors of Domestic Abuse Tax Benefits, Credits, and Other Information.
Department of the Treasury Publication 3865 (Rev. 8-2007) Internal Revenue Service Catalog Number 32346J
To apply for Innocent Spouse Relief, fill out IRS Form 8857, Request for Innocent Spouse Relief, within two years after the date the IRS first attempts to collect the tax from you. See IRS Publication 971, Innocent Spouse Relief, for more information. You may also wish to consult a professional advisor for help. If you cannot afford to do so, help may be available through a law school or nonprofit tax clinic in your area. Refer to the free services outlined on the back of this brochure.
While the law requires the IRS to let your spouse (or former spouse) know if you file a Form 8857 for Innocent Spouse Relief, your privacy will be protected. The IRS will not reveal to your spouse your new name, address, employer, phone or fax number, or other information unrelated to a determination of your claim. What tax or financial records should I keep? Keeping records is necessary to prepare a complete and accurate income tax return.
For more information on recordkeeping, get IRS Publication 552, Recordkeeping for Individuals.
• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
What if I need more time to prepare my tax return? What if I can’t pay the tax I owe? If you cannot file your return by the due date, you may apply for an automatic 6-month extension of time using IRS Form 4868, Application for Automatic Extension of Time to File U.S. Individual Income Tax Return. If you cannot pay the full amount of tax shown on your return (or on a notice you received), you can request an installment agreement using Form 9465, Installment Agreement Request.
• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
The Child Tax Credit may reduce your tax or increase your refund for each qualifying child. See IRS Publication 17, Your Federal Income Tax, for more information. Finally, the Child and Dependent Care Credit may reduce your tax.
See IRS Publication 503, Child and Dependent Care Expenses, for more information.
• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
RIGHTS & RESPONSIBILITIES You have the right to: l File a separate return even if you’re married l See and understand the entire tax return (including supporting documents) before signing a joint return l Refuse to sign a joint return l Request an automatic 6-month extension of time to file your tax return l Get copies of prior years’ tax returns from the IRS l Request relief from your spouse’s liability l Obtain independent legal advice You have a responsibility to: l File a timely return if you have income l Include all of your income on your return l Pay the taxes owed l Read and comply with correspondence from the IRS l Notify the IRS of any name or address changes
• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
What if I move, get a new job, or go back to school? What if I get separated or divorced? All of these situations can impact your taxes. When you have questions, you should call the IRS, a tax professional, or check out the websites listed on the back of this brochure.
• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
What if the IRS says I owe more money because of my spouse or former spouse’s earnings? You may be surprised to know that when a husband and wife file a joint return, either spouse can be held liable for any tax later found to be due on the return, even if the parties later divorce or the tax bill arises because of the other spouse’s transactions. This is called “joint and several liability.” The good news is that if the IRS determines that you owe additional tax based on your spouse’s share of the return, you may have a defense based on the “innocent spouse” or “separate liability” provisions of the tax law. You may be eligible for Innocent Spouse Relief from the IRS, if you signed a joint return with your spouse and you thought your spouse had paid the taxes due. You also may be eligible if the IRS increased your taxes because of your spouse’s unreported income or erroneous deductions, but you knew nothing about the unreported or improper items when you signed the return.
Does having children affect my taxes? If you have a qualifying child, there are several tax credits that may reduce the amount of tax you owe. Some credits may also give you a refund, even if you paid little or no tax. But to get the benefit of these credits, you must file a tax return. The Earned Income Tax Credit (EITC) may be available if you are working and your earnings are low. The credit may be larger if you have one or more children living with you. You cannot take this credit if you file as married filing separately, but if your spouse didn’t live in your home at any time during the last six months of the year, you may be able to file as “head of household” and claim the EITC.
See IRS Publication 596, Earned Income Credit, for more information on the credit, and IRS Publication 501 for more on filing as “head of household.”
What other tax issues do I need to think about? If you have questions regarding medical expenses, bad debts, or powers of attorney, the IRS has more resources to help you. Please call the numbers listed in this brochure, or check out the website for more information.
• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
FREQUENTLY ASKED QUESTIIONS Do I have to file a tax return? Whether you have to file a tax return depends on your filing status, age, and gross income. If you are married, you have the option to file either a joint return or a separate return.
See IRS Publication 501, Exemptions, Standard Deduction, and Filing Information, for more information.
Domestic abuse is not just physical abuse. It often includes economic control. As a survivor of domestic abuse, you can take control of your finances. An important part of managing your finances is understanding your tax rights and responsibilities.
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue reading from where you left off, or restart the preview. | https://www.scribd.com/document/539552/US-Internal-Revenue-Service-p3865 | CC-MAIN-2016-40 | refinedweb | 1,173 | 60.55 |
!83 2016/03/19 23:00:3119 49 + improve description of tgoto parameters (report by Steffen Nurpmeso). 50 + amend workaround for Solaris line-drawing to restore a special case 51 that maps Unicode line-drawing characters into the acsc string for 52 non-Unicode locales (Debian #816888). 53 54 20160312 55 + modified test/filter.c to illustrate an alternative to getnstr, that 56 polls for input while updating a clock on the right margin as well 57 as responding to window size-changes. 58 59 20160305 60 + omit a redefinition of "inline" when traces are enabled, since this 61 does not work with gcc 5.3.x MinGW cross-compiling (cf: 20150912). 62 63 20160220 64 + modify test/configure script to check for pthread dependency of 65 ncursest or ncursestw library when building ncurses examples, e.g., 66 in case weak symbols are used. 67 + modify configure macro for shared-library rules to use -Wl,-rpath 68 rather than -rpath to work around a bug in scons (FreeBSD #178732, 69 cf: 20061021). 70 + double-width multibyte characters were not counted properly in 71 winsnstr and wins_nwstr (report/example by Eric Pruitt). 72 + update config.guess, config.sub from 73 74 75 20160213 76 + amend fix for _nc_ripoffline from 20091031 to make test/ditto.c work 77 in threaded configuration. 78 + move _nc_tracebits, _tracedump and _tracemouse to curses.priv.h, 79 since they are not part of the suggested ABI6. 80 81 20160206 82 + define WIN32_LEAN_AND_MEAN for MinGW port, making builds faster. 83 + modify test/ditto.c to allow $XTERM_PROG environment variable to 84 override "xterm" as the name of the program to run in the threaded 85 configuration. 86 87 20160130 88 + improve formatting of man/curs_refresh.3x and man/tset.1 manpages 89 + regenerate HTML manpages using newer man2html to eliminate some 90 unwanted blank lines. 91 92 20160123 93 + ifdef'd header-file definition of mouse_trafo() with NCURSES_NOMACROS 94 (report by Corey Minyard). 95 + fix some strict compiler-warnings in traces. 96 97 20160116 98 + tidy up comments about hardcoded 256color palette (report by 99 Leonardo Brondani Schenkel) -TD 100 + add putty-noapp entry, and amend putty entry to use application mode 101 for better consistency with xterm (report by Leonardo Brondani 102 Schenkel) -TD 103 + modify _nc_viscbuf2() and _tracecchar_t2() to trace wide-characters 104 as a whole rather than their multibyte equivalents. 105 + minor fix in wadd_wchnstr() to ensure that each cell has nonzero 106 width. 107 + move PUTC_INIT calls next to wcrtomb calls, to avoid carry-over of 108 error status when processing Unicode values which are not mapped. 109 110 20160102 111 + modify ncurses c/C color test-screens to take advantage of wide 112 screens, reducing the number of lines used for 88- and 256-colors. 113 + minor refinement to check versus ncv to ignore two parameters of 114 SGR 38 and 48 when those come from color-capabilities. 115 116 20151226 117 + add check in tic for use of bold, etc., video attributes in the 118 color capabilities, accounting whether the feature is listed in ncv. 119 + add check in tic for conflict between ritm, rmso, rmul versus sgr0. 120 121 20151219 122 + add a paragraph to curs_getch.3x discussing key naming (discussion 123 with James Crippen). 124 + amend workaround for Solaris vs line-drawing to take the configure 125 check into account. 126 + add a configure check for wcwidth() versus the ncurses line-drawing 127 characters, to use in special-casing systems such as Solaris. 128 129 20151212 130 + improve CF_XOPEN_CURSES macro used in test/configure, to define as 131 needed NCURSES_WIDECHAR for platforms where _XOPEN_SOURCE_EXTENDED 132 does not work. Also modified the test program to ensure that if 133 building with ncurses, that the cchar_t type is checked, since that 134 normally is since 20111030 ifdef'd depending on this test. 135 + improve 20121222 workaround for broken acs, letting Solaris "work" 136 in spite of its misconfigured wcwidth which marks all of the line 137 drawing characters as double-width. 138 139 20151205 140 + update form_cursor.3x, form_post.3x, menu_attributes.3x to list 141 function names in NAME section (patch by Jason McIntyre). 142 + minor fixes to manpage NAME/SYNOPSIS sections to consistently use 143 rule that either all functions which are prototyped in SYNOPSIS are 144 listed in the NAME section, or the manual-page name is the sole item 145 listed in the NAME section. The latter is used to reduce clutter, 146 e.g., for the top-level library manual pages as well as for certain 147 feature-pages such as SP-funcs and threading (prompted by patches by 148 Jason McIntyre). 149 150 20151128 151 + add option to preserve leading whitespace in form fields (patch by 152 Leon Winter). 153 + add missing assignment in lib_getch.c to make notimeout() work 154 (Debian #805618). 155 + add 't' toggle for notimeout() function in test/ncurses.c a/A screens 156 + add viewdata terminal description (Alexandre Montaron). 157 + fix a case in tic/infocmp for formatting capabilities where a 158 backslash at the end of a string was mishandled. 159 + fix some typos in curs_inopts.3x (Benno Schulenberg). 160 161 20151121 162 + fix some inconsistencies in the pccon* entries -TD 163 + add bold to pccon+sgr+acs and pccon-base (Tati Chevron). 164 + add keys f12-f124 to pccon+keys (Tati Chevron). 165 + add test/test_sgr.c program to exercise all combinations of sgr. 166 167 20151107 168 + modify tset's assignment to TERM in its output to reflect the name by 169 which the terminal description is found, rather than the primary 170 name. That was an unnecessary part from the initial conversion of 171 tset from termcap to terminfo. The termcap program in 4.3BSD did 172 this to avoid using the short 2-character name (report by Rich 173 Burridge). 174 + minor fix to configure script to ensure that rules for resulting.map 175 are only generated when needed (cf: 20151101). 176 + modify configure script to handle the case where tic-library is 177 renamed, but the --with-debug option is used by itself without 178 normal or shared libraries (prompted by comment in Debian #803482). 179 180 20151101 181 + amend change for pkg-config which allows build of pc-files when no 182 valid pkg-config library directory was configured to suppress the 183 actual install if it is not overridden to a valid directory at 184 install time (cf: 20150822). 185 + modify editing script which generates resulting.map to work with the 186 clang configuration on recent FreeBSD, which gives an error on an 187 empty "local" section. 188 + fix a spurious "(Part)" message in test/ncurses.c b/B tests due 189 to incorrect attribute-masking. 190 191 20151024 192 + modify MKexpanded.c to update the expansion of a temporary filename 193 to "expanded.c", for use in trace statements. 194 + modify layout of b/B tests in test/ncurses.c to allow for additional 195 annotation on the right margin; some terminals with partial support 196 did not display well. 197 + fix typo in curs_attr.3x (patch by Sven Joachim). 198 + fix typo in INSTALL (patch by Tomas Cech). 199 + improve configure check for setting WILDCARD_SYMS variable; on ppc64 200 the variable is in the Data section rather than Text (patch by Michel 201 Normand, Novell #946048). 202 + using configure option "--without-fallbacks" incorrectly caused 203 FALLBACK_LIST to be set to "no" (patch by Tomas Cech). 204 + updated minitel entries to fix kel problem with emacs, and add 205 minitel1b-nb (Alexandre Montaron). 206 + reviewed/updated nsterm entry Terminal.app in OSX -TD 207 + replace some dead URLs in comments with equivalents from the 208 Internet Archive -TD 209 + update config.guess, config.sub from 210 211 212 20151017 213 + modify ncurses/Makefile.in to sort keys.list in POSIX locale 214 (Debian #801864, patch by Esa Peuha). 215 + remove an early-return from _nc_do_color, which can interfere with 216 data needed by bkgd when ncurses is configured with extended colors 217 (patch by Denis Tikhomirov). 218 > fixes for OS/2 (patches by KO Myung-Hun) 219 + use button instead of kbuf[0] in EMX-specific part of lib_mouse.c 220 + support building with libtool on OS/2 221 + use stdc++ on OS/2 kLIBC 222 + clear cf_XOPEN_SOURCE on OS/2 223 224 20151010 225 + add configure check for openpty to test/configure script, for ditto. 226 + minor fixes to test/view.c in investigating Debian #790847. 227 + update autoconf patch to 2.52.20150926, incorporates a fix for Cdk. 228 + add workaround for breakage of POSIX makefiles by recent binutils 229 change. 230 + improve check for working poll() by using posix_openpt() as a 231 fallback in case there is no valid terminal on the standard input 232 (prompted by discussion on bug-ncurses mailing list, Debian #676461). 233 234 20150926 235 + change makefile rule for removing resulting.map to distclean rather 236 than clean. 237 + add /lib/terminfo to terminfo-dirs in ".deb" test-package. 238 + add note on portability of resizeterm and wresize to manual pages. 239 240 20150919 241 + clarify in resizeterm.3x how KEY_RESIZE is pushed onto the input 242 stream. 243 + clarify in curs_getch.3x that the keypad mode affects ability to 244 read KEY_MOUSE codes, but does not affect KEY_RESIZE. 245 + add overlooked build-fix needed with Cygwin for separate Ada95 246 configure script, cf: 20150606 (report by Nicolas Boulenguez) 247 248 20150912 249 + fixes for configure/build using clang on OSX (prompted by report by 250 William Gallafent). 251 + do not redefine "inline" in ncurses_cfg.h; this was originally to 252 solve a problem with gcc/g++, but is aggravated by clang's misuse 253 of symbols to pretend it is gcc. 254 + add braces to configure script to prevent unwanted add of 255 "-lstdc++" to the CXXLIBS symbol. 256 + improve/update test-program used for checking existence of stdc++ 257 library. 258 + if $CXXLIBS is set, the linkage test uses that in addition to $LIBS 259 260 20150905 261 + add note in curs_addch.3x about line-drawing when it depends upon 262 UTF-8. 263 + add tic -q option for consistency with infocmp, use it to suppress 264 all comments from the "tic -I" output. 265 + modify infocmp -q option to suppress the "Reconstructed from" 266 header. 267 + add infocmp/tic -Q option, which allows one to dump the compiled 268 form of the terminal entry, in hexadecimal or base64. 269 270 20150822 271 + sort options in usage message for infocmp, to make it simpler to 272 see unused letters. 273 + update usage message for tic, adding "-0" option. 274 + documented differences in ESCDELAY versus AIX's implementation. 275 + fix some compiler warnings from ports. 276 + modify --with-pkg-config-libdir option to make it possible to install 277 ".pc" files even if pkg-config is not found (adapted from patch by 278 Joshua Root). 279 280 20150815 281 + disallow "no" as a possible value for "--with-shlib-version" option, 282 overlooked in cleanup-changes for 20000708 (report by Tommy Alex). 283 + update release notes in INSTALL. 284 + regenerate llib-* files to help with review for release notes. 285 286 20150810 287 + workaround for Debian #65617, which was fixed in mawk's upstream 288 releases in 2009 (report by Sven Joachim). See 289 290 291 20150808 6.0 release for upload to 292 293 20150808 294 + build-fix for Ada95 on older platforms without stdint.h 295 + build-fix for Solaris, whose /bin/sh and /usr/bin/sed are non-POSIX. 296 + update release announcement, summarizing more than 800 changes across 297 more than 200 snapshots. 298 + minor fixes to manpages, etc., to simplify linking from announcement 299 page. 300 301 20150725 302 + updated llib-* files. 303 + build-fixes for ncurses library "test_progs" rule. 304 + use alternate workaround for gcc 5.x feature (adapted from patch by 305 Mikhail Peselnik). 306 + add status line to tmux via xterm+sl (patch by Nicholas Marriott). 307 + fixes for st 0.5 from testing with tack -TD 308 + review/improve several manual pages to break up wall-of-text: 309 curs_add_wch.3x, curs_attr.3x, curs_bkgd.3x, curs_bkgrnd.3x, 310 curs_getcchar.3x, curs_getch.3x, curs_kernel.3x, curs_mouse.3x, 311 curs_outopts.3x, curs_overlay.3x, curs_pad.3x, curs_termattrs.3x 312 curs_trace.3x, and curs_window.3x 313 314 20150719 315 + correct an old logic error for %A and %O in tparm (report by "zreed"). 316 + improve documentation for signal handlers by adding section in the 317 curs_initscr.3x page. 318 + modify logic in make_keys.c to not assume anything about the size 319 of strnames and strfnames variables, since those may be functions 320 in the thread- or broken-linker configurations (problem found by 321 Coverity). 322 + modify test/configure script to check for pthreads configuration, 323 e.g., ncursestw library. 324 325 20150711 326 + modify scripts to build/use test-packages for the pthreads 327 configuration of ncurses6. 328 + add references to ttytype and termcap symbols in demo_terminfo.c and 329 demo_termcap.c to ensure that when building ncursest.map, etc., that 330 the corresponding names such as _nc_ttytype are added to the list of 331 versioned symbols (report by Werner Fink) 332 + fix regression from 20150704 (report/patch by Werner Fink). 333 334 20150704 335 + fix a few problems reported by Coverity. 336 + fix comparison against "/usr/include" in misc/gen-pkgconfig.in 337 (report by Daiki Ueno, Debian #790548, cf: 20141213). 338 339 20150627 340 + modify configure script to remove deprecated ABI 5 symbols when 341 building ABI 6. 342 + add symbols _nc_Default_Field, _nc_Default_Form, _nc_has_mouse to 343 map-files, but marked as deprecated so that they can easily be 344 suppressed from ABI 6 builds (Debian #788610). 345 + comment-out "screen.xterm" entry, and inherit screen.xterm-256color 346 from xterm-new (report by Richard Birkett) -TD 347 + modify read_entry.c to set the error-return to -1 if no terminal 348 databases were found, as documented for setupterm. 349 + add test_setupterm.c to demonstrate normal/error returns from the 350 setupterm and restartterm functions. 351 + amend cleanup change from 20110813 which removed redundant definition 352 of ret_error, etc., from tinfo_driver.c, to account for the fact that 353 it should return a bool rather than int (report/analysis by Johannes 354 Schindelin). 355 356 20150613 357 + fix overflow warning for OSX with lib_baudrate.c (cf: 20010630). 358 + modify script used to generate map/sym files to mark 5.9.20150530 as 359 the last "5.9" version, and regenerated the files. That makes the 360 files not use ".current" for the post-5.9 symbols. This also 361 corrects the label for _nc_sigprocmask used in when weak symbols are 362 configured for the ncursest/ncursestw libraries (prompted by 363 discussion with Sven Joachim). 364 + fix typo in NEWS (report by Sven Joachim). 365 366 20150606 pre-release 367 + make ABI 6 the default by updates to dist.mk and VERSION, with the 368 intention that the existing ABI 5 should build as before using the 369 "--with-abi-version=5" option. 370 + regenerate ada- and man-html documentation. 371 + minor fixes to color- and util-manpages. 372 + fix a regression in Ada95/gen/Makefile.in, to handle special case of 373 Cygwin, which uses the broken-linker feature. 374 + amend fix for CF_NCURSES_CONFIG used in test/configure to assume that 375 ncurses package scripts work when present for cross-compiling, as the 376 lessor of two evils (cf: 20150530). 377 + add check in configure script to disallow conflicting options 378 "--with-termlib" and "--enable-term-driver". 379 + move defaults for "--disable-lp64" and "--with-versioned-syms" into 380 CF_ABI_DEFAULTS macro. 381 382 20150530 383 + change private type for Event_Mask in Ada95 binding to work when 384 mmask_t is set to 32-bits. 385 + remove spurious "%;" from st entry (report by Daniel Pitts) -TD 386 + add vte-2014, update vte to use that -TD 387 + modify tic and infocmp to "move" a diagnostic for tparm strings that 388 have a syntax error to tic's "-c" option (report by Daniel Pitts). 389 + fix two problems with configure script macros (Debian #786436, 390 cf: 20150425, cf: 20100529). 391 392 20150523 393 + add 'P' menu item to test/ncurses.c, to show pad in color. 394 + improve discussion in curs_color.3x about color rendering (prompted 395 by comment on Stack Overflow forum): 396 + remove screen-bce.mlterm, since mlterm does not do "bce" -TD 397 + add several screen.XXX entries to support the respective variations 398 for 256 colors -TD 399 + add putty+fnkeys* building-block entries -TD 400 + add smkx/rmkx to capabilities analyzed with infocmp "-i" option. 401 402 20150516 403 + amend change to ".pc" files to only use the extra loader flags which 404 may have rpath options (report by Sven Joachim, cf: 20150502). 405 + change versioning for dpkg's in test-packages for Ada95 and 406 ncurses-examples for consistency with Debian, to work with package 407 updates. 408 + regenerate html manpages. 409 + clarify handling of carriage return in waddch manual page; it was 410 discussed only in the portability section (prompted by comment on 411 Stack Overflow forum): 412 413 20150509 414 + add test-packages for cross-compiling ncurses-examples using the 415 MinGW test-packages. These are only the Debian packages; RPM later. 416 + cleanup format of debian/copyright files 417 + add pc-files to the MinGW cross-compiling test-packages. 418 + correct a couple of places in gen-pkgconfig.in to handle renaming of 419 the tinfo library. 420 421 20150502 422 + modify the configure script to allow different default values 423 for ABI 5 versus ABI 6. 424 + add wgetch-events to test-packages. 425 + add a note on how to build ncurses-examples to test/README. 426 + fix a memory leak in delscreen (report by Daniel Kahn Gillmor, 427 Debian #783486) -TD 428 + remove unnecessary ';' from E3 capabilities -TD 429 + add tmux entry, derived from screen (patch by Nicholas Marriott). 430 + split-out recent change to nsterm-bce as nsterm-build326, and add 431 nsterm-build342 to reflect changes with successive releases of OSX 432 (discussion with Leonardo B Schenkel) 433 + add xon, ich1, il1 to ibm3161 (patch by Stephen Powell, Debian 434 #783806) 435 + add sample "magic" file, to document ext-putwin. 436 + modify gen-pkgconfig.in to add explicit -ltinfo, etc., to the 437 generated ".pc" file when ld option "--as-needed" is used, or when 438 ncurses and tinfo are installed without using rpath (prompted by 439 discussion with Sylvain Bertrand). 440 + modify test-package for ncurses6 to omit rpath feature when installed 441 in /usr. 442 + add OSX's "*.dSYM" to clean-rules in makefiles. 443 + make extra-suffix work for OSX configuration, e.g., for shared 444 libraries. 445 + modify Ada95/configure script to work with pkg-config 446 + move test-package for ncurses6 to /usr, since filename-conflicts have 447 been eliminated. 448 + corrected build rules for Ada95/gen/generate; it does not depend on 449 the ncurses library aside from headers. 450 + reviewed man pages, fixed a few other spelling errors. 451 + fix a typo in curs_util.3x (Sven Joachim). 452 + use extra-suffix in some overlooked shared library dependencies 453 found by 20150425 changes for test-packages. 454 + update config.guess, config.sub from 455 456 457 20150425 458 + expanded description of tgetstr's area pointer in manual page 459 (report by Todd M Lewis). 460 + in-progress changes to modify test-packages to use ncursesw6 rather 461 than ncursesw, with updated configure scripts. 462 + modify CF_NCURSES_CONFIG in Ada95- and test-configure scripts to 463 check for ".pc" files via pkg-config, but add a linkage check since 464 frequently pkg-config configurations are broken. 465 + modify misc/gen-pkgconfig.in to include EXTRA_LDFLAGS, e.g., for the 466 rpath option. 467 + add 'dim' capability to screen entry (report by Leonardo B Schenkel) 468 + add several key definitions to nsterm-bce to match preconfigured 469 keys, e.g., with OSX 10.9 and 10.10 (report by Leonardo B Schenkel) 470 + fix repeated "extra-suffix" in ncurses-config.in (cf: 20150418). 471 + improve term_variables manual page, adding section on the terminfo 472 long-name symbols which are defined in the term.h header. 473 + fix bug in lib_tracebits.c introduced in const-fixes (cf: 20150404). 474 475 20150418 476 + avoid a blank line in output from tabs program by ending it with 477 a carriage return as done in FreeBSD (patch by James Clarke). 478 + build-fix for the "--enable-ext-putwin" feature when not using 479 wide characters (report by Werner Fink). 480 + modify autoconf macros to use scripting improvement from xterm. 481 + add -brtl option to compiler options on AIX 5-7, needed to link 482 with the shared libraries. 483 + add --with-extra-suffix option to help with installing nonconflicting 484 ncurses6 packages, e.g., avoiding header- and library-conflicts. 485 NOTE: as a side-effect, this renames 486 adacurses-config to adacurses5-config and 487 adacursesw-config to adacursesw5-config 488 + modify debian/rules test package to suffix programs with "6". 489 + clarify in curs_inopts.3x that window-specific settings do not 490 inherit into new windows. 491 492 20150404 493 + improve description of start_color() in the manual. 494 + modify several files in ncurses- and progs-directories to allow 495 const data used in internal tables to be put by the linker into the 496 readonly text segment. 497 498 20150329 499 + correct cut/paste error for "--enable-ext-putwin" that made it the 500 same as "--enable-ext-colors" (report by Roumen Petrov) 501 502 20150328 503 + add "-f" option to test/savescreen.c to help with testing/debugging 504 the extended putwin/getwin. 505 + add logic for writing/reading combining characters in the extended 506 putwin/getwin. 507 + add "--enable-ext-putwin" configure option to turn on the extended 508 putwin/getwin. 509 510 20150321 511 + in-progress changes to provide an extended version of putwin and 512 getwin which will be capable of reading screen-dumps between the 513 wide/normal ncurses configurations. These are text files, except 514 for a magic code at the beginning: 515 0 string \210\210 Screen-dump (ncurses) 516 517 20150307 518 + document limitations of getwin in manual page (prompted by discussion 519 with John S Urban). 520 + extend test/savescreen.c to demonstrate that color pair values 521 and graphic characters can be restored using getwin. 522 523 20150228 524 + modify win_driver.c to eliminate the constructor, to make it more 525 usable in an application which may/may not need the console window 526 (report by Grady Martin). 527 528 20150221 529 + capture define's related to -D_XOPEN_SOURCE from the configure check 530 and add those to the *-config and *.pc files, to simplify use for 531 the wide-character libraries. 532 + modify ncurses.spec to accommodate Fedora21's location of pkg-config 533 directory. 534 + correct sense of "--disable-lib-suffixes" configure option (report 535 by Nicolas Boos, cf: 20140426). 536 537 20150214 538 + regenerate html manpages using improved man2html from work on xterm. 539 + regenerated ".map" and ".sym" files using improved script, accounting 540 for the "--enable-weak-symbols" configure option (report by Werner 541 Fink). 542 543 20150131 544 + regenerated ".map" and ".sym" files using improved script, showing 545 the combinations of configure options used at each stage. 546 547 20150124 548 + add configure check to determine if "local: _*;" can be used in the 549 ".map" files to selectively omit symbols beginning with "_". On at 550 least recent FreeBSD, the wildcard applies to all "_" symbols. 551 + remove obsolete/conflicting rule for ncurses.map from 552 ncurses/Makefile.in (cf: 20130706). 553 554 20150117 555 + improve description in INSTALL of the --with-versioned-syms option. 556 + add combination of --with-hashed-db and --with-ticlib to 557 configurations for ".map" files (report by Werner Fink). 558 559 20150110 560 + add a step to generating ".map" files, to declare any remaining 561 symbols beginning with "_" as local, at the last version node. 562 + improve configure checks for pkg-config, addressing a variant found 563 with FreeBSD ports. 564 + modify win_driver.c to provide characters for special keys, like 565 ansi.sys, when keypad mode is off, rather than returning nothing at 566 all (discussion with Eli Zaretskii). 567 + add "broken_linker" and "hashed-db" configure options to combinations 568 use for generating the ".map" and ".sym" files. 569 + avoid using "ld" directly when creating shared library, to simplify 570 cross-compiles. Also drop "-Bsharable" option from shared-library 571 rules for FreeBSD and DragonFly (FreeBSD #196592). 572 + fix a memory leak in form library Free_RegularExpression_Type() 573 (report by Pavel Balaev). 574 575 20150103 576 + modify_nc_flush() to retry if interrupted (patch by Stian Skjelstad). 577 + change map files to make _nc_freeall a global, since it may be used 578 via the Ada95 binding when checking for memory leaks. 579 + improve sed script used in 20141220 to account for wide-, threaded- 580 variations in ABI 6. 581 582 20141227 583 + regenerate ".map" files, using step overlooked in 20141213 to use 584 the same patch-dates across each file to match ncurses.map (report by 585 Sven Joachim). 586 587 20141221 588 + fix an incorrect variable assignment in 20141220 changes (report by 589 Sven Joachim). 590 591 20141220 592 + updated Ada95/configure with macro changes from 20141213 593 + tie configure options --with-abi-version and --with-versioned-syms 594 together, so that ABI 6 libraries have distinct symbol versions from 595 the ABI 5 libraries. 596 + replace obsolete/nonworking link to man2html with current one, 597 regenerate html-manpages. 598 599 20141213 600 + modify misc/gen-pkgconfig.in to add -I option for include-directory 601 when using both --prefix and --disable-overwrite (report by Misty 602 De Meo). 603 + add configure option --with-pc-suffix to allow minor renaming of 604 ".pc" files and the corresponding library. Use this in the test 605 package for ncurses6. 606 + modify configure script so that if pkg-config is not installed, it 607 is still possible to install ".pc" files (report by Misty De Meo). 608 + updated ".sym" files, removing symbols which are marked as "local" 609 in the corresponding ".map" files. 610 + updated ".map" files to reflect move of comp_captab and comp_hash 611 from tic-library to tinfo-library in 20090711 (report by Sven 612 Joachim). 613 614 20141206 615 + updated ".map" files so that each symbol that may be shared across 616 the different library configurations has the same label. Some 617 review is needed to ensure these are really compatible. 618 + modify MKlib_gen.sh to work around change in development version of 619 gcc introduced here: 620 621 622 (reports by Marcus Shawcroft, Maohui Lei). 623 + improved configure macro CF_SUBDIR_PATH, from lynx changes. 624 625 20141129 626 + improved ".map" files by generating them with a script that builds 627 ncurses with several related configurations and merges the results. 628 A further refinement is planned, to make the tic- and tinfo-library 629 symbols use the same versions across each of the four configurations 630 which are represented (reports by Sven Joachim, Werner Fink). 631 632 20141115 633 + improve description of limits for color values and color pairs in 634 curs_color.3x (prompted by patch by Tim van der Molen). 635 + add VERSION file, using first field in that to record the ABI version 636 used for configure --with-libtool --disable-libtool-version 637 + add configure options for applying the ".map" and ".sym" files to 638 the ncurses, form, menu and panel libraries. 639 + add ".map" and ".sym" files to show exported symbols, e.g., for 640 symbol-versioning. 641 642 20141101 643 + improve strict compiler-warnings by adding a cast in TRACE_RETURN 644 and making a new TRACE_RETURN1 macro for cases where the cast does 645 not apply. 646 647 20141025 648 + in-progress changes to integrate the win32 console driver with the 649 msys2 configuration. 650 651 20141018 652 + reviewed terminology 0.6.1, add function key definitions. None of 653 the vt100-compatibility issues were improved -TD 654 + improve infocmp conversion of extended capabilities to termcap by 655 correcting the limit check against parametrized[], as well as filling 656 in a check if the string happens to have parameters, e.g., "xm" 657 in recent changes. 658 + add check for zero/negative dimensions for resizeterm and resize_term 659 (report by Mike Gran). 660 661 20141011 662 + add experimental support for xterm's 1005 mouse mode, to use in a 663 demonstration of its limitations. 664 + add experimental support for "%u" format to terminfo. 665 + modify test/ncurses.c to also show position reports in 'a' test. 666 + minor formatting fixes to _nc_trace_mmask_t, make this function 667 exported to help with debugging mouse changes. 668 + improve behavior of wheel-mice for xterm protocol, noting that there 669 are only button-presses for buttons "4" and "5", so there is no need 670 to wait to combine events into double-clicks (report/analysis by 671 Greg Field). 672 + provide examples xterm-1005 and xterm-1006 terminfo entries -TD 673 + implement decoder for xterm SGR 1006 mouse mode. 674 675 20140927 676 + implement curs_set in win_driver.c 677 + implement flash in win_driver.c 678 + fix an infinite loop in win_driver.c if the command-window loses 679 focus. 680 + improve the non-buffered mode, i.e., NCURSES_CONSOLE2, of 681 win_driver.c by temporarily changing the buffer-size to match the 682 window-size to eliminate the scrollback. Also enforce a minimum 683 screen-size of 24x80 in the non-buffered mode. 684 + modify generated misc/Makefile to suppress install.data from the 685 dependencies if the --disable-db-install option is used, compensating 686 for the top-level makefile changes used to add ncurses*-config in the 687 20140920 changes (report by Steven Honeyman). 688 689 20140920 690 + add ncurses*-config to bin-directory of sample package-scripts. 691 + add check to ensure that getopt is available; this is a problem in 692 some older cross-compiler environments. 693 + expanded on the description of --disable-overwrite in INSTALL 694 (prompted by reports by Joakim Tjernlund, Thomas Klausner). 695 See Gentoo #522586 and NetBSD #49200 for examples. 696 which relates to the clarified guidelines. 697 + remove special logic from CF_INCLUDE_DIRS which adds the directory 698 for the --includedir from the build (report by Joakim Tjernlund). 699 + add case for Unixware to CF_XOPEN_SOURCE, from lynx changes. 700 + update config.sub from 701 702 703 20140913 704 + add a configure check to ignore some of the plethora of non-working 705 C++ cross-compilers. 706 + build-fixes for Ada95 with gnat 4.9 707 708 20140906 709 + build-fix and other improvements for port of ncurses-examples to 710 NetBSD. 711 + minor compiler-warning fixes. 712 713 20140831 714 + modify test/demo_termcap.c and test/demo_terminfo.c to make their 715 options more directly comparable, and add "-i" option to specify 716 a terminal description filename to parse for names to lookup. 717 718 20140823 719 + fix special case where double-width character overwrites a single- 720 width character in the first column (report by Egmont Koblinger, 721 cf: 20050813). 722 723 20140816 724 + fix colors in ncurses 'b' test which did not work after changing 725 it to put the test-strings in subwindows (cf: 20140705). 726 + merge redundant SEE-ALSO sections in form and menu manpages. 727 728 20140809 729 + modify declarations for user-data pointers in C++ binding to use 730 reinterpret_cast to facilitate converting typed pointers to void* 731 in user's application (patch by Adam Jiang). 732 + regenerated html manpages. 733 + add note regarding cause and effect for TERM in ncurses manpage, 734 having noted clueless verbiage in Terminal.app's "help" file 735 which reverses cause/effect. 736 + remove special fallback definition for NCURSES_ATTR_T, since macros 737 have resolved type-mismatches using casts (cf: 970412). 738 + fixes for win_driver.c: 739 + handle repainting on endwin/refresh combination. 740 + implement beep(). 741 + minor cleanup. 742 743 20140802 744 + minor portability fixes for MinGW: 745 + ensure WINVER is defined in makefiles rather than using headers 746 + add check for gnatprep "-T" option 747 + work around bug introduced by gcc 4.8.1 in MinGW which breaks 748 "trace" feature: 749 750 + fix most compiler warnings for Cygwin ncurses-examples. 751 + restore "redundant" -I options in test/Makefile.in, since they are 752 typically needed when building the derived ncurses-examples package 753 (cf: 20140726). 754 755 20140726 756 + eliminate some redundant -I options used for building libraries, and 757 ensure that ${srcdir} is added to the include-options (prompted by 758 discussion with Paul Gilmartin). 759 + modify configure script to work with Minix3.2 760 + add form library extension O_DYNAMIC_JUSTIFY option which can be 761 used to override the different treatment of justification for static 762 versus dynamic fields (adapted from patch by Leon Winter). 763 + add a null pointer check in test/edit_field.c (report/analysis by 764 Leon Winter, cf: 20130608). 765 766 20140719 767 + make workarounds for compiling test-programs with NetBSD curses. 768 + improve configure macro CF_ADD_LIBS, to eliminate repeated -l/-L 769 options, from xterm changes. 770 771 20140712 772 + correct Charable() macro check for A_ALTCHARSET in wide-characters. 773 + build-fix for position-debug code in tty_update.c, to work with or 774 without sp-funcs. 775 776 20140705 777 + add w/W toggle to ncurses.c 'B' test, to demonstrate permutation of 778 video-attributes and colors with double-width character strings. 779 780 20140629 781 + correct check in win_driver.c for saving screen contents, e.g., when 782 NCURSES_CONSOLE2 is set (cf: 20140503). 783 + reorganize b/B menu items in ncurses.c, putting the test-strings into 784 subwindows. This is needed for a planned change to use Unicode 785 fullwidth characters in the test-screens. 786 + correct update to form status for _NEWTOP, broken by fixes for 787 compiler warnings (patch by Leon Winter, cf: 20120616). 788 789 20140621 790 + change shared-library suffix for AIX 5 and 6 to ".so", avoiding 791 conflict with the static library (report by Ben Lentz). 792 + document RPATH_LIST in INSTALLATION file, as part of workarounds for 793 upgrading an ncurses library using the "--with-shared" option. 794 + modify test/ncurses.c c/C tests to cycle through subsets of the 795 total number of colors, to better illustrate 8/16/88/256-colors by 796 providing directly comparable screens. 797 + add test/dots_curses.c, for comparison with the low-level examples. 798 799 20140614 800 + fix dereference before null check found by Coverity in tic.c 801 (cf: 20140524). 802 + fix sign-extension bug in read_entry.c which prevented "toe" from 803 reading empty "screen+italics" entry. 804 + modify sgr for screen.xterm-new to support dim capability -TD 805 + add dim capability to nsterm+7 -TD 806 + cancel dim capability for iterm -TD 807 + add dim, invis capabilities to vte-2012 -TD 808 + add sitm/ritm to konsole-base and mlterm3 -TD 809 810 20140609 811 > fix regression in screen terminfo entries (reports by Christian 812 Ebert, Gabriele Balducci) -TD 813 + revert the change to screen; see notes for why this did not work -TD 814 + cancel sitm/ritm for entries which extend "screen", to work around 815 screen's hardcoded behavior for SGR 3 -TD 816 817 20140607 818 + separate masking for sgr in vidputs from sitm/ritm, which do not 819 overlap with sgr functionality. 820 + remove unneeded -i option from adacurses-config; put -a in the -I 821 option for consistency (patch by Pascal Pignard). 822 + update xterm-new terminfo entry to xterm patch #305 -TD 823 + change format of test-scripts for Debian Ada95 and ncurses-examples 824 packages to quilted to work around Debian #700177 (cf: 20130907). 825 + build fix for form_driver_w.c as part of ncurses-examples package for 826 older ncurses than 20131207. 827 + add Hello World example to adacurses-config manpage. 828 + remove unused --enable-pc-files option from Ada95/configure. 829 + add --disable-gnat-projects option for testing. 830 + revert changes to Ada95 project-files configuration (cf: 20140524). 831 + corrected usage message in adacurses-config. 832 833 20140524 834 + fix typo in ncurses manpage for the NCURSES_NO_MAGIC_COOKIE 835 environment variable. 836 + improve discussion of input-echoing in curs_getch.3x 837 + clarify discussion in curs_addch.3x of wrapping. 838 + modify parametrized.h to make fln non-padded. 839 + correct several entries which had termcap-style padding used in 840 terminfo: adm21, aj510, alto-h19, att605-pc, x820 -TD 841 + correct syntax for padding in some entries: dg211, h19 -TD 842 + correct ti924-8 which had confused padding versus octal escapes -TD 843 + correct padding in sbi entry -TD 844 + fix an old bug in the termcap emulation; "%i" was ignored in tparm() 845 because the parameters to be incremented were already on the internal 846 stack (report by Corinna Vinschen). 847 + modify tic's "-c" option to take into account the "-C" option to 848 activate additional checks which compare the results from running 849 tparm() on the terminfo expressions versus the translated termcap 850 expressions. 851 + modify tic to allow it to read from FIFOs (report by Matthieu Fronton, 852 cf: 20120324). 853 > patches by Nicolas Boulenguez: 854 + explicit dereferences to suppress some style warnings. 855 + when c_varargs_to_ada.c includes its header, use double quotes 856 instead of <>. 857 + samples/ncurses2-util.adb: removed unused with clause. The warning 858 was removed by an obsolete pragma. 859 + replaced Unreferenced pragmas with Warnings (Off). The latter, 860 available with older GNATs, needs no configure test. This also 861 replaces 3 untested Unreferenced pragmas. 862 + simplified To_C usage in trace handling. Using two parameters allows 863 some basic formatting, and avoids a warning about security with some 864 compiler flags. 865 + for generated Ada sources, replace many snippets with one pure 866 package. 867 + removed C_Chtype and its conversions. 868 + removed C_AttrType and its conversions. 869 + removed conversions between int, Item_Option_Set, Menu_Option_Set. 870 + removed int, Field_Option_Set, Item_Option_Set conversions. 871 + removed C_TraceType, Attribute_Option_Set conversions. 872 + replaced C.int with direct use of Eti_Error, now enumerated. As it 873 was used in a case statement, values were tested by the Ada compiler 874 to be consecutive anyway. 875 + src/Makefile.in: remove duplicate stanza 876 + only consider using a project for shared libraries. 877 + style. Silent gnat-4.9 warning about misplaced "then". 878 + generate shared library project to honor ADAFLAGS, LDFLAGS. 879 880 20140510 881 + cleanup recently introduced compiler warnings for MingW port. 882 + workaround for ${MAKEFLAGS} configure check versus GNU make 4.0, 883 which introduces more than one gratuitous incompatibility. 884 885 20140503 886 + add vt520ansi terminfo entry (patch by Mike Gran) 887 + further improve MinGW support for the scenario where there is an 888 ANSI-escapes handler such as ansicon running in the console window 889 (patch by Juergen Pfeifer). 890 891 20140426 892 + add --disable-lib-suffixes option (adapted from patch by Juergen 893 Pfeifer). 894 + merge some changes from Juergen Pfeifer's work with MSYS2, to 895 simplify later merging: 896 + use NC_ISATTY() macro for isatty() in library 897 + add _nc_mingw_isatty() and related functions to windows-driver 898 + rename terminal driver entrypoints to simplify grep's 899 + remove a check in the sp-funcs flavor of newterm() which allowed only 900 the first call to newterm() to succeed (report by Thomas Beierlein, 901 cf: 20090927). 902 903 20140419 904 + update config.guess, config.sub from 905 906 907 20140412 908 + modify configure script: 909 + drop the -no-gcc option from Intel compiler, from lynx changes. 910 + extend the --with-hashed-db configure option to simplify building 911 with different versions of Berkeley database using FreeBSD ports. 912 + improve initialization for MinGW port (Juergen Pfeifer): 913 + enforce Windows-style path-separator if cross-compiling, 914 + add a driver-name method to each of the drivers, 915 + allow the Windows driver name to match "unknown", ignoring case, 916 + lengthen the built-in name for the Windows console driver to 917 "#win32console", and 918 + move the comparison of driver-names allowing abbreviation, e.g., 919 to "#win32con" into the Windows console driver. 920 921 20140329 922 + add check in tic for mismatch between ccc and initp/initc 923 + cancel ccc in putty-256color and konsole-256color for consistency 924 with the cancelled initc capability (patch by Sven Zuhlsdorf). 925 + add xterm+256setaf building block for various terminals which only 926 get the 256-color feature half-implemented -TD 927 + updated "st" entry (leaving the 0.1.1 version as "simpleterm") to 928 0.4.1 -TD 929 930 20140323 931 + fix typo in "mlterm" entry (report by Gabriele Balducci) -TD 932 933 20140322 934 + use types from <stdint.h> in sample build-scripts for chtype, etc. 935 + modify configure script and curses.h.in to allow the types specified 936 using --with-chtype and related options to be defined in <stdint.h> 937 + add terminology entry -TD 938 + add mlterm3 entry, use that as "mlterm" -TD 939 + inherit mlterm-256color from mlterm -TD 940 941 20140315 942 + modify _nc_New_TopRow_and_CurrentItem() to ensure that the menu's 943 top-row is adjusted as needed to ensure that the current item is 944 on the screen (patch by Johann Klammer). 945 + add wgetdelay() to retrieve _delay member of WINDOW if it happens to 946 be opaque, e.g., in the pthread configuration (prompted by patch by 947 Soren Brinkmann). 948 949 20140308 950 + modify ifdef in read_entry.c to handle the case where 951 NCURSES_USE_DATABASE is not defined (patch by Xin Li). 952 + add cast in form_driver_w() to fix ARM build (patch by Xin Li). 953 + add logic to win_driver.c to save/restore screen contents when not 954 allocating a console-buffer (cf: 20140215). 955 956 20140301 957 + clarify error-returns from newwin (report by Ruslan Nabioullin). 958 959 20140222 960 + fix some compiler warnings in win_driver.c 961 + updated notes for wsvt25 based on tack and vttest -TD 962 + add teken entry to show actual properties of FreeBSD's "xterm" 963 console -TD 964 965 20140215 966 + in-progress changes to win_driver.c to implement output without 967 allocating a console-buffer. This uses a pre-existing environment 968 variable NCGDB used by Juergen Pfeifer for debugging (prompted by 969 discussion with Erwin Waterlander regarding Console2, which hangs 970 when reading in an allocated console-buffer). 971 + add -t option to gdc.c, and modify to accept "S" to step through the 972 scrolling-stages. 973 + regenerate NCURSES-Programming-HOWTO.html to fix some of the broken 974 html emitted by docbook. 975 976 20140209 977 + modify CF_XOPEN_SOURCE macro to omit followup check to determine if 978 _XOPEN_SOURCE can/should be defined. g++ 4.7.2 built on Solaris 10 979 has some header breakage due to its own predefinition of this symbol 980 (report by Jean-Pierre Flori, Sage #15796). 981 982 20140201 983 + add/use symbol NCURSES_PAIRS_T like NCURSES_COLOR_T, to illustrate 984 which "short" types are for color pairs and which are color values. 985 + fix build for s390x, by correcting field bit offsets in generated 986 representation clauses when int=32 long=64 and endian=big, or at 987 least on s390x (patch by Nicolas Boulenguez). 988 + minor cleanup change to test/form_driver_w.c (patch by Gaute Hope). 989 990 20140125 991 + remove unnecessary ifdef's in Ada95/gen/gen.c, which reportedly do 992 not work as is with gcc 4.8 due to fixes using chtype cast made for 993 new compiler warnings by gcc 4.8 in 20130824 (Debian #735753, patch 994 by Nicolas Boulenguez). 995 996 20140118 997 + apply includesubdir variable which was introduced in 20130805 to 998 gen-pkgconfig.in (Debian #735782). 999 1000 20131221 1001 + further improved man2html, used this to fix broken links in html 1002 manpages. See 1003 1004 1005 20131214 1006 + modify configure-script/ifdef's to allow OLD_TTY feature to be 1007 suppressed if the type of ospeed is configured using the option 1008 --with-ospeed to not be a short. By default, it is a short for 1009 termcap-compatibility (adapted from suggestion by Christian 1010 Weisgerber). 1011 + correct a typo in _nc_baudrate() (patch by Christian Weisgerber, 1012 cf: 20061230). 1013 + fix a few -Wlogical-op warnings. 1014 + updated llib-l* files. 1015 1016 20131207 1017 + add form_driver_w() entrypoint to wide-character forms library, as 1018 well as test program form_driver_w (adapted from patch by Gaute 1019 Hope). 1020 1021 20131123 1022 + minor fix for CF_GCC_WARNINGS to special-case options which are not 1023 recognized by clang. 1024 1025 20131116 1026 + add special case to configure script to move _XOPEN_SOURCE_EXTENDED 1027 definition from CPPFLAGS to CFLAGS if it happens to be needed for 1028 Solaris, because g++ errors with that definition (report by 1029 Jean-Pierre Flori, Sage #15268). 1030 + correct logic in infocmp's -i option which was intended to ignore 1031 strings which correspond to function-keys as candidates for piecing 1032 together initialization- or reset-strings. The problem dates to 1033 1.9.7a, but was overlooked until changes in -Wlogical-op warnings for 1034 gcc 4.8 (report by David Binderman). 1035 + updated CF_GCC_WARNINGS to documented options for gcc 4.9.0, moving 1036 checks for -Wextra and -Wdeclaration-after-statement into the macro, 1037 and adding checks for -Wignored-qualifiers, -Wlogical-op and 1038 -Wvarargs 1039 + updated CF_CURSES_UNCTRL_H and CF_SHARED_OPTS macros from ongoing 1040 work on cdk. 1041 + update config.sub from 1042 1043 1044 20131110 1045 + minor cleanup of terminfo.tail 1046 1047 20131102 1048 + use TS extension to describe xterm's title-escapes -TD 1049 + modify terminator and nsterm-s to use xterm+sl-twm building block -TD 1050 + update hurd.ti, add xenl to reflect 2011-03-06 change in 1051 1052 (Debian #727119). 1053 + simplify pfkey expression in ansi.sys -TD 1054 1055 20131027 1056 + correct/simplify ifdef's for cur_term versus broken-linker and 1057 reentrant options (report by Jean-Pierre Flori, cf: 20090530). 1058 + modify release/version combinations in test build-scripts to make 1059 them more consistent with other packages. 1060 1061 20131019 1062 + add nc_mingw.h to installed headers for MinGW port; needed for 1063 compiling ncurses-examples. 1064 + add rpm-script for testing cross-compile of ncurses-examples. 1065 1066 20131014 1067 + fix new typo in CF_ADA_INCLUDE_DIRS macro (report by Roumen Petrov). 1068 1069 20131012 1070 + fix a few compiler warnings in progs and test. 1071 + minor fix to package/debian-mingw/rules, do not strip dll's. 1072 + minor fixes to configure script for empty $prefix, e.g., when doing 1073 cross-compiles to MinGW. 1074 + add script for building test-packages of binaries cross-compiled to 1075 MinGW using NSIS. 1076 1077 20131005 1078 + minor fixes for ncurses-example package and makefile. 1079 + add scripts for test-builds of cross-compiler packages for ncurses6 1080 to MinGW. 1081 1082 20130928 1083 + some build-fixes for ncurses-examples with NetBSD-6.0 curses, though 1084 it lacks some common functions such as use_env() which is not yet 1085 addressed. 1086 + build-fix and some compiler warning fixes for ncurses-examples with 1087 OpenBSD 5.3 1088 + fix a possible null-pointer reference in a trace message from newterm. 1089 + quiet a few warnings from NetBSD 6.0 namespace pollution by 1090 nonstandard popcount() function in standard strings.h header. 1091 + ignore g++ 4.2.1 warnings for "-Weffc++" in c++/cursesmain.cc 1092 + fix a few overlooked places for --enable-string-hacks option. 1093 1094 20130921 1095 + fix typo in curs_attr.3x (patch by Sven Joachim, cf: 20130831). 1096 + build-fix for --with-shared option for DragonFly and FreeBSD (report 1097 by Rong-En Fan, cf: 20130727). 1098 1099 20130907 1100 + build-fixes for MSYS for two test-programs (patches by Ray Donnelly, 1101 Alexey Pavlov). 1102 + revert change to two of the dpkg format files, to work with dpkg 1103 before/after Debian #700177. 1104 + fix gcc -Wconversion warning in wattr_get() macro. 1105 + add msys and msysdll to known host/configuration types (patch by 1106 Alexey Pavlov). 1107 + modify CF_RPATH_HACK configure macro to not rely upon "-u" option 1108 of sort, improving portability. 1109 + minor improvements for test-programs from reviewing Solaris port. 1110 + update config.guess, config.sub from 1111 1112 1113 20130831 1114 + modify test/ncurses.c b/B tests to display lines only for the 1115 attributes which a given terminal supports, to make room for an 1116 italics test. 1117 + completed ncv table in terminfo.tail; it did not list the wide 1118 character codes listed in X/Open Curses issue 7. 1119 + add A_ITALIC extension (prompted by discussion with Egmont Koblinger). 1120 1121 20130824 1122 + fix some gcc 4.8 -Wconversion warnings. 1123 + change format of dpkg test-scripts to quilted to work around bug 1124 introduced by Debian #700177. 1125 + discard cached keyname() values if meta() is changed after a value 1126 was cached using (report by Kurban Mallachiev). 1127 1128 20130816 1129 + add checks in tic to warn about terminals which lack cursor 1130 addressing, capabilities or having those, are marked as hard_copy or 1131 generic_type. 1132 + use --without-progs in mingw-ncurses rpm. 1133 + split out _nc_init_termtype() from alloc_entry.c to use in MinGW 1134 port when tic and other programs are not needed. 1135 1136 20130805 1137 + minor fixes to the --disable-overwrite logic, to ensure that the 1138 configured $(includedir) is not cancelled by the mingwxx-filesystem 1139 rpm macros. 1140 + add --disable-db-install configure option, to simplify building 1141 cross-compile support packages. 1142 + add mingw-ncurses.spec file, for testing cross-compiles. 1143 1144 20130727 1145 + improve configure macros from ongoing work on cdk, dialog, xterm: 1146 + CF_ADD_LIB_AFTER - fix a problem with -Wl options 1147 + CF_RPATH_HACK - add missing result-message 1148 + CF_SHARED_OPTS - modify to use $rel_builddir in cygwin and mingw 1149 dll symbols (which can be overridden) rather than explicit "../". 1150 + CF_SHARED_OPTS - modify NetBSD and DragonFly symbols to use ${CC} 1151 rather than ${LD} to improve rpath support. 1152 + CF_SHARED_OPTS - add a symbol to denote the temporary files that 1153 are created by the macro, to simplify clean-rules. 1154 + CF_X_ATHENA - trim extra libraries to work with -Wl,--as-needed 1155 + fix a regression in hashed-database support for NetBSD, which uses 1156 the key-size differently from other implementations (cf: 20121229). 1157 1158 20130720 1159 + further improvements for setupterm manpage, clarifying the 1160 initialization of cur_term. 1161 1162 20130713 1163 + improve manpages for initscr and setupterm. 1164 + minor compiler-warning fixes 1165 1166 20130706 1167 + add fallback defs for <inttypes.h> and <stdint.h> (cf: 20120225). 1168 + add check for size of wchar_t, use that to suppress a chunk of 1169 wcwidth.h in MinGW port. 1170 + quiet linker warnings for MinGW cross-compile with dll's using the 1171 --enable-auto-import flag. 1172 + add ncurses.map rule to ncurses/Makefile to help diagnose symbol 1173 table issues. 1174 1175 20130622 1176 + modify the clear program to take into account the E3 extended 1177 capability to clear the terminal's scrollback buffer (patch by 1178 Miroslav Lichvar, Redhat #815790). 1179 + clarify in resizeterm manpage that LINES and COLS are updated. 1180 + updated ansi example in terminfo.tail, correct misordered example 1181 of sgr. 1182 + fix other doclifter warnings for manpages 1183 + remove unnecessary ".ta" in terminfo.tail, add missing ".fi" 1184 (patch by Eric Raymond). 1185 1186 20130615 1187 + minor changes to some configure macros to make them more reusable. 1188 + fixes for tabs program (prompted by report by Nick Andrik). 1189 + corrected logic in command-line parsing of -a and -c predefined 1190 tab-lists options. 1191 + allow "-0" and "-8" options to be combined with others, e.g.,"-0d". 1192 + make warning messages more consistent with the other utilities by 1193 not printing the full pathname of the program. 1194 + add -V option for consistency with other utilities. 1195 + fix off-by-one in columns for tabs program when processing an option 1196 such as "-5" (patch by Nick Andrik). 1197 1198 20130608 1199 + add to test/demo_forms.c examples of using the menu-hooks as well 1200 as showing how the menu item user-data can be used to pass a callback 1201 function pointer. 1202 + add test/dots_termcap.c 1203 + remove setupterm call from test/demo_termcap.c 1204 + build-fix if --disable-ext-funcs configure option is used. 1205 + modified test/edit_field.c and test/demo_forms.c to move the lengths 1206 into a user-data structure, keeping the original string for later 1207 expansion to free-format input/out demo. 1208 + modified test/demo_forms.c to load data from file. 1209 + added note to clarify Terminal.app's non-emulation of the various 1210 terminal types listed in the preferences dialog -TD 1211 + fix regression in error-reporting in lib_setup.c (Debian #711134, 1212 cf: 20121117). 1213 + build-fix for a case where --enable-broken_linker and 1214 --enable-reentrant options are combined (report by George R Goffe). 1215 1216 20130525 1217 + modify mvcur() to distinguish between internal use by the ncurses 1218 library, and external callers, preventing it from reading the content 1219 of the screen which is only nonblank when curses calls have updated 1220 it. This makes test/dots_mvcur.c avoid painting colored cells in 1221 the left margin of the display. 1222 + minor fix to test/dots_mvcur.c 1223 + move configured symbols USE_DATABASE and USE_TERMCAP to term.h as 1224 NCURSES_USE_DATABASE and NCURSES_USE_TERMCAP to allow consistent 1225 use of these symbols in term_entry.h 1226 1227 20130518 1228 + corrected ifdefs in test/testcurs.c to allow comparison of mouse 1229 interface versus pdcurses (cf: 20130316). 1230 + add pow() to configure-check for math library, needed since 1231 20121208 for test/hanoi (Debian #708056). 1232 + regenerated html manpages. 1233 + update doctype used for html documentation. 1234 1235 20130511 1236 + move nsterm-related entries out of "obsolete" section to more 1237 plausible "ansi consoles" -TD 1238 + additional cleanup of table-of-contents by reordering -TD 1239 + revise fix for check for 8-bit value in _nc_insert_ch(); prior fix 1240 prevented inserts when video attributes were attached to the data 1241 (cf: 20121215) (Redhat #959534). 1242 1243 20130504 1244 + fixes for issues found by Coverity: 1245 + correct FNKEY() macro in progs/dump_entry.c, allowing kf11-kf63 to 1246 display when infocmp's -R option is used for HP or AIX subsets. 1247 + fix dead-code issue with test/movewindow.c 1248 + improve limited-checking in _nc_read_termtype(). 1249 1250 20130427 1251 + fix clang 3.2 warning in progs/dump_entry.c 1252 + drop AC_TYPE_SIGNAL check; ncurses relies on c89 and later. 1253 1254 20130413 1255 + add MinGW to cases where ncurses installs by default into /usr 1256 (prompted by discussion with Daniel Silva Ferreira). 1257 + add -D option to infocmp's usage-message (patch by Miroslav Lichvar). 1258 + add a missing 'int' type for main function in configure check for 1259 type of bool variable, to work with clang 3.2 (report by Dmitri 1260 Gribenko). 1261 + improve configure check for static_cast, to work with clang 3.2 1262 (report by Dmitri Gribenko). 1263 + re-order rule for demo.o and macros defining header dependencies in 1264 c++/Makefile.in to accommodate gmake (report by Dmitri Gribenko). 1265 1266 20130406 1267 + improve parameter checking in copywin(). 1268 + modify configure script to work around OS X's "libtool" program, to 1269 choose glibtool instead. At the same time, chance the autoconf macro 1270 to look for a "tool" rather than a "prog", to help with potential use 1271 in cross-compiling. 1272 + separate the rpath usage for c++ library from demo program 1273 (Redhat #911540) 1274 + update/correct header-dependencies in c++ makefile (report by Werner 1275 Fink). 1276 + add --with-cxx-shared to dpkg-script, as done for rpm-script. 1277 1278 20130324 1279 + build-fix for libtool configuration (reports by Daniel Silva Ferreira 1280 and Roumen Petrov). 1281 1282 20130323 1283 + build-fix for OS X, to handle changes for --with-cxx-shared feature 1284 (report by Christian Ebert). 1285 + change initialization for vt220, similar entries for consistency 1286 with cursor-key strings (NetBSD #47674) -TD 1287 + further improvements to linux-16color (Benjamin Sittler) 1288 1289 20130316 1290 + additional fix for tic.c, to allocate missing buffer space. 1291 + eliminate configure-script warnings for gen-pkgconfig.in 1292 + correct typo in sgr string for sun-color, 1293 add bold for consistency with sgr, 1294 change smso for consistency with sgr -TD 1295 + correct typo in sgr string for terminator -TD 1296 + add blink to the attributes masked by ncv in linux-16color (report 1297 by Benjamin Sittler) 1298 + improve warning message from post-load checking for missing "%?" 1299 operator by tic/infocmp by showing the entry name and capability. 1300 + minor formatting improvement to tic/infocmp -f option to ensure 1301 line split after "%;". 1302 + amend scripting for --with-cxx-shared option to handle the debug 1303 library "libncurses++_g.a" (report by Sven Joachim). 1304 1305 20130309 1306 + amend change to toe.c for reading from /dev/zero, to ensure that 1307 there is a buffer for the temporary filename (cf: 20120324). 1308 + regenerated html manpages. 1309 + fix typo in terminfo.head (report by Sven Joachim, cf: 20130302). 1310 + updated some autoconf macros: 1311 + CF_ACVERSION_CHECK, from byacc 1.9 20130304 1312 + CF_INTEL_COMPILER, CF_XOPEN_SOURCE from luit 2.0-20130217 1313 + add configure option --with-cxx-shared to permit building 1314 libncurses++ as a shared library when using g++, e.g., the same 1315 limitations as libtool but better integrated with the usual build 1316 configuration (Redhat #911540). 1317 + modify MKkey_defs.sh to filter out build-path which was unnecessarily 1318 shown in curses.h (Debian #689131). 1319 1320 20130302 1321 + add section to terminfo manpage discussing user-defined capabilities. 1322 + update manpage description of NCURSES_NO_SETBUF, explaining why it 1323 is obsolete. 1324 + add a check in waddch_nosync() to ensure that tab characters are 1325 treated as control characters; some broken locales claim they are 1326 printable. 1327 + add some traces to the Windows console driver. 1328 + initialize a temporary array in _nc_mbtowc, needed for some cases 1329 of raw input in MinGW port. 1330 1331 20130218 1332 + correct ifdef on change to lib_twait.c (report by Werner Fink). 1333 + update config.guess, config.sub 1334 1335 20130216 1336 + modify test/testcurs.c to work with mouse for ncurses as it does for 1337 pdcurses. 1338 + modify test/knight.c to work with mouse for pdcurses as it does for 1339 ncurses. 1340 + modify internal recursion in wgetch() which handles cooked mode to 1341 check if the call to wgetnstr() returned an error. This can happen 1342 when both nocbreak() and nodelay() are set, for instance (report by 1343 Nils Christopher Brause) (cf: 960418). 1344 + fixes for issues found by Coverity: 1345 + add a check for valid position in ClearToEOS() 1346 + fix in lib_twait.c when --enable-wgetch-events is used, pointer 1347 use after free. 1348 + improve a limit-check in make_hash.c 1349 + fix a memory leak in hashed_db.c 1350 1351 20130209 1352 + modify test/configure script to make it simpler to override names 1353 of curses-related libraries, to help with linking with pdcurses in 1354 MinGW environment. 1355 + if the --with-terminfo-dirs configure option is not used, there is 1356 no corresponding compiled-in value for that. Fill in "no default 1357 value" for that part of the manpage substitution. 1358 1359 20130202 1360 + correct initialization in knight.c which let it occasionally make 1361 an incorrect move (cf: 20001028). 1362 + improve documentation of the terminfo/termcap search path. 1363 1364 20130126 1365 + further fixes to mvcur to pass callback function (cf: 20130112), 1366 needed to make test/dots_mvcur work. 1367 + reduce calls to SetConsoleActiveScreenBuffer in win_driver.c, to 1368 help reduce flicker. 1369 + modify configure script to omit "+b" from linker options for very 1370 old HP-UX systems (report by Dennis Grevenstein) 1371 + add HP-UX workaround for missing EILSEQ on old HP-UX systems (patch 1372 by Dennis Grevenstein). 1373 + restore memmove/strdup support for antique systems (request by 1374 Dennis Grevenstein). 1375 + change %l behavior in tparm to push the string length onto the stack 1376 rather than saving the formatted length into the output buffer 1377 (report by Roy Marples, cf: 980620). 1378 1379 20130119 1380 + fixes for issues found by Coverity: 1381 + fix memory leak in safe_sprintf.c 1382 + add check for return-value in tty_update.c 1383 + correct initialization for -s option in test/view.c 1384 + add check for numeric overflow in lib_instr.c 1385 + improve error-checking in copywin 1386 + add advice in infocmp manpage for termcap users (Debian #698469). 1387 + add "-y" option to test/demo_termcap and test/demo_terminfo to 1388 demonstrate behavior with/without extended capabilities. 1389 + updated termcap manpage to document legacy termcap behavior for 1390 matching capability names. 1391 + modify name-comparison for tgetstr, etc., to accommodate legacy 1392 applications as well as to improve compatbility with BSD 4.2 1393 termcap implementations (Debian #698299) (cf: 980725). 1394 1395 20130112 1396 + correct prototype in manpage for vid_puts. 1397 + drop ncurses/tty/tty_display.h, ncurses/tty/tty_input.h, since they 1398 are unused in the current driver model. 1399 + modify mvcur to use stdout except when called within the ncurses 1400 library. 1401 + modify vidattr and vid_attr to use stdout as documented in manpage. 1402 + amend changes made to buffering in 20120825 so that the low-level 1403 putp() call uses stdout rather than ncurses' internal buffering. 1404 The putp_sp() call does the same, for consistency (Redhat #892674). 1405 1406 20130105 1407 + add "-s" option to test/view.c to allow it to start in single-step 1408 mode, reducing size of trace files when it is used for debugging 1409 MinGW changes. 1410 + revert part of 20121222 change to tinfo_driver.c 1411 + add experimental logic in win_driver.c to improve optimization of 1412 screen updates. This does not yet work with double-width characters, 1413 so it is ifdef'd out for the moment (prompted by report by Erwin 1414 Waterlander regarding screen flicker). 1415 1416 20121229 1417 + fix coverity warnings regarding copying into fixed-size buffers. 1418 + add throw-declarations in the c++ binding per Coverity warning. 1419 + minor changes to new-items for consistent reference to bug-report 1420 numbers. 1421 1422 20121222 1423 + add *.dSYM directories to clean-rule in ncurses directory makefile, 1424 for Mac OS builds. 1425 + add a configure check for gcc option -no-cpp-precomp, which is not 1426 available in all Mac OS X configurations (report by Andras Salamon, 1427 cf: 20011208). 1428 + improve 20021221 workaround for broken acs, handling a case where 1429 that ACS_xxx character is not in the acsc string but there is a known 1430 wide-character which can be used. 1431 1432 20121215 1433 + fix several warnings from clang 3.1 --analyze, includes correcting 1434 a null-pointer check in _nc_mvcur_resume. 1435 + correct display of double-width characters with MinGW port (report 1436 by Erwin Waterlander). 1437 + replace MinGW's wcrtomb(), fixing a problem with _nc_viscbuf 1438 > fixes based on Coverity report: 1439 + correct coloring in test/bs.c 1440 + correct check for 8-bit value in _nc_insert_ch(). 1441 + remove dead code in progs/tset.c, test/linedata.h 1442 + add null-pointer checks in lib_tracemse.c, panel.priv.h, and some 1443 test-programs. 1444 1445 20121208 1446 + modify test/knight.c to show the number of choices possible for 1447 each position in automove option, e.g., to allow user to follow 1448 Warnsdorff's rule to solve the puzzle. 1449 + modify test/hanoi.c to show the minimum number of moves possible for 1450 the given number of tiles (prompted by patch by Lucas Gioia). 1451 > fixes based on Coverity report: 1452 + remove a few redundant checks. 1453 + correct logic in test/bs.c, when randomly placing a specific type of 1454 ship. 1455 + check return value from remove/unlink in tic. 1456 + check return value from sscanf in test/ncurses.c 1457 + fix a null dereference in c++/cursesw.cc 1458 + fix two instances of uninitialized variables when configuring for the 1459 terminal driver. 1460 + correct scope of variable used in SetSafeOutcWrapper macro. 1461 + set umask when calling mkstemp in tic. 1462 + initialize wbkgrndset() temporary variable when extended-colors are 1463 used. 1464 1465 20121201 1466 + also replace MinGW's wctomb(), fixing a problem with setcchar(). 1467 + modify test/view.c to load UTF-8 when built with MinGW by using 1468 regular win32 API because the MinGW functions mblen() and mbtowc() 1469 do not work. 1470 1471 20121124 1472 + correct order of color initialization versus display in some of the 1473 test-programs, e.g., test_addstr.c 1474 > fixes based on Coverity report: 1475 + delete windows on exit from some of the test-programs. 1476 1477 20121117 1478 > fixes based on Coverity report: 1479 + add missing braces around FreeAndNull in two places. 1480 + various fixes in test/ncurses.c 1481 + improve limit-checks in tinfo/make_hash.c, tinfo/read_entry.c 1482 + correct malloc size in progs/infocmp.c 1483 + guard against negative array indices in test/knight.c 1484 + fix off-by-one limit check in test/color_name.h 1485 + add null-pointer check in progs/tabs.c, test/bs.c, test/demo_forms.c, 1486 test/inchs.c 1487 + fix memory-leak in tinfo/lib_setup.c, progs/toe.c, 1488 test/clip_printw.c, test/demo_menus.c 1489 + delete unused windows in test/chgat.c, test/clip_printw.c, 1490 test/insdelln.c, test/newdemo.c on error-return. 1491 1492 20121110 1493 + modify configure macro CF_INCLUDE_DIRS to put $CPPFLAGS after the 1494 local -I include options in case someone has set conflicting -I 1495 options in $CPPFLAGS (prompted by patch for ncurses/Makefile.in by 1496 Vassili Courzakis). 1497 + modify the ncurses*-config scripts to eliminate relative paths from 1498 the RPATH_LIST variable, e.g., "../lib" as used in installing shared 1499 libraries or executables. 1500 1501 20121102 1502 + realign these related pages: 1503 curs_add_wchstr.3x 1504 curs_addchstr.3x 1505 curs_addstr.3x 1506 curs_addwstr.3x 1507 and fix a long-ago error in curs_addstr.3x which said that a -1 1508 length parameter would only write as much as fit onto one line 1509 (report by Reuben Thomas). 1510 + remove obsolete fallback _nc_memmove() for memmove()/bcopy(). 1511 + remove obsolete fallback _nc_strdup() for strdup(). 1512 + cancel any debug-rpm in package/ncurses.spec 1513 + reviewed vte-2012, reverted most of the change since it was incorrect 1514 based on testing with tack -TD 1515 + un-cancel the initc in vte-256color, since this was implemented 1516 starting with version 0.20 in 2009 -TD 1517 1518 20121026 1519 + improve malloc/realloc checking (prompted by discussion in Redhat 1520 #866989). 1521 + add ncurses test-program as "ncurses6" to the rpm- and dpkg-scripts. 1522 + updated configure macros CF_GCC_VERSION and CF_WITH_PATHLIST. The 1523 first corrects pattern used for Mac OS X's customization of gcc. 1524 1525 20121017 1526 + fix change to _nc_scroll_optimize(), which incorrectly freed memory 1527 (Redhat #866989). 1528 1529 20121013 1530 + add vte-2012, gnome-2012, making these the defaults for vte/gnome 1531 (patch by Christian Persch). 1532 1533 20121006 1534 + improve CF_GCC_VERSION to work around Debian's customization of gcc 1535 --version message. 1536 + improve configure macros as done in byacc: 1537 + drop 2.13 compatibility; use 2.52.xxxx version only since EMX port 1538 has used that for a while. 1539 + add 3rd parameter to AC_DEFINE's to allow autoheader to run, i.e., 1540 for experimental use. 1541 + remove unused configure macros. 1542 + modify configure script and makefiles to quiet new autoconf warning 1543 for LIBS_TO_MAKE variable. 1544 + modify configure script to show $PATH_SEPARATOR variable. 1545 + update config.guess, config.sub 1546 1547 20120922 1548 + modify setupterm to set its copy of TERM to "unknown" if configured 1549 for the terminal driver and TERM was null or empty. 1550 + modify treatment of TERM variable for MinGW port to allow explicit 1551 use of the windows console driver by checking if $TERM is set to 1552 "#win32con" or an abbreviation of that. 1553 + undo recent change to fallback definition of vsscanf() to build with 1554 older Solaris compilers (cf: 20120728). 1555 1556 20120908 1557 + add test-screens to test/ncurses to show 256-characters at a time, 1558 to help with MinGW port. 1559 1560 20120903 1561 + simplify varargs logic in lib_printw.c; va_copy is no longer needed 1562 there. 1563 + modifications for MinGW port to make wide-character display usable. 1564 1565 20120902 1566 + regenerate configure script (report by Sven Joachim, cf: 20120901). 1567 1568 20120901 1569 + add a null-pointer check in _nc_flush (cf: 20120825). 1570 + fix a case in _nc_scroll_optimize() where the _oldnums_list array 1571 might not be allocated. 1572 + improve comparisons in configure.in for unset shell variables. 1573 1574 20120826 1575 + increase size of ncurses' output-buffer, in case of very small 1576 initial screen-sizes. 1577 + fix evaluation of TERMINFO and TERMINFO_DIRS default values as needed 1578 after changes to use --datarootdir (reports by Gabriele Balducci, 1579 Roumen Petrov). 1580 1581 20120825 1582 + change output buffering scheme, using buffer maintained by ncurses 1583 rather than stdio, to avoid problems with SIGTSTP handling (report 1584 by Brian Bloniarz). 1585 1586 20120811 1587 + update autoconf patch to 2.52.20120811, adding --datarootdir 1588 (prompted by discussion with Erwin Waterlander). 1589 + improve description of --enable-reentrant option in README and the 1590 INSTALL file. 1591 + add nsterm-256color, make this the default nsterm -TD 1592 + remove bw from nsterm-bce, per testing with tack -TD 1593 1594 20120804 1595 + update test/configure, adding check for tinfo library. 1596 + improve limit-checks for the getch fifo (report by Werner Fink). 1597 + fix a remaining mismatch between $with_echo and the symbols updated 1598 for CF_DISABLE_ECHO affecting parameters for mk-2nd.awk (report by 1599 Sven Joachim, cf: 20120317). 1600 + modify followup check for pkg-config's library directory in the 1601 --enable-pc-files option to validate syntax (report by Sven Joachim, 1602 cf: 20110716). 1603 1604 20120728 1605 + correct path for ncurses_mingw.h in include/headers, in case build 1606 is done outside source-tree (patch by Roumen Petrov). 1607 + modify some older xterm entries to align with xterm source -TD 1608 + separate "xterm-old" alias from "xterm-r6" -TD 1609 + add E3 extended capability to xterm-basic and putty -TD 1610 + parenthesize parameters of other macros in curses.h -TD 1611 + parenthesize parameter of COLOR_PAIR and PAIR_NUMBER in curses.h 1612 in case it happens to be a comma-expression, etc. (patch by Nick 1613 Black). 1614 1615 20120721 1616 + improved form_request_by_name() and menu_request_by_name(). 1617 + eliminate two fixed-size buffers in toe.c 1618 + extend use_tioctl() to have expected behavior when use_env(FALSE) and 1619 use_tioctl(TRUE) are called. 1620 + modify ncurses test-program, adding -E and -T options to demonstrate 1621 use_env() versus use_tioctl(). 1622 1623 20120714 1624 + add use_tioctl() function (adapted from patch by Werner Fink, 1625 Novell #769788): 1626 1627 20120707 1628 + add ncurses_mingw.h to installed headers (prompted by patch by 1629 Juergen Pfeifer). 1630 + clarify return-codes from wgetch() in response to SIGWINCH (prompted 1631 by Novell #769788). 1632 + modify resizeterm() to always push a KEY_RESIZE onto the fifo, even 1633 if screensize is unchanged. Modify _nc_update_screensize() to push a 1634 KEY_RESIZE if there was a SIGWINCH, even if it does not call 1635 resizeterm(). These changes eliminate the case where a SIGWINCH is 1636 received, but ERR returned from wgetch or wgetnstr because the screen 1637 dimensions did not change (Novell #769788). 1638 1639 20120630 1640 + add --enable-interop to sample package scripts (suggested by Juergen 1641 Pfeifer). 1642 + update CF_PATH_SYNTAX macro, from mawk changes. 1643 + modify mk-0th.awk to allow for generating llib-ltic, etc., though 1644 some work is needed on cproto to work with lib_gen.c to update 1645 llib-lncurses. 1646 + remove redundant getenv() cal in database-iterator leftover from 1647 cleanup in 20120622 changes (report by Sven Joachim). 1648 1649 20120622 1650 + add -d, -e and -q options to test/demo_terminfo and test/demo_termcap 1651 + fix caching of environment variables in database-iterator (patch by 1652 Philippe Troin, Redhat #831366). 1653 1654 20120616 1655 + add configure check to distinguish clang from gcc to eliminate 1656 warnings about unused command-line parameters when compiler warnings 1657 are enabled. 1658 + improve behavior when updating terminfo entries which are hardlinked 1659 by allowing for the possibility that an alias has been repurposed to 1660 a new primary name. 1661 + fix some strict compiler warnings based on package scripts. 1662 + further fixes for configure check for working poll (Debian #676461). 1663 1664 20120608 1665 + fix an uninitialized variable in -c/-n logic for infocmp changes 1666 (cf: 20120526). 1667 + corrected fix for building c++ binding with clang 3.0 (report/patch 1668 by Richard Yao, Gentoo #417613, cf: 20110409) 1669 + correct configure check for working poll, fixing the case where stdin 1670 is redirected, e.g., in rpm/dpkg builds (Debian #676461). 1671 + add rpm- and dpkg-scripts, to test those build-environments. 1672 The resulting packages are used only for testing. 1673 1674 20120602 1675 + add kdch1 aka "Remove" to vt220 and vt220-8 entries -TD 1676 + add kdch1, etc., to qvt108 -TD 1677 + add dl1/il1 to some entries based on dl/il values -TD 1678 + add dl to simpleterm -TD 1679 + add consistency-checks in tic for insert-line vs delete-line 1680 controls, and insert/delete-char keys 1681 + correct no-leaks logic in infocmp when doing comparisons, fixing 1682 duplicate free of entries given via the command-line, and freeing 1683 entries loaded from the last-but-one of files specified on the 1684 command-line. 1685 + add kdch1 to wsvt25 entry from NetBSD CVS (reported by David Lord, 1686 analysis by Martin Husemann). 1687 + add cnorm/civis to wsvt25 entry from NetBSD CVS (report/analysis by 1688 Onno van der Linden). 1689 1690 20120526 1691 + extend -c and -n options of infocmp to allow comparing more than two 1692 entries. 1693 + correct check in infocmp for number of terminal names when more than 1694 two are given. 1695 + correct typo in curs_threads.3x (report by Yanhui Shen on 1696 freebsd-hackers mailing list). 1697 1698 20120512 1699 + corrected 'op' for bterm (report by Samuel Thibault) -TD 1700 + modify test/background.c to demonstrate a background character 1701 holding a colored ACS_HLINE. The behavior differs from SVr4 due to 1702 the thick- and double-line extension (cf: 20091003). 1703 + modify handling of acs characters in PutAttrChar to avoid mapping an 1704 unmapped character to a space with A_ALTCHARSET set. 1705 + rewrite vt520 entry based on vt420 -TD 1706 1707 20120505 1708 + remove p6 (bold) from opus3n1+ for consistency -TD 1709 + remove acs stuff from env230 per clues in Ingres termcap -TD 1710 + modify env230 sgr/sgr0 to match other capabilities -TD 1711 + modify smacs/rmacs in bq300-8 to match sgr/sgr0 -TD 1712 + make sgr for dku7202 agree with other caps -TD 1713 + make sgr for ibmpc agree with other caps -TD 1714 + make sgr for tek4107 agree with other caps -TD 1715 + make sgr for ndr9500 agree with other caps -TD 1716 + make sgr for sco-ansi agree with other caps -TD 1717 + make sgr for d410 agree with other caps -TD 1718 + make sgr for d210 agree with other caps -TD 1719 + make sgr for d470c, d470c-7b agree with other caps -TD 1720 + remove redundant AC_DEFINE for NDEBUG versus Makefile definition. 1721 + fix a back-link in _nc_delink_entry(), which is needed if ncurses is 1722 configured with --enable-termcap and --disable-getcap. 1723 1724 20120428 1725 + fix some inconsistencies between vt320/vt420, e.g., cnorm/civis -TD 1726 + add eslok flag to dec+sl -TD 1727 + dec+sl applies to vt320 and up -TD 1728 + drop wsl width from xterm+sl -TD 1729 + reuse xterm+sl in putty and nsca-m -TD 1730 + add ansi+tabs to vt520 -TD 1731 + add ansi+enq to vt220-vt520 -TD 1732 + fix a compiler warning in example in ncurses-intro.doc (Paul Waring). 1733 + added paragraph in keyname manpage telling how extended capabilities 1734 are interpreted as key definitions. 1735 + modify tic's check of conflicting key definitions to include extended 1736 capability strings in addition to the existing check on predefined 1737 keys. 1738 1739 20120421 1740 + improve cleanup of temporary files in tic using atexit(). 1741 + add msgr to vt420, similar DEC vtXXX entries -TD 1742 + add several missing vt420 capabilities from vt220 -TD 1743 + factor out ansi+pp from several entries -TD 1744 + change xterm+sl and xterm+sl-twm to include only the status-line 1745 capabilities and not "use=xterm", making them more generally useful 1746 as building-blocks -TD 1747 + add dec+sl building block, as example -TD 1748 1749 20120414 1750 + add XT to some terminfo entries to improve usefulness for other 1751 applications than screen, which would like to pretend that xterm's 1752 title is a status-line. -TD 1753 + change use-clauses in ansi-mtabs, hp2626, and hp2622 based on review 1754 of ordering and overrides -TD 1755 + add consistency check in tic for screen's "XT" capability. 1756 + add section in terminfo.src summarizing the user-defined capabilities 1757 used in that file -TD 1758 1759 20120407 1760 + fix an inconsistency between tic/infocmp "-x" option; tic omits all 1761 non-standard capabilities, while infocmp was ignoring only the user 1762 definable capabilities. 1763 + improve special case in tic parsing of description to allow it to be 1764 followed by terminfo capabilities. Previously the description had to 1765 be the last field on an input line to allow tic to distinguish 1766 between termcap and terminfo format while still allowing commas to be 1767 embedded in the description. 1768 + correct variable name in gen_edit.sh which broke configurability of 1769 the --with-xterm-kbs option. 1770 + revert 2011-07-16 change to "linux" alias, return to "linux2.2" -TD 1771 + further amend 20110910 change, providing for configure-script 1772 override of the "linux" terminfo entry to install and changing the 1773 default for that to "linux2.2" (Debian #665959). 1774 1775 20120331 1776 + update Ada95/configure to use CF_DISABLE_ECHO (cf: 20120317). 1777 + correct order of use-clauses in st-256color -TD 1778 + modify configure script to look for gnatgcc if the Ada95 binding 1779 is built, in preference to the default gcc/cc (suggested by 1780 Nicolas Boulenguez). 1781 + modify configure script to ensure that the same -On option used for 1782 the C compiler in CFLAGS is used for ADAFLAGS rather than simply 1783 using "-O3" (suggested by Nicolas Boulenguez) 1784 1785 20120324 1786 + amend an old fix so that next_char() exits properly for empty files, 1787 e.g., from reading /dev/null (cf: 20080804). 1788 + modify tic so that it can read from the standard input, or from 1789 a character device. Because tic uses seek's, this requires writing 1790 the data to a temporary file first (prompted by remark by Sven 1791 Joachim) (cf: 20000923). 1792 1793 20120317 1794 + correct a check made in lib_napms.c, so that terminfo applications 1795 can again use napms() (cf: 20110604). 1796 + add a note in tic.h regarding required casts for ABSENT_BOOLEAN 1797 (cf: 20040327). 1798 + correct scripting for --disable-echo option in test/configure. 1799 + amend check for missing c++ compiler to work when no error is 1800 reported, and no variables set (cf: 20021206). 1801 + add/use configure macro CF_DISABLE_ECHO. 1802 1803 20120310 1804 + fix some strict compiler warnings for abi6 and 64-bits. 1805 + use begin_va_copy/end_va_copy macros in lib_printw.c (cf: 20120303). 1806 + improve a limit-check in infocmp.c (Werner Fink): 1807 1808 20120303 1809 + minor tidying of terminfo.tail, clarify reason for limitation 1810 regarding mapping of \0 to \200 1811 + minor improvement to _nc_copy_termtype(), using memcpy to replace 1812 loops. 1813 + fix no-leaks checking in test/demo_termcap.c to account for multiple 1814 calls to setupterm(). 1815 + modified the libgpm change to show previous load as a problem in the 1816 debug-trace. 1817 > merge some patches from OpenSUSE rpm (Werner Fink): 1818 + ncurses-5.7-printw.dif, fixes for varargs handling in lib_printw.c 1819 + ncurses-5.7-gpm.dif, do not dlopen libgpm if already loaded by 1820 runtime linker 1821 + ncurses-5.6-fallback.dif, do not free arrays and strings from static 1822 fallback entries 1823 1824 20120228 1825 + fix breakage in tic/infocmp from 20120225 (report by Werner Fink). 1826 1827 20120225 1828 + modify configure script to allow creating dll's for MinGW when 1829 cross-compiling. 1830 + add --enable-string-hacks option to control whether strlcat and 1831 strlcpy may be used. The same issue applies to OpenBSD's warnings 1832 about snprintf, noting that this function is weakly standardized. 1833 + add configure checks for strlcat, strlcpy and snprintf, to help 1834 reduce bogus warnings with OpenBSD builds. 1835 + build-fix for OpenBSD 4.9 to supply consistent intptr_t declaration 1836 (cf:20111231) 1837 + update config.guess, config.sub 1838 1839 20120218 1840 + correct CF_ETIP_DEFINES configure macro, making it exit properly on 1841 the first success (patch by Pierre Labastie). 1842 + improve configure macro CF_MKSTEMP by moving existence-check for 1843 mkstemp out of the AC_TRY_RUN, to help with cross-compiles. 1844 + improve configure macro CF_FUNC_POLL from luit changes to detect 1845 broken implementations, e.g., with Mac OS X. 1846 + add configure option --with-tparm-arg 1847 + build-fix for MinGW cross-compiling, so that make_hash does not 1848 depend on TTY definition (cf: 20111008). 1849 1850 20120211 1851 + make sgr for xterm-pcolor agree with other caps -TD 1852 + make sgr for att5425 agree with other caps -TD 1853 + make sgr for att630 agree with other caps -TD 1854 + make sgr for linux entries agree with other caps -TD 1855 + make sgr for tvi9065 agree with other caps -TD 1856 + make sgr for ncr260vt200an agree with other caps -TD 1857 + make sgr for ncr160vt100pp agree with other caps -TD 1858 + make sgr for ncr260vt300an agree with other caps -TD 1859 + make sgr for aaa-60-dec-rv, aaa+dec agree with other caps -TD 1860 + make sgr for cygwin, cygwinDBG agree with other caps -TD 1861 + add configure option --with-xterm-kbs to simplify configuration for 1862 Linux versus most other systems. 1863 1864 20120204 1865 + improved tic -D option, avoid making target directory and provide 1866 better diagnostics. 1867 1868 20120128 1869 + add mach-gnu (Debian #614316, patch by Samuel Thibault) 1870 + add mach-gnu-color, tweaks to mach-gnu terminfo -TD 1871 + make sgr for sun-color agree with smso -TD 1872 + make sgr for prism9 agree with other caps -TD 1873 + make sgr for icl6404 agree with other caps -TD 1874 + make sgr for ofcons agree with other caps -TD 1875 + make sgr for att5410v1, att4415, att620 agree with other caps -TD 1876 + make sgr for aaa-unk, aaa-rv agree with other caps -TD 1877 + make sgr for avt-ns agree with other caps -TD 1878 + amend fix intended to separate fixups for acsc to allow "tic -cv" to 1879 give verbose warnings (cf: 20110730). 1880 + modify misc/gen-edit.sh to make the location of the tabset directory 1881 consistent with misc/Makefile.in, i.e., using ${datadir}/tabset 1882 (Debian #653435, patch by Sven Joachim). 1883 1884 20120121 1885 + add --with-lib-prefix option to allow configuring for old/new flavors 1886 of OS/2 EMX. 1887 + modify check for gnat version to allow for year, as used in FreeBSD 1888 port. 1889 + modify check_existence() in db_iterator.c to simply check if the 1890 path is a directory or file, according to the need. Checking for 1891 directory size also gives no usable result with OS/2 (cf: 20120107). 1892 + support OS/2 kLIBC (patch by KO Myung-Hun). 1893 1894 20120114 1895 + several improvements to test/movewindow.c (prompted by discussion on 1896 Linux Mint forum): 1897 + modify movement commands to make them continuous 1898 + rewrote the test for mvderwin 1899 + rewrote the test for recursive mvwin 1900 + split-out reusable CF_WITH_NCURSES_ETC macro in test/configure.in 1901 + updated configure macro CF_XOPEN_SOURCE, build-fixes for Mac OS X 1902 and OpenBSD. 1903 + regenerated html manpages. 1904 1905 20120107 1906 + various improvments for MinGW (Juergen Pfeifer): 1907 + modify stat() calls to ignore the st_size member 1908 + drop mk-dlls.sh script. 1909 + change recommended regular expression library. 1910 + modify rain.c to allow for threaded configuraton. 1911 + modify tset.c to allow for case when size-change logic is not used. 1912 1913 20111231 1914 + modify toe's report when -a and -s options are combined, to add 1915 a column showing which entries belong to a given database. 1916 + add -s option to toe, to sort its output. 1917 + modify progs/toe.c, simplifying use of db-iterator results to use 1918 caching improvements from 20111001 and 20111126. 1919 + correct generation of pc-files when ticlib or termlib options are 1920 given to rename the corresponding tic- or tinfo-libraries (report 1921 by Sven Joachim). 1922 1923 20111224 1924 + document a portability issue with tput, i.e., that scripts which work 1925 with ncurses may fail in other implementations that do no parameter 1926 analysis. 1927 + add putty-sco entry -TD 1928 1929 20111217 1930 + review/fix places in manpages where --program-prefix configure option 1931 was not being used. 1932 + add -D option to infocmp, to show the database locations that it 1933 could use. 1934 + fix build for the special case where term-driver, ticlib and termlib 1935 are all enabled. The terminal driver depends on a few features in 1936 the base ncurses library, so tic's dependencies include both ncurses 1937 and termlib. 1938 + fix build work for term-driver when --enable-wgetch-events option is 1939 enabled. 1940 + use <stdint.h> types to fix some questionable casts to void*. 1941 1942 20111210 1943 + modify configure script to check if thread library provides 1944 pthread_mutexattr_settype(), e.g., not provided by Solaris 2.6 1945 + modify configure script to suppress check to define _XOPEN_SOURCE 1946 for IRIX64, since its header files have a conflict versus 1947 _SGI_SOURCE. 1948 + modify configure script to add ".pc" files for tic- and 1949 tinfo-libraries, which were omitted in recent change (cf: 20111126). 1950 + fix inconsistent checks on $PKG_CONFIG variable in configure script. 1951 1952 20111203 1953 + modify configure-check for etip.h dependencies, supplying a temporary 1954 copy of ncurses_dll.h since it is a generated file (prompted by 1955 Debian #646977). 1956 + modify CF_CPP_PARAM_INIT "main" function to work with current C++. 1957 1958 20111126 1959 + correct database iterator's check for duplicate entries 1960 (cf: 20111001). 1961 + modify database iterator to ignore $TERMCAP when it is not an 1962 absolute pathname. 1963 + add -D option to tic, to show the database locations that it could 1964 use. 1965 + improve description of database locations in tic manpage. 1966 + modify the configure script to generate a list of the ".pc" files to 1967 generate, rather than deriving the list from the libraries which have 1968 been built (patch by Mike Frysinger). 1969 + use AC_CHECK_TOOLS in preference to AC_PATH_PROGS when searching for 1970 ncurses*-config, e.g., in Ada95/configure and test/configure (adapted 1971 from patch by Mike Frysinger). 1972 1973 20111119 1974 + remove obsolete/conflicting fallback definition for _POSIX_SOURCE 1975 from curses.priv.h, fixing a regression with IRIX64 and Tru64 1976 (cf: 20110416) 1977 + modify _nc_tic_dir() to ensure that its return-value is nonnull, 1978 i.e., the database iterator was not initialized. This case is needed 1979 to when tic is translating to termcap, rather than loading the 1980 database (cf: 20111001). 1981 1982 20111112 1983 + add pccon entries for OpenBSD console (Alexei Malinin). 1984 + build-fix for OpenBSD 4.9 with gcc 4.2.1, setting _XOPEN_SOURCE to 1985 600 to work around inconsistent ifdef'ing of wcstof between C and 1986 C++ header files. 1987 + modify capconvert script to accept more than exact match on "xterm", 1988 e.g., the "xterm-*" variants, to exclude from the conversion (patch 1989 by Robert Millan). 1990 + add -lc_r as alternative for -lpthread, allows build of threaded code 1991 in older FreeBSD machines. 1992 + build-fix for MirBSD, which fails when either _XOPEN_SOURCE or 1993 _POSIX_SOURCE are defined. 1994 + fix a typo misc/Makefile.in, used in uninstalling pc-files. 1995 1996 20111030 1997 + modify make_db_path() to allow creating "terminfo.db" in the same 1998 directory as an existing "terminfo" directory. This fixes a case 1999 where switching between hashed/filesystem databases would cause the 2000 new hashed database to be installed in the next best location - 2001 root's home directory. 2002 + add variable cf_cv_prog_gnat_correct to those passed to 2003 config.status, fixing a problem with Ada95 builds (cf: 20111022). 2004 + change feature test from _XPG5 to _XOPEN_SOURCE in two places, to 2005 accommodate broken implementations for _XPG6. 2006 + eliminate usage of NULL symbol from etip.h, to reduce header 2007 interdependencies. 2008 + add configure check to decide when to add _XOPEN_SOURCE define to 2009 compiler options, i.e., for Solaris 10 and later (cf: 20100403). 2010 This is a workaround for gcc 4.6, which fails to build the c++ 2011 binding if that symbol is defined by the application, due to 2012 incorrectly combining the corresponding feature test macros 2013 (report by Peter Kruse). 2014 2015 20111022 2016 + correct logic for discarding mouse events, retaining the partial 2017 events used to build up click, double-click, etc, until needed 2018 (cf: 20110917). 2019 + fix configure script to avoid creating unused Ada95 makefile when 2020 gnat does not work. 2021 + cleanup width-related gcc 3.4.3 warnings for 64-bit platform, for the 2022 internal functions of libncurses. The external interface of courses 2023 uses bool, which still produces these warnings. 2024 2025 20111015 2026 + improve description of --disable-tic-depends option to make it 2027 clear that it may be useful whether or not the --with-termlib 2028 option is also given (report by Sven Joachim). 2029 + amend termcap equivalent for set_pglen_inch to use the X/Open 2030 "YI" rather than the obsolete Solaris 2.5 "sL" (cf: 990109). 2031 + improve manpage for tgetent differences from termcap library. 2032 2033 20111008 2034 + moved static data from db_iterator.c to lib_data.c 2035 + modify db_iterator.c for memory-leak checking, fix one leak. 2036 + modify misc/gen-pkgconfig.in to use Requires.private for the parts 2037 of ncurses rather than Requires, as well as Libs.private for the 2038 other library dependencies (prompted by Debian #644728). 2039 2040 20111001 2041 + modify tic "-K" option to only set the strict-flag rather than force 2042 source-output. That allows the same flag to control the parser for 2043 input and output of termcap source. 2044 + modify _nc_getent() to ignore backslash at the end of a comment line, 2045 making it consistent with ncurses' parser. 2046 + restore a special-case check for directory needed to make termcap 2047 text files load as if they were databases (cf: 20110924). 2048 + modify tic's resolution/collision checking to attempt to remove the 2049 conflicting alias from the second entry in the pair, which is 2050 normally following in the source file. Also improved the warning 2051 message to make it simpler to see which alias is the problem. 2052 + improve performance of the database iterator by caching search-list. 2053 2054 20110925 2055 + add a missing "else" in changes to _nc_read_tic_entry(). 2056 2057 20110924 2058 + modify _nc_read_tic_entry() so that hashed-database is checked before 2059 filesystem. 2060 + updated CF_CURSES_LIBS check in test/configure script. 2061 + modify configure script and makefiles to split TIC_ARGS and 2062 TINFO_ARGS into pieces corresponding to LDFLAGS and LIBS variables, 2063 to help separate searches for tic- and tinfo-libraries (patch by Nick 2064 Alcock aka "Nix"). 2065 + build-fix for lib_mouse.c changes (cf: 20110917). 2066 2067 20110917 2068 + fix compiler warning for clang 2.9 2069 + improve merging of mouse events (integrated patch by Damien 2070 Guibouret). 2071 + correct mask-check used in lib_mouse for wheel mouse buttons 4/5 2072 (patch by Damien Guibouret). 2073 2074 20110910 2075 + modify misc/gen_edit.sh to select a "linux" entry which works with 2076 the current kernel rather than assuming it is always "linux3.0" 2077 (cf: 20110716). 2078 + revert a change to getmouse() which had the undesirable side-effect 2079 of suppressing button-release events (report by Damien Guibouret, 2080 cf: 20100102). 2081 + add xterm+kbs fragment from xterm #272 -TD 2082 + add configure option --with-pkg-config-libdir to provide control over 2083 the actual directory into which pc-files are installed, do not use 2084 the pkg-config environment variables (discussion with Frederic L W 2085 Meunier). 2086 + add link to mailing-list archive in announce.html.in, as done in 2087 FAQ (prompted by question by Andrius Bentkus). 2088 + improve manpage install by adjusting the "#include" examples to 2089 show the ncurses-subdirectory used when --disable-overwrite option 2090 is used. 2091 + install an alias for "curses" to the ncurses manpage, tied to the 2092 --with-curses-h configure option (suggested by Reuben Thomas). 2093 2094 20110903 2095 + propagate error-returns from wresize, i.e., the internal 2096 increase_size and decrease_size functions through resize_term (report 2097 by Tim van der Molen, cf: 20020713). 2098 + fix typo in tset manpage (patch by Sven Joachim). 2099 2100 20110820 2101 + add a check to ensure that termcap files which might have "^?" do 2102 not use the terminfo interpretation as "\177". 2103 + minor cleanup of X-terminal emulator section of terminfo.src -TD 2104 + add terminator entry -TD 2105 + add simpleterm entry -TD 2106 + improve wattr_get macros by ensuring that if the window pointer is 2107 null, then the attribute and color values returned will be zero 2108 (cf: 20110528). 2109 2110 20110813 2111 + add substitution for $RPATH_LIST to misc/ncurses-config.in 2112 + improve performance of tic with hashed-database by caching the 2113 database connection, using atexit() to cleanup. 2114 + modify treatment of 2-character aliases at the beginning of termcap 2115 entries so they are not counted in use-resolution, since these are 2116 guaranteed to be unique. Also ignore these aliases when reporting 2117 the primary name of the entry (cf: 20040501) 2118 + double-check gn (generic) flag in terminal descriptions to 2119 accommodate old/buggy termcap databases which misused that feature. 2120 + minor fixes to _nc_tgetent(), ensure buffer is initialized even on 2121 error-return. 2122 2123 20110807 2124 + improve rpath fix from 20110730 by ensuring that the new $RPATH_LIST 2125 variable is defined in the makefiles which use it. 2126 + build-fix for DragonFlyBSD's pkgsrc in test/configure script. 2127 + build-fixes for NetBSD 5.1 with termcap support enabled. 2128 + corrected k9 in dg460-ansi, add other features based on manuals -TD 2129 + improve trimming of whitespace at the end of terminfo/termcap output 2130 from tic/infocmp. 2131 + when writing termcap source, ensure that colons in the description 2132 field are translated to a non-delimiter, i.e., "=". 2133 + add "-0" option to tic/infocmp, to make the termcap/terminfo source 2134 use a single line. 2135 + add a null-pointer check when handling the $CC variable. 2136 2137 20110730 2138 + modify configure script and makefiles in c++ and progs to allow the 2139 directory used for rpath option to be overridden, e.g., to work 2140 around updates to the variables used by tic during an install. 2141 + add -K option to tic/infocmp, to provide stricter BSD-compatibility 2142 for termcap output. 2143 + add _nc_strict_bsd variable in tic library which controls the 2144 "strict" BSD termcap compatibility from 20110723, plus these 2145 features: 2146 + allow escapes such as "\8" and "\9" when reading termcap 2147 + disallow "\a", "\e", "\l", "\s" and "\:" escapes when reading 2148 termcap files, passing through "a", "e", etc. 2149 + expand "\:" as "\072" on output. 2150 + modify _nc_get_token() to reset the token's string value in case 2151 there is a string-typed token lacking the "=" marker. 2152 + fix a few memory leaks in _nc_tgetent. 2153 + fix a few places where reading from a termcap file could refer to 2154 freed memory. 2155 + add an overflow check when converting terminfo/termcap numeric 2156 values, since terminfo stores those in a short, and they must be 2157 positive. 2158 + correct internal variables used for translating to termcap "%>" 2159 feature, and translating from termcap %B to terminfo, needed by 2160 tctest (cf: 19991211). 2161 + amend a minor fix to acsc when loading a termcap file to separate it 2162 from warnings needed for tic (cf: 20040710) 2163 + modify logic in _nc_read_entry() and _nc_read_tic_entry() to allow 2164 a termcap file to be handled via TERMINFO_DIRS. 2165 + modify _nc_infotocap() to include non-mandatory padding when 2166 translating to termcap. 2167 + modify _nc_read_termcap_entry(), passing a flag in the case where 2168 getcap is used, to reduce interactive warning messages. 2169 2170 20110723 2171 + add a check in start_color() to limit color-pairs to 256 when 2172 extended colors are not supported (patch by David Benjamin). 2173 + modify setcchar to omit no-longer-needed OR'ing of color pair in 2174 the SetAttr() macro (patch by David Benjamin). 2175 + add kich1 to sun terminfo entry (Yuri Pankov) 2176 + use bold rather than reverse for smso in sun-color terminfo entry 2177 (Yuri Pankov). 2178 + improve generation of termcap using tic/infocmp -C option, e.g., 2179 to correspond with 4.2BSD (prompted by discussion with Yuri Pankov 2180 regarding Schilling's test program): 2181 + translate %02 and %03 to %2 and %3 respectively. 2182 + suppress string capabilities which use %s, not supported by tgoto 2183 + use \040 rather than \s 2184 + expand null characters as \200 rather than \0 2185 + modify configure script to support shared libraries for DragonFlyBSD. 2186 2187 20110716 2188 + replace an assert() in _nc_Free_Argument() with a regular null 2189 pointer check (report/analysis by Franjo Ivancic). 2190 + modify configure --enable-pc-files option to take into account the 2191 PKG_CONFIG_PATH variable (report by Frederic L W Meunier). 2192 + add/use xterm+tmux chunk from xterm #271 -TD 2193 + resync xterm-new entry from xterm #271 -TD 2194 + add E3 extended capability to linux-basic (Miroslav Lichvar) 2195 + add linux2.2, linux2.6, linux3.0 entries to give context for E3 -TD 2196 + add SI/SO change to linux2.6 entry (Debian #515609) -TD 2197 + fix inconsistent tabset path in pcmw (Todd C. Miller). 2198 + remove a backslash which continued comment, obscuring altos3 2199 definition with OpenBSD toolset (Nicholas Marriott). 2200 2201 20110702 2202 + add workaround from xterm #271 changes to ensure that compiler flags 2203 are not used in the $CC variable. 2204 + improve support for shared libraries, tested with AIX 5.3, 6.1 and 2205 7.1 with both gcc 4.2.4 and cc. 2206 + modify configure checks for AIX to include release 7.x 2207 + add loader flags/libraries to libtool options so that dynamic loading 2208 works properly, adapted from ncurses-5.7-ldflags-with-libtool.patch 2209 at gentoo prefix repository (patch by Michael Haubenwallner). 2210 2211 20110626 2212 + move include of nc_termios.h out of term_entry.h, since the latter 2213 is installed, e.g., for tack while the former is not (report by 2214 Sven Joachim). 2215 2216 20110625 2217 + improve cleanup() function in lib_tstp.c, using _exit() rather than 2218 exit() and checking for SIGTERM rather than SIGQUIT (prompted by 2219 comments forwarded by Nicholas Marriott). 2220 + reduce name pollution from term.h, moving fallback #define's for 2221 tcgetattr(), etc., to new private header nc_termios.h (report by 2222 Sergio NNX). 2223 + two minor fixes for tracing (patch by Vassili Courzakis). 2224 + improve trace initialization by starting it in use_env() and 2225 ripoffline(). 2226 + review old email, add details for some changelog entries. 2227 2228 20110611 2229 + update minix entry to minix 3.2 (Thomas Cort). 2230 + fix a strict compiler warning in change to wattr_get (cf: 20110528). 2231 2232 20110604 2233 + fixes for MirBSD port: 2234 + set default prefix to /usr. 2235 + add support for shared libraries in configure script. 2236 + use S_ISREG and S_ISDIR consistently, with fallback definitions. 2237 + add a few more checks based on ncurses/link_test. 2238 + modify MKlib_gen.sh to handle sp-funcs renaming of NCURSES_OUTC type. 2239 2240 20110528 2241 + add case to CF_SHARED_OPTS for Interix (patch by Markus Duft). 2242 + used ncurses/link_test to check for behavior when the terminal has 2243 not been initialized and when an application passes null pointers 2244 to the library. Added checks to cover this (prompted by Redhat 2245 #707344). 2246 + modify MKlib_gen.sh to make its main() function call each function 2247 with zero parameters, to help find inconsistent checking for null 2248 pointers, etc. 2249 2250 20110521 2251 + fix warnings from clang 2.7 "--analyze" 2252 2253 20110514 2254 + compiler-warning fixes in panel and progs. 2255 + modify CF_PKG_CONFIG macro, from changes to tin -TD 2256 + modify CF_CURSES_FUNCS configure macro, used in test directory 2257 configure script: 2258 + work around (non-optimizer) bug in gcc 4.2.1 which caused 2259 test-expression to be omitted from executable. 2260 + force the linker to see a link-time expression of a symbol, to 2261 help work around weak-symbol issues. 2262 2263 20110507 2264 + update discussion of MKfallback.sh script in INSTALL; normally the 2265 script is used automatically via the configured makefiles. However 2266 there are still occasions when it might be used directly by packagers 2267 (report by Gunter Schaffler). 2268 + modify misc/ncurses-config.in to omit the "-L" option from the 2269 "--libs" output if the library directory is /usr/lib. 2270 + change order of tests for curses.h versus ncurses.h headers in the 2271 configure scripts for Ada95 and test-directories, to look for 2272 ncurses.h, from fixes to tin -TD 2273 + modify ncurses/tinfo/access.c to account for Tandem's root uid 2274 (report by Joachim Schmitz). 2275 2276 20110430 2277 + modify rules in Ada95/src/Makefile.in to ensure that the PIC option 2278 is not used when building a static library (report by Nicolas 2279 Boulenguez): 2280 + Ada95 build-fix for big-endian architectures such as sparc. This 2281 undoes one of the fixes from 20110319, which added an "Unused" member 2282 to representation clauses, replacing that with pragmas to suppress 2283 warnings about unused bits (patch by Nicolas Boulenguez). 2284 2285 20110423 2286 + add check in test/configure for use_window, use_screen. 2287 + add configure-checks for getopt's variables, which may be declared 2288 as different types on some Unix systems. 2289 + add check in test/configure for some legacy curses types of the 2290 function pointer passed to tputs(). 2291 + modify init_pair() to accept -1's for color value after 2292 assume_default_colors() has been called (Debian #337095). 2293 + modify test/background.c, adding commmand-line options to demonstrate 2294 assume_default_colors() and use_default_colors(). 2295 2296 20110416 2297 + modify configure script/source-code to only define _POSIX_SOURCE if 2298 the checks for sigaction and/or termios fail, and if _POSIX_C_SOURCE 2299 and _XOPEN_SOURCE are undefined (report by Valentin Ochs). 2300 + update config.guess, config.sub 2301 2302 20110409 2303 + fixes to build c++ binding with clang 3.0 (patch by Alexander 2304 Kolesen). 2305 + add check for unctrl.h in test/configure, to work around breakage in 2306 some ncurses packages. 2307 + add "--disable-widec" option to test/configure script. 2308 + add "--with-curses-colr" and "--with-curses-5lib" options to the 2309 test/configure script to address testing with very old machines. 2310 2311 20110404 5.9 release for upload to 2312 2313 20110402 2314 + various build-fixes for the rpm/dpkg scripts. 2315 + add "--enable-rpath-link" option to Ada95/configure, to allow 2316 packages to suppress the rpath feature which is normally used for 2317 the in-tree build of sample programs. 2318 + corrected definition of libdir variable in Ada95/src/Makefile.in, 2319 needed for rpm script. 2320 + add "--with-shared" option to Ada95/configure script, to allow 2321 making the C-language parts of the binding use appropriate compiler 2322 options if building a shared library with gnat. 2323 2324 20110329 2325 > portability fixes for Ada95 binding: 2326 + add configure check to ensure that SIGINT works with gnat. This is 2327 needed for the "rain" sample program. If SIGINT does not work, omit 2328 that sample program. 2329 + correct typo in check of $PKG_CONFIG variable in Ada95/configure 2330 + add ncurses_compat.c, to supply functions used in the Ada95 binding 2331 which were added in 5.7 and later. 2332 + modify sed expression in CF_NCURSES_ADDON to eliminate a dependency 2333 upon GNU sed. 2334 2335 20110326 2336 + add special check in Ada95/configure script for ncurses6 reentrant 2337 code. 2338 + regen Ada html documentation. 2339 + build-fix for Ada shared libraries versus the varargs workaround. 2340 + add rpm and dpkg scripts for Ada95 and test directories, for test 2341 builds. 2342 + update test/configure macros CF_CURSES_LIBS, CF_XOPEN_SOURCE and 2343 CF_X_ATHENA_LIBS. 2344 + add configure check to determine if gnat's project feature supports 2345 libraries, i.e., collections of .ali files. 2346 + make all dereferences in Ada95 samples explicit. 2347 + fix typo in comment in lib_add_wch.c (patch by Petr Pavlu). 2348 + add configure check for, ifdef's for math.h which is in a separate 2349 package on Solaris and potentially not installed (report by Petr 2350 Pavlu). 2351 > fixes for Ada95 binding (Nicolas Boulenguez): 2352 + improve type-checking in Ada95 by eliminating a few warning-suppress 2353 pragmas. 2354 + suppress unreferenced warnings. 2355 + make all dereferences in binding explicit. 2356 2357 20110319 2358 + regen Ada html documentation. 2359 + change order of -I options from ncurses*-config script when the 2360 --disable-overwrite option was used, so that the subdirectory include 2361 is listed first. 2362 + modify the make-tar.sh scripts to add a MANIFEST and NEWS file. 2363 + modify configure script to provide value for HTML_DIR in 2364 Ada95/gen/Makefile.in, which depends on whether the Ada95 binding is 2365 distributed separately (report by Nicolas Boulenguez). 2366 + modify configure script to add "-g" and/or "-O3" to ADAFLAGS if the 2367 CFLAGS for the build has these options. 2368 + amend change from 20070324, to not add 1 to the result of getmaxx 2369 and getmaxy in the Ada binding (report by Nicolas Boulenguez for 2370 thread in comp.lang.ada). 2371 + build-fix Ada95/samples for gnat 4.5 2372 + spelling fixes for Ada95/samples/explain.txt 2373 > fixes for Ada95 binding (Nicolas Boulenguez): 2374 + add item in Trace_Attribute_Set corresponding to TRACE_ATTRS. 2375 + add workaround for binding to set_field_type(), which uses varargs. 2376 The original binding from 990220 relied on the prevalent 2377 implementation of varargs which did not support or need va_copy(). 2378 + add dependency on gen/Makefile.in needed for *-panels.ads 2379 + add Library_Options to library.gpr 2380 + add Languages to library.gpr, for gprbuild 2381 2382 20110307 2383 + revert changes to limit-checks from 20110122 (Debian #616711). 2384 > minor type-cleanup of Ada95 binding (Nicolas Boulenguez): 2385 + corrected a minor sign error in a field of Low_Level_Field_Type, to 2386 conform to form.h. 2387 + replaced C_Int by Curses_Bool as return type for some callbacks, see 2388 fieldtype(3FORM). 2389 + modify samples/sample-explain.adb to provide explicit message when 2390 explain.txt is not found. 2391 2392 20110305 2393 + improve makefiles for Ada95 tree (patch by Nicolas Boulenguez). 2394 + fix an off-by-one error in _nc_slk_initialize() from 20100605 fixes 2395 for compiler warnings (report by Nicolas Boulenguez). 2396 + modify Ada95/gen/gen.c to declare unused bits in generated layouts, 2397 needed to compile when chtype is 64-bits using gnat 4.4.5 2398 2399 20110226 5.8 release for upload to 2400 2401 20110226 2402 + update release notes, for 5.8. 2403 + regenerated html manpages. 2404 + change open() in _nc_read_file_entry() to fopen() for consistency 2405 with write_file(). 2406 + modify misc/run_tic.in to create parent directory, in case this is 2407 a new install of hashed database. 2408 + fix typo in Ada95/mk-1st.awk which causes error with original awk. 2409 2410 20110220 2411 + configure script rpath fixes from xterm #269. 2412 + workaround for cygwin's non-functional features.h, to force ncurses' 2413 configure script to define _XOPEN_SOURCE_EXTENDED when building 2414 wide-character configuration. 2415 + build-fix in run_tic.sh for OS/2 EMX install 2416 + add cons25-debian entry (patch by Brian M Carlson, Debian #607662). 2417 2418 20110212 2419 + regenerated html manpages. 2420 + use _tracef() in show_where() function of tic, to work correctly with 2421 special case of trace configuration. 2422 2423 20110205 2424 + add xterm-utf8 entry as a demo of the U8 feature -TD 2425 + add U8 feature to denote entries for terminal emulators which do not 2426 support VT100 SI/SO when processing UTF-8 encoding -TD 2427 + improve the NCURSES_NO_UTF8_ACS feature by adding a check for an 2428 extended terminfo capability U8 (prompted by mailing list 2429 discussion). 2430 2431 20110122 2432 + start documenting interface changes for upcoming 5.8 release. 2433 + correct limit-checks in derwin(). 2434 + correct limit-checks in newwin(), to ensure that windows have nonzero 2435 size (report by Garrett Cooper). 2436 + fix a missing "weak" declaration for pthread_kill (patch by Nicholas 2437 Alcock). 2438 + improve documentation of KEY_ENTER in curs_getch.3x manpage (prompted 2439 by discussion with Kevin Martin). 2440 2441 20110115 2442 + modify Ada95/configure script to make the --with-curses-dir option 2443 work without requiring the --with-ncurses option. 2444 + modify test programs to allow them to be built with NetBSD curses. 2445 + document thick- and double-line symbols in curs_add_wch.3x manpage. 2446 + document WACS_xxx constants in curs_add_wch.3x manpage. 2447 + fix some warnings for clang 2.6 "--analyze" 2448 + modify Ada95 makefiles to make html-documentation with the project 2449 file configuration if that is used. 2450 + update config.guess, config.sub 2451 2452 20110108 2453 + regenerated html manpages. 2454 + minor fixes to enable lint when trace is not enabled, e.g., with 2455 clang --analyze. 2456 + fix typo in man/default_colors.3x (patch by Tim van der Molen). 2457 + update ncurses/llib-lncurses* 2458 2459 20110101 2460 + fix remaining strict compiler warnings in ncurses library ABI=5, 2461 except those dealing with function pointers, etc. 2462 2463 20101225 2464 + modify nc_tparm.h, adding guards against repeated inclusion, and 2465 allowing TPARM_ARG to be overridden. 2466 + fix some strict compiler warnings in ncurses library. 2467 2468 20101211 2469 + suppress ncv in screen entry, allowing underline (patch by Alejandro 2470 R Sedeno). 2471 + also suppress ncv in konsole-base -TD 2472 + fixes in wins_nwstr() and related functions to ensure that special 2473 characters, i.e., control characters are handled properly with the 2474 wide-character configuration. 2475 + correct a comparison in wins_nwstr() (Redhat #661506). 2476 + correct help-messages in some of the test-programs, which still 2477 referred to quitting with 'q'. 2478 2479 20101204 2480 + add special case to _nc_infotocap() to recognize the setaf/setab 2481 strings from xterm+256color and xterm+88color, and provide a reduced 2482 version which works with termcap. 2483 + remove obsolete emacs "Local Variables" section from documentation 2484 (request by Sven Joachim). 2485 + update doc/html/index.html to include NCURSES-Programming-HOWTO.html 2486 (report by Sven Joachim). 2487 2488 20101128 2489 + modify test/configure and test/Makefile.in to handle this special 2490 case of building within a build-tree (Debian #34182): 2491 mkdir -p build && cd build && ../test/configure && make 2492 2493 20101127 2494 + miscellaneous build-fixes for Ada95 and test-directories when built 2495 out-of-tree. 2496 + use VPATH in makefiles to simplify out-of-tree builds (Debian #34182). 2497 + fix typo in rmso for tek4106 entry -Goran Weinholt 2498 2499 20101120 2500 + improve checks in test/configure for X libraries, from xterm #267 2501 changes. 2502 + modify test/configure to allow it to use the build-tree's libraries 2503 e.g., when using that to configure the test-programs without the 2504 rpath feature (request by Sven Joachim). 2505 + repurpose "gnome" terminfo entries as "vte", retaining "gnome" items 2506 for compatibility, but generally deprecating those since the VTE 2507 library is what actually defines the behavior of "gnome", etc., 2508 since 2003 -TD 2509 2510 20101113 2511 + compiler warning fixes for test programs. 2512 + various build-fixes for test-programs with pdcurses. 2513 + updated configure checks for X packages in test/configure from xterm 2514 #267 changes. 2515 + add configure check to gnatmake, to accommodate cygwin. 2516 2517 20101106 2518 + correct list of sub-directories needed in Ada95 tree for building as 2519 a separate package. 2520 + modify scripts in test-directory to improve builds as a separate 2521 package. 2522 2523 20101023 2524 + correct parsing of relative tab-stops in tabs program (report by 2525 Philip Ganchev). 2526 + adjust configure script so that "t" is not added to library suffix 2527 when weak-symbols are used, allowing the pthread configuration to 2528 more closely match the non-thread naming (report by Werner Fink). 2529 + modify configure check for tic program, used for fallbacks, to a 2530 warning if not found. This makes it simpler to use additonal 2531 scripts to bootstrap the fallbacks code using tic from the build 2532 tree (report by Werner Fink). 2533 + fix several places in configure script using ${variable-value} form. 2534 + modify configure macro CF_LDFLAGS_STATIC to accommodate some loaders 2535 which do not support selectively linking against static libraries 2536 (report by John P. Hartmann) 2537 + fix an unescaped dash in man/tset.1 (report by Sven Joachim). 2538 2539 20101009 2540 + correct comparison used for setting 16-colors in linux-16color 2541 entry (Novell #644831) -TD 2542 + improve linux-16color entry, using "dim" for color-8 which makes it 2543 gray rather than black like color-0 -TD 2544 + drop misc/ncu-indent and misc/jpf-indent; they are provided by an 2545 external package "cindent". 2546 2547 20101002 2548 + improve linkages in html manpages, adding references to the newer 2549 pages, e.g., *_variables, curs_sp_funcs, curs_threads. 2550 + add checks in tic for inconsistent cursor-movement controls, and for 2551 inconsistent printer-controls. 2552 + fill in no-parameter forms of cursor-movement where a parameterized 2553 form is available -TD 2554 + fill in missing cursor controls where the form of the controls is 2555 ANSI -TD 2556 + fix inconsistent punctuation in form_variables manpage (patch by 2557 Sven Joachim). 2558 + add parameterized cursor-controls to linux-basic (report by Dae) -TD 2559 > patch by Juergen Pfeifer: 2560 + document how to build 32-bit libraries in README.MinGW 2561 + fixes to filename computation in mk-dlls.sh.in 2562 + use POSIX locale in mk-dlls.sh.in rather than en_US (report by Sven 2563 Joachim). 2564 + add a check in mk-dlls.sh.in to obtain the size of a pointer to 2565 distinguish between 32-bit and 64-bit hosts. The result is stored 2566 in mingw_arch 2567 2568 20100925 2569 + add "XT" capability to entries for terminals that support both 2570 xterm-style mouse- and title-controls, for "screen" which 2571 special-cases TERM beginning with "xterm" or "rxvt" -TD 2572 > patch by Juergen Pfeifer: 2573 + use 64-Bit MinGW toolchain (recommended package from TDM, see 2574 README.MinGW). 2575 + support pthreads when using the TDM MinGW toolchain 2576 2577 20100918 2578 + regenerated html manpages. 2579 + minor fixes for symlinks to curs_legacy.3x and curs_slk.3x manpages. 2580 + add manpage for sp-funcs. 2581 + add sp-funcs to test/listused.sh, for documentation aids. 2582 2583 20100911 2584 + add manpages for summarizing public variables of curses-, terminfo- 2585 and form-libraries. 2586 + minor fixes to manpages for consistency (patch by Jason McIntyre). 2587 + modify tic's -I/-C dump to reformat acsc strings into canonical form 2588 (sorted, unique mapping) (cf: 971004). 2589 + add configure check for pthread_kill(), needed for some old 2590 platforms. 2591 2592 20100904 2593 + add configure option --without-tests, to suppress building test 2594 programs (request by Frederic L W Meunier). 2595 2596 20100828 2597 + modify nsterm, xnuppc and tek4115 to make sgr/sgr0 consistent -TD 2598 + add check in terminfo source-reader to provide more informative 2599 message when someone attempts to run tic on a compiled terminal 2600 description (prompted by Debian #593920). 2601 + note in infotocap and captoinfo manpages that they read terminal 2602 descriptions from text-files (Debian #593920). 2603 + improve acsc string for vt52, show arrow keys (patch by Benjamin 2604 Sittler). 2605 2606 20100814 2607 + document in manpages that "mv" functions first use wmove() to check 2608 the window pointer and whether the position lies within the window 2609 (suggested by Poul-Henning Kamp). 2610 + fixes to curs_color.3x, curs_kernel.3x and wresize.3x manpages (patch 2611 by Tim van der Molen). 2612 + modify configure script to transform library names for tic- and 2613 tinfo-libraries so that those build properly with Mac OS X shared 2614 library configuration. 2615 + modify configure script to ensure that it removes conftest.dSYM 2616 directory leftover on checks with Mac OS X. 2617 + modify configure script to cleanup after check for symbolic links. 2618 2619 20100807 2620 + correct a typo in mk-1st.awk (patch by Gabriele Balducci) 2621 (cf: 20100724) 2622 + improve configure checks for location of tic and infocmp programs 2623 used for installing database and for generating fallback data, 2624 e.g., for cross-compiling. 2625 + add Markus Kuhn's wcwidth function for compiling MinGW 2626 + add special case to CF_REGEX for cross-compiling to MinGW target. 2627 2628 20100731 2629 + modify initialization check for win32con driver to eliminate need for 2630 special case for TERM "unknown", using terminal database if available 2631 (prompted by discussion with Roumen Petrov). 2632 + for MinGW port, ensure that terminal driver is setup if tgetent() 2633 is called (patch by Roumen Petrov). 2634 + document tabs "-0" and "-8" options in manpage. 2635 + fix Debian "lintian" issues with manpages reported in 2636 2637 2638 20100724 2639 + add a check in tic for missing set_tab if clear_all_tabs given. 2640 + improve use of symbolic links in makefiles by using "-f" option if 2641 it is supported, to eliminate temporary removal of the target 2642 (prompted by) 2643 + minor improvement to test/ncurses.c, reset color pairs in 'd' test 2644 after exit from 'm' main-menu command. 2645 + improved ncu-indent, from mawk changes, allows more than one of 2646 GCC_NORETURN, GCC_PRINTFLIKE and GCC_SCANFLIKE on a single line. 2647 2648 20100717 2649 + add hard-reset for rs2 to wsvt25 to help ensure that reset ends 2650 the alternate character set (patch by Nicholas Marriott) 2651 + remove tar-copy.sh and related configure/Makefile chunks, since the 2652 Ada95 binding is now installed using rules in Ada95/src. 2653 2654 20100703 2655 + continue integrating changes to use gnatmake project files in Ada95 2656 + add/use configure check to turn on project rules for Ada95/src. 2657 + revert the vfork change from 20100130, since it does not work. 2658 2659 20100626 2660 + continue integrating changes to use gnatmake project files in Ada95 2661 + old gnatmake (3.15) does not produce libraries using project-file; 2662 work around by adding script to generate alternate makefile. 2663 2664 20100619 2665 + continue integrating changes to use gnatmake project files in Ada95 2666 + add configure --with-ada-sharedlib option, for the test_make rule. 2667 + move Ada95-related logic into aclocal.m4, since additional checks 2668 will be needed to distinguish old/new implementations of gnat. 2669 2670 20100612 2671 + start integrating changes to use gnatmake project files in Ada95 tree 2672 + add test_make / test_clean / test_install rules in Ada95/src 2673 + change install-path for adainclude directory to /usr/share/ada (was 2674 /usr/lib/ada). 2675 + update Ada95/configure. 2676 + add mlterm+256color entry, for mlterm 3.0.0 -TD 2677 + modify test/configure to use macros to ensure consistent order 2678 of updating LIBS variable. 2679 2680 20100605 2681 + change search order of options for Solaris in CF_SHARED_OPTS, to 2682 work with 64-bit compiles. 2683 + correct quoting of assignment in CF_SHARED_OPTS case for aix 2684 (cf: 20081227) 2685 2686 20100529 2687 + regenerated html documentation. 2688 + modify test/configure to support pkg-config for checking X libraries 2689 used by PDCurses. 2690 + add/use configure macro CF_ADD_LIB to force consistency of 2691 assignments to $LIBS, etc. 2692 + fix configure script for combining --with-pthread 2693 and --enable-weak-symbols options. 2694 2695 20100522 2696 + correct cross-compiling configure check for CF_MKSTEMP macro, by 2697 adding a check cache variable set by AC_CHECK_FUNC (report by 2698 Pierre Labastie). 2699 + simplify include-dependencies of make_hash and make_keys, to reduce 2700 the need for setting BUILD_CPPFLAGS in cross-compiling when the 2701 build- and target-machines differ. 2702 + repair broken-linker configuration by restoring a definition of SP 2703 variable to curses.priv.h, and adjusting for cases where sp-funcs 2704 are used. 2705 + improve configure macro CF_AR_FLAGS, allowing ARFLAGS environment 2706 variable to override (prompted by report by Pablo Cazallas). 2707 2708 20100515 2709 + add configure option --enable-pthreads-eintr to control whether the 2710 new EINTR feature is enabled. 2711 + modify logic in pthread configuration to allow EINTR to interrupt 2712 a read operation in wgetch() (Novell #540571, patch by Werner Fink). 2713 + drop mkdirs.sh, use "mkdir -p". 2714 + add configure option --disable-libtool-version, to use the 2715 "-version-number" feature which was added in libtool 1.5 (report by 2716 Peter Haering). The default value for the option uses the newer 2717 feature, which makes libraries generated using libtool compatible 2718 with the standard builds of ncurses. 2719 + updated test/configure to match configure script macros. 2720 + fixes for configure script from lynx changes: 2721 + improve CF_FIND_LINKAGE logic for the case where a function is 2722 found in predefined libraries. 2723 + revert part of change to CF_HEADER (cf: 20100424) 2724 2725 20100501 2726 + correct limit-check in wredrawln, accounting for begy/begx values 2727 (patch by David Benjamin). 2728 + fix most compiler warnings from clang. 2729 + amend build-fix for OpenSolaris, to ensure that a system header is 2730 included in curses.h before testing feature symbols, since they 2731 may be defined by that route. 2732 2733 20100424 2734 + fix some strict compiler warnings in ncurses library. 2735 + modify configure macro CF_HEADER_PATH to not look for variations in 2736 the predefined include directories. 2737 + improve configure macros CF_GCC_VERSION and CF_GCC_WARNINGS to work 2738 with gcc 4.x's c89 alias, which gives warning messages for cases 2739 where older versions would produce an error. 2740 2741 20100417 2742 + modify _nc_capcmp() to work with cancelled strings. 2743 + correct translation of "^" in _nc_infotocap(), used to transform 2744 terminfo to termcap strings 2745 + add configure --disable-rpath-hack, to allow disabling the feature 2746 which adds rpath options for libraries in unusual places. 2747 + improve CF_RPATH_HACK_2 by checking if the rpath option for a given 2748 directory was already added. 2749 + improve CF_RPATH_HACK_2 by using ldd to provide a standard list of 2750 directories (which will be ignored). 2751 2752 20100410 2753 + improve win_driver.c handling of mouse: 2754 + discard motion events 2755 + avoid calling _nc_timed_wait when there is a mouse event 2756 + handle 4th and "rightmost" buttons. 2757 + quote substitutions in CF_RPATH_HACK_2 configure macro, needed for 2758 cases where there are embedded blanks in the rpath option. 2759 2760 20100403 2761 + add configure check for exctags vs ctags, to work around pkgsrc. 2762 + simplify logic in _nc_get_screensize() to make it easier to see how 2763 environment variables may override system- and terminfo-values 2764 (prompted by discussion with Igor Bujna). 2765 + make debug-traces for COLOR_PAIR and PAIR_NUMBER less verbose. 2766 + improve handling of color-pairs embedded in attributes for the 2767 extended-colors configuration. 2768 + modify MKlib_gen.sh to build link_test with sp-funcs. 2769 + build-fixes for OpenSolaris aka Solaris 11, for wide-character 2770 configuration as well as for rpath feature in *-config scripts. 2771 2772 20100327 2773 + refactor CF_SHARED_OPTS configure macro, making CF_RPATH_HACK more 2774 reusable. 2775 + improve configure CF_REGEX, similar fixes. 2776 + improve configure CF_FIND_LINKAGE, adding add check between system 2777 (default) and explicit paths, where we can find the entrypoint in the 2778 given library. 2779 + add check if Gpm_Open() returns a -2, e.g., for "xterm". This is 2780 normally suppressed but can be overridden using $NCURSES_GPM_TERMS. 2781 Ensure that Gpm_Close() is called in this case. 2782 2783 20100320 2784 + rename atari and st52 terminfo entries to atari-old, st52-old, use 2785 newer entries from FreeMiNT by Guido Flohr (from patch/report by Alan 2786 Hourihane). 2787 2788 20100313 2789 + modify install-rule for manpages so that *-config manpages will 2790 install when building with --srcdir (report by Sven Joachim). 2791 + modify CF_DISABLE_LEAKS configure macro so that the --enable-leaks 2792 option is not the same as --disable-leaks (GenToo #305889). 2793 + modify #define's for build-compiler to suppress cchar_t symbol from 2794 compile of make_hash and make_keys, improving cross-compilation of 2795 ncursesw (report by Bernhard Rosenkraenzer). 2796 + modify CF_MAN_PAGES configure macro to replace all occurrences of 2797 TPUT in tput.1's manpage (Debian #573597, report/analysis by Anders 2798 Kaseorg). 2799 2800 20100306 2801 + generate manpages for the *-config scripts, adapted from help2man 2802 (suggested by Sven Joachim). 2803 + use va_copy() in _nc_printf_string() to avoid conflicting use of 2804 va_list value in _nc_printf_length() (report by Wim Lewis). 2805 2806 20100227 2807 + add Ada95/configure script, to use in tar-file created by 2808 Ada95/make-tar.sh 2809 + fix typo in wresize.3x (patch by Tim van der Molen). 2810 + modify screen-bce.XXX entries to exclude ech, since screen's color 2811 model does not clear with color for that feature -TD 2812 2813 20100220 2814 + add make-tar.sh scripts to Ada95 and test subdirectories to help with 2815 making those separately distributable. 2816 + build-fix for static libraries without dlsym (Debian #556378). 2817 + fix a syntax error in man/form_field_opts.3x (patch by Ingo 2818 Schwarze). 2819 2820 20100213 2821 + add several screen-bce.XXX entries -TD 2822 2823 20100206 2824 + update mrxvt terminfo entry -TD 2825 + modify win_driver.c to support mouse single-clicks. 2826 + correct name for termlib in ncurses*-config, e.g., if it is renamed 2827 to provide a single file for ncurses/ncursesw libraries (patch by 2828 Miroslav Lichvar). 2829 2830 20100130 2831 + use vfork in test/ditto.c if available (request by Mike Frysinger). 2832 + miscellaneous cleanup of manpages. 2833 + fix typo in curs_bkgd.3x (patch by Tim van der Molen). 2834 + build-fix for --srcdir (patch by Miroslav Lichvar). 2835 2836 20100123 2837 + for term-driver configuration, ensure that the driver pointer is 2838 initialized in setupterm so that terminfo/termcap programs work. 2839 + amend fix for Debian #542031 to ensure that wattrset() returns only 2840 OK or ERR, rather than the attribute value (report by Miroslav 2841 Lichvar). 2842 + reorder WINDOWLIST to put WINDOW data after SCREEN pointer, making 2843 _nc_screen_of() compatible between normal/wide libraries again (patch 2844 by Miroslav Lichvar) 2845 + review/fix include-dependencies in modules files (report by Miroslav 2846 Lichvar). 2847 2848 20100116 2849 + modify win_driver.c to initialize acs_map for win32 console, so 2850 that line-drawing works. 2851 + modify win_driver.c to initialize TERMINAL struct so that programs 2852 such as test/lrtest.c and test/ncurses.c which test string 2853 capabilities can run. 2854 + modify term-driver modules to eliminate forward-reference 2855 declarations. 2856 2857 20100109 2858 + modify configure macro CF_XOPEN_SOURCE, etc., to use CF_ADD_CFLAGS 2859 consistently to add new -D's while removing duplicates. 2860 + modify a few configure macros to consistently put new options 2861 before older in the list. 2862 + add tiparm(), based on review of X/Open Curses Issue 7. 2863 + minor documentation cleanup. 2864 + update config.guess, config.sub from 2865 2866 (caveat - its maintainer put 2010 copyright date on files dated 2009) 2867 2868 20100102 2869 + minor improvement to tic's checking of similar SGR's to allow for the 2870 most common case of SGR 0. 2871 + modify getmouse() to act as its documentation implied, returning on 2872 each call the preceding event until none are left. When no more 2873 events remain, it will return ERR. 2874 2875 20091227 2876 + change order of lookup in progs/tput.c, looking for terminfo data 2877 first. This fixes a confusion between termcap "sg" and terminfo 2878 "sgr" or "sgr0", originally from 990123 changes, but exposed by 2879 20091114 fixes for hashing. With this change, only "dl" and "ed" are 2880 ambiguous (Mandriva #56272). 2881 2882 20091226 2883 + add bterm terminfo entry, based on bogl 0.1.18 -TD 2884 + minor fix to rxvt+pcfkeys terminfo entry -TD 2885 + build-fixes for Ada95 tree for gnat 4.4 "style". 2886 2887 20091219 2888 + remove old check in mvderwin() which prevented moving a derived 2889 window whose origin happened to coincide with its parent's origin 2890 (report by Katarina Machalkova). 2891 + improve test/ncurses.c to put mouse droppings in the proper window. 2892 + update minix terminfo entry -TD 2893 + add bw (auto-left-margin) to nsterm* entries (Benjamin Sittler) 2894 2895 20091212 2896 + correct transfer of multicolumn characters in multirow 2897 field_buffer(), which stopped at the end of the first row due to 2898 filling of unused entries in a cchar_t array with nulls. 2899 + updated nsterm* entries (Benjamin Sittler, Emanuele Giaquinta) 2900 + modify _nc_viscbuf2() and _tracecchar_t2() to show wide-character 2901 nulls. 2902 + use strdup() in set_menu_mark(), restore .marklen struct member on 2903 failure. 2904 + eliminate clause 3 from the UCB copyrights in read_termcap.c and 2905 tset.c per 2906 2907 (patch by Nicholas Marriott). 2908 + replace a malloc in tic.c with strdup, checking for failure (patch by 2909 Nicholas Marriott). 2910 + update config.guess, config.sub from 2911 2912 2913 20091205 2914 + correct layout of working window used to extract data in 2915 wide-character configured by set_field_buffer (patch by Rafael 2916 Garrido Fernandez) 2917 + improve some limit-checks related to filename length in reading and 2918 writing terminfo entries. 2919 + ensure that filename is always filled in when attempting to read 2920 a terminfo entry, so that infocmp can report the filename (patch 2921 by Nicholas Marriott). 2922 2923 20091128 2924 + modify mk-1st.awk to allow tinfo library to be built when term-driver 2925 is enabled. 2926 + add error-check to configure script to ensure that sp-funcs is 2927 enabled if term-driver is, since some internal interfaces rely upon 2928 this. 2929 2930 20091121 2931 + fix case where progs/tput is used while sp-funcs is configure; this 2932 requires save/restore of out-character function from _nc_prescreen 2933 rather than the SCREEN structure (report by Charles Wilson). 2934 + fix typo in man/curs_trace.3x which caused incorrect symbolic links 2935 + improved configure macros CF_GCC_ATTRIBUTES, CF_PROG_LINT. 2936 2937 20091114 2938 2939 + updated man/curs_trace.3x 2940 + limit hashing for termcap-names to 2-characters (Ubuntu #481740). 2941 + change a variable name in lib_newwin.c to make it clearer which 2942 value is being freed on error (patch by Nicholas Marriott). 2943 2944 20091107 2945 + improve test/ncurses.c color-cycling test by reusing attribute- 2946 and color-cycling logic from the video-attributes screen. 2947 + add ifdef'd with NCURSES_INTEROP_FUNCS experimental bindings in form 2948 library which help make it compatible with interop applications 2949 (patch by Juergen Pfeifer). 2950 + add configure option --enable-interop, for integrating changes 2951 for generic/interop support to form-library by Juergen Pfeifer 2952 2953 20091031 2954 + modify use of $CC environment variable which is defined by X/Open 2955 as a curses feature, to ignore it if it is not a single character 2956 (prompted by discussion with Benjamin C W Sittler). 2957 + add START_TRACE in slk_init 2958 + fix a regression in _nc_ripoffline which made test/ncurses.c not show 2959 soft-keys, broken in 20090927 merging. 2960 + change initialization of "hidden" flag for soft-keys from true to 2961 false, broken in 20090704 merging (Ubuntu #464274). 2962 + update nsterm entries (patch by Benjamin C W Sittler, prompted by 2963 discussion with Fabian Groffen in GenToo #206201). 2964 + add test/xterm-256color.dat 2965 2966 20091024 2967 + quiet some pedantic gcc warnings. 2968 + modify _nc_wgetch() to check for a -1 in the fifo, e.g., after a 2969 SIGWINCH, and discard that value, to avoid confusing application 2970 (patch by Eygene Ryabinkin, FreeBSD #136223). 2971 2972 20091017 2973 + modify handling of $PKG_CONFIG_LIBDIR to use only the first item in 2974 a possibly colon-separated list (Debian #550716). 2975 2976 20091010 2977 + supply a null-terminator to buffer in _nc_viswibuf(). 2978 + fix a sign-extension bug in unget_wch() (report by Mike Gran). 2979 + minor fixes to error-returns in default function for tputs, as well 2980 as in lib_screen.c 2981 2982 20091003 2983 + add WACS_xxx definitions to wide-character configuration for thick- 2984 and double-lines (discussion with Slava Zanko). 2985 + remove unnecessary kcan assignment to ^C from putty (Sven Joachim) 2986 + add ccc and initc capabilities to xterm-16color -TD 2987 > patch by Benjamin C W Sittler: 2988 + add linux-16color 2989 + correct initc capability of linux-c-nc end-of-range 2990 + similar change for dg+ccc and dgunix+ccc 2991 2992 20090927 2993 + move leak-checking for comp_captab.c into _nc_leaks_tinfo() since 2994 that module since 20090711 is in libtinfo. 2995 + add configure option --enable-term-driver, to allow compiling with 2996 terminal-driver. That is used in MinGW port, and (being somewhat 2997 more complicated) is an experimental alternative to the conventional 2998 termlib internals. Currently, it requires the sp-funcs feature to 2999 be enabled. 3000 + completed integrating "sp-funcs" by Juergen Pfeifer in ncurses 3001 library (some work remains for forms library). 3002 3003 20090919 3004 + document return code from define_key (report by Mike Gran). 3005 + make some symbolic links in the terminfo directory-tree shorter 3006 (patch by Daniel Jacobowitz, forwarded by Sven Joachim).). 3007 + fix some groff warnings in terminfo.5, etc., from recent Debian 3008 changes. 3009 + change ncv and op capabilities in sun-color terminfo entry to match 3010 Sun's entry for this (report by Laszlo Peter). 3011 + improve interix smso terminfo capability by using reverse rather than 3012 bold (report by Kristof Zelechovski). 3013 3014 20090912 3015 + add some test programs (and make these use the same special keys 3016 by sharing linedata.h functions): 3017 test/test_addstr.c 3018 test/test_addwstr.c 3019 test/test_addchstr.c 3020 test/test_add_wchstr.c 3021 + correct internal _nc_insert_ch() to use _nc_insert_wch() when 3022 inserting wide characters, since the wins_wch() function that it used 3023 did not update the cursor position (report by Ciprian Craciun). 3024 3025 20090906 3026 + fix typo s/is_timeout/is_notimeout/ which made "man is_notimeout" not 3027 work. 3028 + add null-pointer checks to other opaque-functions. 3029 + add is_pad() and is_subwin() functions for opaque access to WINDOW 3030 (discussion with Mark Dickinson). 3031 + correct merge to lib_newterm.c, which broke when sp-funcs was 3032 enabled. 3033 3034 20090905 3035 + build-fix for building outside source-tree (report by Sven Joachim). 3036 + fix Debian lintian warning for man/tabs.1 by making section number 3037 agree with file-suffix (report by Sven Joachim). 3038 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3039 3040 20090829 3041 + workaround for bug in g++ 4.1-4.4 warnings for wattrset() macro on 3042 amd64 (Debian #542031). 3043 + fix typo in curs_mouse.3x (Debian #429198). 3044 3045 20090822 3046 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3047 3048 20090815 3049 + correct use of terminfo capabilities for initializing soft-keys, 3050 broken in 20090510 merging. 3051 + modify wgetch() to ensure it checks SIGWINCH when it gets an error 3052 in non-blocking mode (patch by Clemens Ladisch). 3053 + use PATH_SEPARATOR symbol when substituting into run_tic.sh, to 3054 help with builds on non-Unix platforms such as OS/2 EMX. 3055 + modify scripting for misc/run_tic.sh to test configure script's 3056 $cross_compiling variable directly rather than comparing host/build 3057 compiler names (prompted by comment in GenToo #249363). 3058 + fix configure script option --with-database, which was coded as an 3059 enable-type switch. 3060 + build-fixes for --srcdir (report by Frederic L W Meunier). 3061 3062 20090808 3063 + separate _nc_find_entry() and _nc_find_type_entry() from 3064 implementation details of hash function. 3065 3066 20090803 3067 + add tabs.1 to man/man_db.renames 3068 + modify lib_addch.c to compensate for removal of wide-character test 3069 from unctrl() in 20090704 (Debian #539735). 3070 3071 20090801 3072 + improve discussion in INSTALL for use of system's tic/infocmp for 3073 cross-compiling and building fallbacks. 3074 + modify test/demo_termcap.c to correspond better to options in 3075 test/demo_terminfo.c 3076 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3077 + fix logic for 'V' in test/ncurses.c tests f/F. 3078 3079 20090728 3080 + correct logic in tigetnum(), which caused tput program to treat all 3081 string capabilities as numeric (report by Rajeev V Pillai, 3082 cf: 20090711). 3083 3084 20090725 3085 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3086 3087 20090718 3088 + fix a null-pointer check in _nc_format_slks() in lib_slk.c, from 3089 20090704 changes. 3090 + modify _nc_find_type_entry() to use hashing. 3091 + make CCHARW_MAX value configurable, noting that changing this would 3092 change the size of cchar_t, and would be ABI-incompatible. 3093 + modify test-programs, e.g,. test/view.c, to address subtle 3094 differences between Tru64/Solaris and HPUX/AIX getcchar() return 3095 values. 3096 + modify length returned by getcchar() to count the trailing null 3097 which is documented in X/Open (cf: 20020427). 3098 + fixes for test programs to build/work on HPUX and AIX, etc. 3099 3100 20090711 3101 + improve performance of tigetstr, etc., by using hashing code from tic. 3102 + minor fixes for memory-leak checking. 3103 + add test/demo_terminfo, for comparison with demo_termcap 3104 3105 20090704 3106 + remove wide-character checks from unctrl() (patch by Clemens Ladisch). 3107 + revise wadd_wch() and wecho_wchar() to eliminate dependency on 3108 unctrl(). 3109 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3110 3111 20090627 3112 + update llib-lncurses[wt] to use sp-funcs. 3113 + various code-fixes to build/work with --disable-macros configure 3114 option. 3115 + add several new files from Juergen Pfeifer which will be used when 3116 integration of "sp-funcs" is complete. This includes a port to 3117 MinGW. 3118 3119 20090613 3120 + move definition for NCURSES_WRAPPED_VAR back to ncurses_dll.h, to 3121 make includes of term.h without curses.h work (report by "Nix"). 3122 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3123 3124 20090607 3125 + fix a regression in lib_tputs.c, from ongoing merges. 3126 3127 20090606 3128 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3129 3130 20090530 3131 + fix an infinite recursion when adding a legacy-coding 8-bit value 3132 using insch() (report by Clemens Ladisch). 3133 + free home-terminfo string in del_curterm() (patch by Dan Weber). 3134 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3135 3136 20090523 3137 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3138 3139 20090516 3140 + work around antique BSD game's manipulation of stdscr, etc., versus 3141 SCREEN's copy of the pointer (Debian #528411). 3142 + add a cast to wattrset macro to avoid compiler warning when comparing 3143 its result against ERR (adapted from patch by Matt Kraii, Debian 3144 #528374). 3145 3146 20090510 3147 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3148 3149 20090502 3150 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3151 + add vwmterm terminfo entry (patch by Bryan Christ). 3152 3153 20090425 3154 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3155 3156 20090419 3157 + build fix for _nc_free_and_exit() change in 20090418 (report by 3158 Christian Ebert). 3159 3160 20090418 3161 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3162 3163 20090411 3164 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3165 This change finishes merging for menu and panel libraries, does 3166 part of the form library. 3167 3168 20090404 3169 + suppress configure check for static/dynamic linker flags for gcc on 3170 Darwin (report by Nelson Beebe). 3171 3172 20090328 3173 + extend ansi.sys pfkey capability from kf1-kf10 to kf1-kf48, moving 3174 function key definitions from emx-base for consistency -TD 3175 + correct missing final 'p' in pfkey capability of ansi.sys-old (report 3176 by Kalle Olavi Niemitalo). 3177 + improve test/ncurses.c 'F' test, show combining characters in color. 3178 + quiet a false report by cppcheck in c++/cursesw.cc by eliminating 3179 a temporary variable. 3180 + use _nc_doalloc() rather than realloc() in a few places in ncurses 3181 library to avoid leak in out-of-memory condition (reports by William 3182 Egert and Martin Ettl based on cppcheck tool). 3183 + add --with-ncurses-wrap-prefix option to test/configure (discussion 3184 with Charles Wilson). 3185 + use ncurses*-config scripts if available for test/configure. 3186 + update test/aclocal.m4 and test/configure 3187 > patches by Charles Wilson: 3188 + modify CF_WITH_LIBTOOL configure check to allow unreleased libtool 3189 version numbers (e.g. which include alphabetic chars, as well as 3190 digits, after the final '.'). 3191 + improve use of -no-undefined option for libtool by setting an 3192 intermediate variable LT_UNDEF in the configure script, and then 3193 using that in the libtool link-commands. 3194 + fix an missing use of NCURSES_PUBLIC_VAR() in tinfo/MKcodes.awk 3195 from 20090321 changes. 3196 + improve mk-1st.awk script by writing separate cases for the 3197 LIBTOOL_LINK command, depending on which library (ncurses, ticlib, 3198 termlib) is to be linked. 3199 + modify configure.in to allow broken-linker configurations, not just 3200 enable-reentrant, to set public wrap prefix. 3201 3202 20090321 3203 + add TICS_LIST and SHLIB_LIST to allow libtool 2.2.6 on Cygwin to 3204 build with tic and term libraries (patch by Charles Wilson). 3205 + add -no-undefined option to libtool for Cygwin, MinGW, U/Win and AIX 3206 (report by Charles Wilson). 3207 + fix definition for c++/Makefile.in's SHLIB_LIST, which did not list 3208 the form, menu or panel libraries (patch by Charles Wilson). 3209 + add configure option --with-wrap-prefix to allow setting the prefix 3210 for functions used to wrap global variables to something other than 3211 "_nc_" (discussion with Charles Wilson). 3212 3213 20090314 3214 + modify scripts to generate ncurses*-config and pc-files to add 3215 dependency for tinfo library (patch by Charles Wilson). 3216 + improve comparison of program-names when checking for linked flavors 3217 such as "reset" by ignoring the executable suffix (reports by Charles 3218 Wilson, Samuel Thibault and Cedric Bretaudeau on Cygwin mailing 3219 list). 3220 + suppress configure check for static/dynamic linker flags for gcc on 3221 Solaris 10, since gcc is confused by absence of static libc, and 3222 does not switch back to dynamic mode before finishing the libraries 3223 (reports by Joel Bertrand, Alan Pae). 3224 + minor fixes to Intel compiler warning checks in configure script. 3225 + modify _nc_leaks_tinfo() so leak-checking in test/railroad.c works. 3226 + modify set_curterm() to make broken-linker configuration work with 3227 changes from 20090228 (report by Charles Wilson). 3228 3229 20090228 3230 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3231 + modify declaration of cur_term when broken-linker is used, but 3232 enable-reentrant is not, to match pre-5.7 (report by Charles Wilson). 3233 3234 20090221 3235 + continue integrating "sp-funcs" by Juergen Pfeifer (incomplete). 3236 3237 20090214 3238 + add configure script --enable-sp-funcs to enable the new set of 3239 extended functions. 3240 + start integrating patches by Juergen Pfeifer: 3241 + add extended functions which specify the SCREEN pointer for several 3242 curses functions which use the global SP (these are incomplete; 3243 some internals work is needed to complete these). 3244 + add special cases to configure script for MinGW port. 3245 3246 20090207 3247 + update several configure macros from lynx changes 3248 + append (not prepend) to CFLAGS/CPPFLAGS 3249 + change variable from PATHSEP to PATH_SEPARATOR 3250 + improve install-rules for pc-files (patch by Miroslav Lichvar). 3251 + make it work with $DESTDIR 3252 + create the pkg-config library directory if needed. 3253 3254 20090124 3255 + modify init_pair() to allow caller to create extra color pairs beyond 3256 the color_pairs limit, which use default colors (request by Emanuele 3257 Giaquinta). 3258 + add misc/terminfo.tmp and misc/*.pc to "sources" rule. 3259 + fix typo "==" where "=" is needed in ncurses-config.in and 3260 gen-pkgconfig.in files (Debian #512161). 3261 3262 20090117 3263 + add -shared option to MK_SHARED_LIB when -Bsharable is used, for 3264 *BSD's, without which "main" might be one of the shared library's 3265 dependencies (report/analysis by Ken Dickey). 3266 + modify waddch_literal(), updating line-pointer after a multicolumn 3267 character is found to not fit on the current row, and wrapping is 3268 done. Since the line-pointer was not updated, the wrapped 3269 multicolumn character was written to the beginning of the current row 3270 (cf: 20041023, reported by "Nick" regarding problem with ncmpc 3271). 3272 3273 20090110 3274 + add screen.Eterm terminfo entry (GenToo #124887) -TD 3275 + modify adacurses-config to look for ".ali" files in the adalib 3276 directory. 3277 + correct install for Ada95, which omitted libAdaCurses.a used in 3278 adacurses-config 3279 + change install for adacurses-config to provide additional flavors 3280 such as adacursesw-config, for ncursesw (GenToo #167849). 3281 3282 20090105 3283 + remove undeveloped feature in ncurses-config.in for setting 3284 prefix variable. 3285 + recent change to ncurses-config.in did not take into account the 3286 --disable-overwrite option, which sets $includedir to the 3287 subdirectory and using just that for a -I option does not work - fix 3288 (report by Frederic L W Meunier). 3289 3290 20090104 3291 + modify gen-pkgconfig.in to eliminate a dependency on rpath when 3292 deciding whether to add $LIBS to --libs output; that should be shown 3293 for the ncurses and tinfo libraries without taking rpath into 3294 account. 3295 + fix an overlooked change from $AR_OPTS to $ARFLAGS in mk-1st.awk, 3296 used in static libraries (report by Marty Jack). 3297 3298 20090103 3299 + add a configure-time check to pick a suitable value for 3300 CC_SHARED_OPTS for Solaris (report by Dagobert Michelsen). 3301 + add configure --with-pkg-config and --enable-pc-files options, along 3302 with misc/gen-pkgconfig.in which can be used to generate ".pc" files 3303 for pkg-config (request by Jan Engelhardt). 3304 + use $includedir symbol in misc/ncurses-config.in, add --includedir 3305 option. 3306 + change makefiles to use $ARFLAGS rather than $AR_OPTS, provide a 3307 configure check to detect whether a "-" is needed before "ar" 3308 options. 3309 + update config.guess, config.sub from 3310 3311 3312 20081227 3313 + modify mk-1st.awk to work with extra categories for tinfo library. 3314 + modify configure script to allow building shared libraries with gcc 3315 on AIX 5 or 6 (adapted from patch by Lital Natan). 3316 3317 20081220 3318 + modify to omit the opaque-functions from lib_gen.o when 3319 --disable-ext-funcs is used. 3320 + add test/clip_printw.c to illustrate how to use printw without 3321 wrapping. 3322 + modify ncurses 'F' test to demo wborder_set() with colored lines. 3323 + modify ncurses 'f' test to demo wborder() with colored lines. 3324 3325 20081213 3326 + add check for failure to open hashed-database needed for db4.6 3327 (GenToo #245370). 3328 + corrected --without-manpages option; previous change only suppressed 3329 the auxiliary rules install.man and uninstall.man 3330 + add case for FreeMINT to configure macro CF_XOPEN_SOURCE (patch from 3331 GenToo #250454). 3332 + fixes from NetBSD port at 3333 3334 patch-ac (build-fix for DragonFly) 3335 patch-ae (use INSTALL_SCRIPT for installing misc/ncurses*-config). 3336 + improve configure script macros CF_HEADER_PATH and CF_LIBRARY_PATH 3337 by adding CFLAGS, CPPFLAGS and LDFLAGS, LIBS values to the 3338 search-lists. 3339 + correct title string for keybound manpage (patch by Frederic Culot, 3340 OpenBSD documentation/6019), 3341 3342 20081206 3343 + move del_curterm() call from _nc_freeall() to _nc_leaks_tinfo() to 3344 work for progs/clear, progs/tabs, etc. 3345 + correct buffer-size after internal resizing of wide-character 3346 set_field_buffer(), broken in 20081018 changes (report by Mike Gran). 3347 + add "-i" option to test/filter.c to tell it to use initscr() rather 3348 than newterm(), to investigate report on comp.unix.programmer that 3349 ncurses would clear the screen in that case (it does not - the issue 3350 was xterm's alternate screen feature). 3351 + add check in mouse-driver to disable connection if GPM returns a 3352 zero, indicating that the connection is closed (Debian #506717, 3353 adapted from patch by Samuel Thibault). 3354 3355 20081129 3356 + improve a workaround in adding wide-characters, when a control 3357 character is found. The library (cf: 20040207) uses unctrl() to 3358 obtain a printable version of the control character, but was not 3359 passing color or video attributes. 3360 + improve test/ncurses.c 'a' test, using unctrl() more consistently to 3361 display meta-characters. 3362 + turn on _XOPEN_CURSES definition in curses.h 3363 + add eterm-color entry (report by Vincent Lefevre) -TD 3364 + correct use of key_name() in test/ncurses.c 'A' test, which only 3365 displays wide-characters, not key-codes since 20070612 (report by 3366 Ricardo Cantu). 3367 3368 20081122 3369 + change _nc_has_mouse() to has_mouse(), reflect its use in C++ and 3370 Ada95 (patch by Juergen Pfeifer). 3371 + document in TO-DO an issue with Cygwin's package for GNAT (report 3372 by Mike Dennison). 3373 + improve error-checking of command-line options in "tabs" program. 3374 3375 20081115 3376 + change several terminfo entries to make consistent use of ANSI 3377 clear-all-tabs -TD 3378 + add "tabs" program (prompted by Debian #502260). 3379 + add configure --without-manpages option (request by Mike Frysinger). 3380 3381 20081102 5.7 release for upload to 3382 3383 20081025 3384 + add a manpage to discuss memory leaks. 3385 + add support for shared libraries for QNX (other than libtool, which 3386 does not work well on that platform). 3387 + build-fix for QNX C++ binding. 3388 3389 20081018 3390 + build-fixes for OS/2 EMX. 3391 + modify form library to accept control characters such as newline 3392 in set_field_buffer(), which is compatible with Solaris (report by 3393 Nit Khair). 3394 + modify configure script to assume --without-hashed-db when 3395 --disable-database is used. 3396 + add "-e" option in ncurses/Makefile.in when generating source-files 3397 to force earlier exit if the build environment fails unexpectedly 3398 (prompted by patch by Adrian Bunk). 3399 + change configure script to use CF_UTF8_LIB, improved variant of 3400 CF_LIBUTF8. 3401 3402 20081012 3403 + add teraterm4.59 terminfo entry, use that as primary teraterm entry, rename 3404 original to teraterm2.3 -TD 3405 + update "gnome" terminfo to 2.22.3 -TD 3406 + update "konsole" terminfo to 1.6.6, needs today's fix for tic -TD 3407 + add "aterm" terminfo -TD 3408 + add "linux2.6.26" terminfo -TD 3409 + add logic to tic for cancelling strings in user-defined capabilities, 3410 overlooked til now. 3411 3412 20081011 3413 + regenerated html documentation. 3414 + add -m and -s options to test/keynames.c and test/key_names.c to test 3415 the meta() function with keyname() or key_name(), respectively. 3416 + correct return value of key_name() on error; it is null. 3417 + document some unresolved issues for rpath and pthreads in TO-DO. 3418 + fix a missing prototype for ioctl() on OpenBSD in tset.c 3419 + add configure option --disable-tic-depends to make explicit whether 3420 tic library depends on ncurses/ncursesw library, amends change from 3421 20080823 (prompted by Debian #501421). 3422 3423 20081004 3424 + some build-fixes for configure --disable-ext-funcs (incomplete, but 3425 works for C/C++ parts). 3426 + improve configure-check for awks unable to handle large strings, e.g. 3427 AIX 5.1 whose awk silently gives up on large printf's. 3428 3429 20080927 3430 + fix build for --with-dmalloc by workaround for redefinition of 3431 strndup between string.h and dmalloc.h 3432 + fix build for --disable-sigwinch 3433 + add environment variable NCURSES_GPM_TERMS to allow override to use 3434 GPM on terminals other than "linux", etc. 3435 + disable GPM mouse support when $TERM does not happen to contain 3436 "linux", since Gpm_Open() no longer limits its assertion to terminals 3437 that it might handle, e.g., within "screen" in xterm. 3438 + reset mouse file-descriptor when unloading GPM library (report by 3439 Miroslav Lichvar). 3440 + fix build for --disable-leaks --enable-widec --with-termlib 3441 > patch by Juergen Pfeifer: 3442 + use improved initialization for soft-label keys in Ada95 sample code. 3443 + discard internal symbol _nc_slk_format (unused since 20080112). 3444 + move call of slk_paint_info() from _nc_slk_initialize() to 3445 slk_intern_refresh(), improving initialization. 3446 3447 20080925 3448 + fix bug in mouse code for GPM from 20080920 changes (reported in 3449 Debian #500103, also Miroslav Lichvar). 3450 3451 20080920 3452 + fix shared-library rules for cygwin with tic- and tinfo-libraries. 3453 + fix a memory leak when failure to connect to GPM. 3454 + correct check for notimeout() in wgetch() (report on linux.redhat 3455 newsgroup by FurtiveBertie). 3456 + add an example warning-suppression file for valgrind, 3457 misc/ncurses.supp (based on example from Reuben Thomas) 3458 3459 20080913 3460 + change shared-library configuration for OpenBSD, make rpath work. 3461 + build-fixes for using libutf8, e.g., on OpenBSD 3.7 3462 3463 20080907 3464 + corrected fix for --enable-weak-symbols (report by Frederic L W 3465 Meunier). 3466 3467 20080906 3468 + corrected gcc options for building shared libraries on IRIX64. 3469 + add configure check for awk programs unable to handle big-strings, 3470 use that to improve the default for --enable-big-strings option. 3471 + makefile-fixes for --enable-weak-symbols (report by Frederic L W 3472 Meunier). 3473 + update test/configure script. 3474 + adapt ifdef's from library to make test/view.c build when mbrtowc() 3475 is unavailable, e.g., with HPUX 10.20. 3476 + add configure check for wcsrtombs, mbsrtowcs, which are used in 3477 test/ncurses.c, and use wcstombs, mbstowcs instead if available, 3478 fixing build of ncursew for HPUX 11.00 3479 3480 20080830 3481 + fixes to make Ada95 demo_panels() example work. 3482 + modify Ada95 'rain' test program to accept keyboard commands like the 3483 C-version. 3484 + modify BeOS-specific ifdef's to build on Haiku (patch by Scott 3485 Mccreary). 3486 + add configure-check to see if the std namespace is legal for cerr 3487 and endl, to fix a build issue with Tru64. 3488 + consistently use NCURSES_BOOL in lib_gen.c 3489 + filter #line's from lib_gen.c 3490 + change delimiter in MKlib_gen.sh from '%' to '@', to avoid 3491 substitution by IBM xlc to '#' as part of its extensions to digraphs. 3492 + update config.guess, config.sub from 3493 3494 (caveat - its maintainer removed support for older Linux systems). 3495 3496 20080823 3497 + modify configure check for pthread library to work with OSF/1 5.1, 3498 which uses #define's to associate its header and library. 3499 + use pthread_mutexattr_init() for initializing pthread_mutexattr_t, 3500 makes threaded code work on HPUX 11.23 3501 + fix a bug in demo_menus in freeing menus (cf: 20080804). 3502 + modify configure script for the case where tic library is used (and 3503 possibly renamed) to remove its dependency upon ncurses/ncursew 3504 library (patch by Dr Werner Fink). 3505 + correct manpage for menu_fore() which gave wrong default for 3506 the attribute used to display a selected entry (report by Mike Gran). 3507 + add Eterm-256color, Eterm-88color and rxvt-88color (prompted by 3508 Debian #495815) -TD 3509 3510 20080816 3511 + add configure option --enable-weak-symbols to turn on new feature. 3512 + add configure-check for availability of weak symbols. 3513 + modify linkage with pthread library to use weak symbols so that 3514 applications not linked to that library will not use the mutexes, 3515 etc. This relies on gcc, and may be platform-specific (patch by Dr 3516 Werner Fink). 3517 + add note to INSTALL to document limitation of renaming of tic library 3518 using the --with-ticlib configure option (report by Dr Werner Fink). 3519 + document (in manpage) why tputs does not detect I/O errors (prompted 3520 by comments by Samuel Thibault). 3521 + fix remaining warnings from Klocwork report. 3522 3523 20080804 3524 + modify _nc_panelhook() data to account for a permanent memory leak. 3525 + fix memory leaks in test/demo_menus 3526 + fix most warnings from Klocwork tool (report by Larry Zhou). 3527 + modify configure script CF_XOPEN_SOURCE macro to add case for 3528 "dragonfly" from xterm #236 changes. 3529 + modify configure script --with-hashed-db to let $LIBS override the 3530 search for the db library (prompted by report by Samson Pierre). 3531 3532 20080726 3533 + build-fixes for gcc 4.3.1 (changes to gnat "warnings", and C inlining 3534 thresholds). 3535 3536 20080713 3537 + build-fix (reports by Christian Ebert, Funda Wang). 3538 3539 20080712 3540 + compiler-warning fixes for Solaris. 3541 3542 20080705 3543 + use NCURSES_MOUSE_MASK() in definition of BUTTON_RELEASE(), etc., to 3544 make those work properly with the "--enable-ext-mouse" configuration 3545 (cf: 20050205). 3546 + improve documentation of build-cc options in INSTALL. 3547 + work-around a bug in gcc 4.2.4 on AIX, which does not pass the 3548 -static/-dynamic flags properly to linker, causing test/bs to 3549 not link. 3550 3551 20080628 3552 + correct some ifdef's needed for the broken-linker configuration. 3553 + make debugging library's $BAUDRATE feature work for termcap 3554 interface. 3555 + make $NCURSES_NO_PADDING feature work for termcap interface (prompted 3556 by comment on FreeBSD mailing list). 3557 + add screen.mlterm terminfo entry -TD 3558 + improve mlterm and mlterm+pcfkeys terminfo entries -TD 3559 3560 20080621 3561 + regenerated html documentation. 3562 + expand manpage description of parameters for form_driver() and 3563 menu_driver() (prompted by discussion with Adam Spragg). 3564 + add null-pointer checks for cur_term in baudrate() and 3565 def_shell_mode(), def_prog_mode() 3566 + fix some memory leaks in delscreen() and wide acs. 3567 3568 20080614 3569 + modify test/ditto.c to illustrate multi-threaded use_screen(). 3570 + change CC_SHARED_OPTS from -KPIC to -xcode=pic32 for Solaris. 3571 + add "-shared" option to MK_SHARED_LIB for gcc on Solaris (report 3572 by Poor Yorick). 3573 3574 20080607 3575 + finish changes to wgetch(), making it switch as needed to the 3576 window's actual screen when calling wrefresh() and wgetnstr(). That 3577 allows wgetch() to get used concurrently in different threads with 3578 some minor restrictions, e.g., the application should not delete a 3579 window which is being used in a wgetch(). 3580 + simplify mutex's, combining the window- and screen-mutex's. 3581 3582 20080531 3583 + modify wgetch() to use the screen which corresponds to its window 3584 parameter rather than relying on SP; some dependent functions still 3585 use SP internally. 3586 + factor out most use of SP in lib_mouse.c, using parameter. 3587 + add internal _nc_keyname(), replacing keyname() to associate with a 3588 particular SCREEN rather than the global SP. 3589 + add internal _nc_unctrl(), replacing unctrl() to associate with a 3590 particular SCREEN rather than the global SP. 3591 + add internal _nc_tracemouse(), replacing _tracemouse() to eliminate 3592 its associated global buffer _nc_globals.tracemse_buf now in SCREEN. 3593 + add internal _nc_tracechar(), replacing _tracechar() to use SCREEN in 3594 preference to the global _nc_globals.tracechr_buf buffer. 3595 3596 20080524 3597 + modify _nc_keypad() to make it switch temporarily as needed to the 3598 screen which must be updated. 3599 + wrap cur_term variable to help make _nc_keymap() thread-safe, and 3600 always set the screen's copy of this variable in set_curterm(). 3601 + restore curs_set() state after endwin()/refresh() (report/patch 3602 Miroslav Lichvar) 3603 3604 20080517 3605 + modify configure script to note that --enable-ext-colors and 3606 --enable-ext-mouse are not experimental, but extensions from 3607 the ncurses ABI 5. 3608 + corrected manpage description of setcchar() (discussion with 3609 Emanuele Giaquinta). 3610 + fix for adding a non-spacing character at the beginning of a line 3611 (report/patch by Miroslav Lichvar). 3612 3613 20080503 3614 + modify screen.* terminfo entries using new screen+fkeys to fix 3615 overridden keys in screen.rxvt (Debian #478094) -TD 3616 + modify internal interfaces to reduce wgetch()'s dependency on the 3617 global SP. 3618 + simplify some loops with macros each_screen(), each_window() and 3619 each_ripoff(). 3620 3621 20080426 3622 + continue modifying test/ditto.c toward making it demonstrate 3623 multithreaded use_screen(), using fifos to pass data between screens. 3624 + fix typo in form.3x (report by Mike Gran). 3625 3626 20080419 3627 + add screen.rxvt terminfo entry -TD 3628 + modify tic -f option to format spaces as \s to prevent them from 3629 being lost when that is read back in unformatted strings. 3630 + improve test/ditto.c, using a "talk"-style layout. 3631 3632 20080412 3633 + change test/ditto.c to use openpty() and xterm. 3634 + add locks for copywin(), dupwin(), overlap(), overlay() on their 3635 window parameters. 3636 + add locks for initscr() and newterm() on updates to the SCREEN 3637 pointer. 3638 + finish table in curs_thread.3x manpage. 3639 3640 20080405 3641 + begin table in curs_thread.3x manpage describing the scope of data 3642 used by each function (or symbol) for threading analysis. 3643 + add null-pointer checks to setsyx() and getsyx() (prompted by 3644 discussion by Martin v. Lowis and Jeroen Ruigrok van der Werven on 3645 python-dev2 mailing list). 3646 3647 20080329 3648 + add null-pointer checks in set_term() and delscreen(). 3649 + move _nc_windows into _nc_globals, since windows can be pads, which 3650 are not associated with a particular screen. 3651 + change use_screen() to pass the SCREEN* parameter rather than 3652 stdscr to the callback function. 3653 + force libtool to use tag for 'CC' in case it does not detect this, 3654 e.g., on aix when using CC=powerpc-ibm-aix5.3.0.0-gcc 3655 (report/patch by Michael Haubenwallner). 3656 + override OBJEXT to "lo" when building with libtool, to work on 3657 platforms such as AIX where libtool may use a different suffix for 3658 the object files than ".o" (report/patch by Michael Haubenwallner). 3659 + add configure --with-pthread option, for building with the POSIX 3660 thread library. 3661 3662 20080322 3663 + fill in extended-color pair two more places in wbkgrndset() and 3664 waddch_nosync() (prompted by Sedeno's patch). 3665 + fill in extended-color pair in _nc_build_wch() to make colors work 3666 for wide-characters using extended-colors (patch by Alejandro R 3667 Sedeno). 3668 + add x/X toggles to ncurses.c C color test to test/demo 3669 wide-characters with extended-colors. 3670 + add a/A toggles to ncurses.c c/C color tests. 3671 + modify test/ditto.c to use use_screen(). 3672 + finish modifying test/rain.c to demonstrate threads. 3673 3674 20080308 3675 + start modifying test/rain.c for threading demo. 3676 + modify test/ncurses.c to make 'f' test accept the f/F/b/F/</> toggles 3677 that the 'F' accepts. 3678 + modify test/worm.c to show trail in reverse-video when other threads 3679 are working concurrently. 3680 + fix a deadlock from improper nesting of mutexes for windowlist and 3681 window. 3682 3683 20080301 3684 + fixes from 20080223 resolved issue with mutexes; change to use 3685 recursive mutexes to fix memory leak in delwin() as called from 3686 _nc_free_and_exit(). 3687 3688 20080223 3689 + fix a size-difference in _nc_globals which caused hanging of mutex 3690 lock/unlock when termlib was built separately. 3691 3692 20080216 3693 + avoid using nanosleep() in threaded configuration since that often 3694 is implemented to suspend the entire process. 3695 3696 20080209 3697 + update test programs to build/work with various UNIX curses for 3698 comparisons. This was to reinvestigate statement in X/Open curses 3699 that insnstr and winsnstr perform wrapping. None of the Unix-branded 3700 implementations do this, as noted in manpage (cf: 20040228). 3701 3702 20080203 3703 + modify _nc_setupscreen() to set the legacy-coding value the same 3704 for both narrow/wide models. It had been set only for wide model, 3705 but is needed to make unctrl() work with locale in the narrow model. 3706 + improve waddch() and winsch() handling of EILSEQ from mbrtowc() by 3707 using unctrl() to display illegal bytes rather than trying to append 3708 further bytes to make up a valid sequence (reported by Andrey A 3709 Chernov). 3710 + modify unctrl() to check codes in 128-255 range versus isprint(). 3711 If they are not printable, and locale was set, use a "M-" or "~" 3712 sequence. 3713 3714 20080126 3715 + improve threading in test/worm.c (wrap refresh calls, and KEY_RESIZE 3716 handling). Now it hangs in napms(), no matter whether nanosleep() 3717 or poll() or select() are used on Linux. 3718 3719 20080119 3720 + fixes to build with --disable-ext-funcs 3721 + add manpage for use_window and use_screen. 3722 + add set_tabsize() and set_escdelay() functions. 3723 3724 20080112 3725 + remove recursive-mutex definitions, finish threading demo for worm.c 3726 + remove a redundant adjustment of lines in resizeterm.c's 3727 adjust_window() which caused occasional misadjustment of stdscr when 3728 softkeys were used. 3729 3730 20080105 3731 + several improvements to terminfo entries based on xterm #230 -TD 3732 + modify MKlib_gen.sh to handle keyname/key_name prototypes, so the 3733 "link_test" builds properly. 3734 + fix for toe command-line options -u/-U to ensure filename is given. 3735 + fix allocation-size for command-line parsing in infocmp from 20070728 3736 (report by Miroslav Lichvar) 3737 + improve resizeterm() by moving ripped-off lines, and repainting the 3738 soft-keys (report by Katarina Machalkova) 3739 + add clarification in wclear's manpage noting that the screen will be 3740 cleared even if a subwindow is cleared (prompted by Christer Enfors 3741 question). 3742 + change test/ncurses.c soft-key tests to work with KEY_RESIZE. 3743 3744 20071222 3745 + continue implementing support for threading demo by adding mutex 3746 for delwin(). 3747 3748 20071215 3749 + add several functions to C++ binding which wrap C functions that 3750 pass a WINDOW* parameter (request by Chris Lee). 3751 3752 20071201 3753 + add note about configure options needed for Berkeley database to the 3754 INSTALL file. 3755 + improve checks for version of Berkeley database libraries. 3756 + amend fix for rpath to not modify LDFLAGS if the platform has no 3757 applicable transformation (report by Christian Ebert, cf: 20071124). 3758 3759 20071124 3760 + modify configure option --with-hashed-db to accept a parameter which 3761 is the install-prefix of a given Berkeley Database (prompted by 3762 pierre4d2 comments). 3763 + rewrite wrapper for wcrtomb(), making it work on Solaris. This is 3764 used in the form library to determine the length of the buffer needed 3765 by field_buffer (report by Alfred Fung). 3766 + remove unneeded window-parameter from C++ binding for wresize (report 3767 by Chris Lee). 3768 3769 20071117 3770 + modify the support for filesystems which do not support mixed-case to 3771 generate 2-character (hexadecimal) codes for the lower-level of the 3772 filesystem terminfo database (request by Michail Vidiassov). 3773 + add configure option --enable-mixed-case, to allow overriding the 3774 configure script's check if the filesystem supports mixed-case 3775 filenames. 3776 + add wresize() to C++ binding (request by Chris Lee). 3777 + define NCURSES_EXT_FUNCS and NCURSES_EXT_COLORS in curses.h to make 3778 it simpler to tell if the extended functions and/or colors are 3779 declared. 3780 3781 20071103 3782 + update memory-leak checks for changes to names.c and codes.c 3783 + correct acsc strings in h19, z100 (patch by Benjamin C W Sittler). 3784 3785 20071020 3786 + continue implementing support for threading demo by adding mutex 3787 for use_window(). 3788 + add mrxvt terminfo entry, add/fix xterm building blocks for modified 3789 cursor keys -TD 3790 + compile with FreeBSD "contemporary" TTY interface (patch by 3791 Rong-En Fan). 3792 3793 20071013 3794 + modify makefile rules to allow clear, tput and tset to be built 3795 without libtic. The other programs (infocmp, tic and toe) rely on 3796 that library. 3797 + add/modify null-pointer checks in several functions for SP and/or 3798 the WINDOW* parameter (report by Thorben Krueger). 3799 + fixes for field_buffer() in formw library (see Redhat #310071, 3800 patches by Miroslav Lichvar). 3801 + improve performance of NCURSES_CHAR_EQ code (patch by Miroslav 3802 Lichvar). 3803 + update/improve mlterm and rxvt terminfo entries, e.g., for 3804 the modified cursor- and keypad-keys -TD 3805 3806 20071006 3807 + add code to curses.priv.h ifdef'd with NCURSES_CHAR_EQ, which 3808 changes the CharEq() macro to an inline function to allow comparing 3809 cchar_t struct's without comparing gaps in a possibly unpacked 3810 memory layout (report by Miroslav Lichvar). 3811 3812 20070929 3813 + add new functions to lib_trace.c to setup mutex's for the _tracef() 3814 calls within the ncurses library. 3815 + for the reentrant model, move _nc_tputs_trace and _nc_outchars into 3816 the SCREEN. 3817 + start modifying test/worm.c to provide threading demo (incomplete). 3818 + separated ifdef's for some BSD-related symbols in tset.c, to make 3819 it compile on LynxOS (report by Greg Gemmer). 3820 20070915 3821 + modify Ada95/gen/Makefile to use shlib script, to simplify building 3822 shared-library configuration on platforms lacking rpath support. 3823 + build-fix for Ada95/src/Makefile to reflect changed dependency for 3824 the terminal-interface-curses-aux.adb file which is now generated. 3825 + restructuring test/worm.c, for use_window() example. 3826 3827 20070908 3828 + add use_window() and use_screen() functions, to develop into support 3829 for threaded library (incomplete). 3830 + fix typos in man/curs_opaque.3x which kept the install script from 3831 creating symbolic links to two aliases created in 20070818 (report by 3832 Rong-En Fan). 3833 3834 20070901 3835 + remove a spurious newline from output of html.m4, which caused links 3836 for Ada95 html to be incorrect for the files generated using m4. 3837 + start investigating mutex's for SCREEN manipulation (incomplete). 3838 + minor cleanup of codes.c/names.c for --enable-const 3839 + expand/revise "Routine and Argument Names" section of ncurses manpage 3840 to address report by David Givens in newsgroup discussion. 3841 + fix interaction between --without-progs/--with-termcap configure 3842 options (report by Michail Vidiassov). 3843 + fix typo in "--disable-relink" option (report by Michail Vidiassov). 3844 3845 20070825 3846 + fix a sign-extension bug in infocmp's repair_acsc() function 3847 (cf: 971004). 3848 + fix old configure script bug which prevented "--disable-warnings" 3849 option from working (patch by Mike Frysinger). 3850 3851 20070818 3852 + add 9term terminal description (request by Juhapekka Tolvanen) -TD 3853 + modify comp_hash.c's string output to avoid misinterpreting a null 3854 "\0" followed by a digit. 3855 + modify MKnames.awk and MKcodes.awk to support big-strings. 3856 This only applies to the cases (broken linker, reentrant) where 3857 the corresponding arrays are accessed via wrapper functions. 3858 + split MKnames.awk into two scripts, eliminating the shell redirection 3859 which complicated the make process and also the bogus timestamp file 3860 which was introduced to fix "make -j". 3861 + add test/test_opaque.c, test/test_arrays.c 3862 + add wgetscrreg() and wgetparent() for applications that may need it 3863 when NCURSES_OPAQUE is defined (prompted by Bryan Christ). 3864 3865 20070812 3866 + amend treatment of infocmp "-r" option to retain the 1023-byte limit 3867 unless "-T" is given (cf: 981017). 3868 + modify comp_captab.c generation to use big-strings. 3869 + make _nc_capalias_table and _nc_infoalias_table private accessed via 3870 _nc_get_alias_table() since the tables are used only within the tic 3871 library. 3872 + modify configure script to skip Intel compiler in CF_C_INLINE. 3873 + make _nc_info_hash_table and _nc_cap_hash_table private accessed via 3874 _nc_get_hash_table() since the tables are used only within the tic 3875 library. 3876 3877 20070728 3878 + make _nc_capalias_table and _nc_infoalias_table private, accessed via 3879 _nc_get_alias_table() since they are used only by parse_entry.c 3880 + make _nc_key_names private since it is used only by lib_keyname.c 3881 + add --disable-big-strings configure option to control whether 3882 unctrl.c is generated using the big-string optimization - which may 3883 use strings longer than supported by a given compiler. 3884 + reduce relocation tables for tic, infocmp by changing type of 3885 internal hash tables to short, and make those private symbols. 3886 + eliminate large fixed arrays from progs/infocmp.c 3887 3888 20070721 3889 + change winnstr() to stop at the end of the line (cf: 970315). 3890 + add test/test_get_wstr.c 3891 + add test/test_getstr.c 3892 + add test/test_inwstr.c 3893 + add test/test_instr.c 3894 3895 20070716 3896 + restore a call to obtain screen-size in _nc_setupterm(), which 3897 is used in tput and other non-screen applications via setupterm() 3898 (Debian #433357, reported by Florent Bayle, Christian Ohm, 3899 cf: 20070310). 3900 3901 20070714 3902 + add test/savescreen.c test-program 3903 + add check to trace-file open, if the given name is a directory, add 3904 ".log" to the name and try again. 3905 + add konsole-256color entry -TD 3906 + add extra gcc warning options from xterm. 3907 + minor fixes for ncurses/hashmap test-program. 3908 + modify configure script to quiet c++ build with libtool when the 3909 --disable-echo option is used. 3910 + modify configure script to disable ada95 if libtool is selected, 3911 writing a warning message (addresses FreeBSD #114493). 3912 + update config.guess, config.sub 3913 3914 20070707 3915 + add continuous-move "M" to demo_panels to help test refresh changes. 3916 + improve fix for refresh of window on top of multi-column characters, 3917 taking into account some split characters on left/right window 3918 boundaries. 3919 3920 20070630 3921 + add "widec" row to _tracedump() output to help diagnose remaining 3922 problems with multi-column characters. 3923 + partial fix for refresh of window on top of multi-column characters 3924 which are partly overwritten (report by Sadrul H Chowdhury). 3925 + ignore A_CHARTEXT bits in vidattr() and vid_attr(), in case 3926 multi-column extension bits are passed there. 3927 + add setlocale() call to demo_panels.c, needed for wide-characters. 3928 + add some output flags to _nc_trace_ttymode to help diagnose a bug 3929 report by Larry Virden, i.e., ONLCR, OCRNL, ONOCR and ONLRET, 3930 3931 20070623 3932 + add test/demo_panels.c 3933 + implement opaque version of setsyx() and getsyx(). 3934 3935 20070612 3936 + corrected xterm+pcf2 terminfo modifiers for F1-F4, to match xterm 3937 #226 -TD 3938 + split-out key_name() from MKkeyname.awk since it now depends upon 3939 wunctrl() which is not in libtinfo (report by Rong-En Fan). 3940 3941 20070609 3942 + add test/key_name.c 3943 + add stdscr cases to test/inchs.c and test/inch_wide.c 3944 + update test/configure 3945 + correct formatting of DEL (0x7f) in _nc_vischar(). 3946 + null-terminate result of wunctrl(). 3947 + add null-pointer check in key_name() (report by Andreas Krennmair, 3948 cf: 20020901). 3949 3950 20070602 3951 + adapt mouse-handling code from menu library in form-library 3952 (discussion with Clive Nicolson). 3953 + add a modification of test/dots.c, i.e., test/dots_mvcur.c to 3954 illustrate how to use mvcur(). 3955 + modify wide-character flavor of SetAttr() to preserve the 3956 WidecExt() value stored in the .attr field, e.g., in case it 3957 is overwritten by chgat (report by Aleksi Torhamo). 3958 + correct buffer-size for _nc_viswbuf2n() (report by Aleksi Torhamo). 3959 + build-fixes for Solaris 2.6 and 2.7 (patch by Peter O'Gorman). 3960 3961 20070526 3962 + modify keyname() to use "^X" form only if meta() has been called, or 3963 if keyname() is called without initializing curses, e.g., via 3964 initscr() or newterm() (prompted by LinuxBase #1604). 3965 + document some portability issues in man/curs_util.3x 3966 + add a shadow copy of TTY buffer to _nc_prescreen to fix applications 3967 broken by moving that data into SCREEN (cf: 20061230). 3968 3969 20070512 3970 + add 'O' (wide-character panel test) in ncurses.c to demonstrate a 3971 problem reported by Sadrul H Chowdhury with repainting parts of 3972 a fullwidth cell. 3973 + modify slk_init() so that if there are preceding calls to 3974 ripoffline(), those affect the available lines for soft-keys (adapted 3975 from patch by Clive Nicolson). 3976 + document some portability issues in man/curs_getyx.3x 3977 3978 20070505 3979 + fix a bug in Ada95/samples/ncurses which caused a variable to 3980 become uninitialized in the "b" test. 3981 + fix Ada95/gen/Makefile.in adahtml rule to account for recent 3982 movement of files, fix a few incorrect manpage references in the 3983 generated html. 3984 + add Ada95 binding to _nc_freeall() as Curses_Free_All to help with 3985 memory-checking. 3986 + correct some functions in Ada95 binding which were using return value 3987 from C where none was returned: idcok(), immedok() and wtimeout(). 3988 + amend recent changes for Ada95 binding to make it build with 3989 Cygwin's linker, e.g., with configure options 3990 --enable-broken-linker --with-ticlib 3991 3992 20070428 3993 + add a configure check for gcc's options for inlining, use that to 3994 quiet a warning message where gcc's default behavior changed from 3995 3.x to 4.x. 3996 + improve warning message when checking if GPM is linked to curses 3997 library by not warning if its use of "wgetch" is via a weak symbol. 3998 + add loader options when building with static libraries to ensure that 3999 an installed shared library for ncurses does not conflict. This is 4000 reported as problem with Tru64, but could affect other platforms 4001 (report Martin Mokrejs, analysis by Tim Mooney). 4002 + fix build on cygwin after recent ticlib/termlib changes, i.e., 4003 + adjust TINFO_SUFFIX value to work with cygwin's dll naming 4004 + revert a change from 20070303 which commented out dependency of 4005 SHLIB_LIST in form/menu/panel/c++ libraries. 4006 + fix initialization of ripoff stack pointer (cf: 20070421). 4007 4008 20070421 4009 + move most static variables into structures _nc_globals and 4010 _nc_prescreen, to simplify storage. 4011 + add/use configure script macro CF_SIG_ATOMIC_T, use the corresponding | http://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=NEWS;hb=c72b2c2c48ade76b5b4090b0f54b619a4fd483d9 | CC-MAIN-2022-40 | refinedweb | 29,772 | 66.13 |
Source alignment and coordinate frames¶
The aim of this tutorial is to show how to visually assess that the data are well aligned in space for computing the forward solution, and understand the different coordinate frames involved in this process.
Topics
Let’s start out by loading some data.
import os.path as op import numpy as np import mne from mne.datasets import sample print(__doc__) data_path = sample.data_path() subjects_dir = op.join(data_path, 'subjects') raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif') trans_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif') raw = mne.io.read_raw_fif(raw_fname) trans = mne.read_trans(trans_fname) src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')) a source space... Computing patch statistics... Patch information added... Distance information added... [done] Reading a source space... Computing patch statistics... Patch information added... Distance information added... [done] 2 source spaces read
Understanding coordinate frames¶
For M/EEG source imaging, there are three coordinate frames (further explained in the next section) that we must bring into alignment using two 3D transformation matrices that define how to rotate and translate points in one coordinate frame to their equivalent locations in another.
mne.viz.plot_alignment() is a very useful function for inspecting
these transformations, and the resulting alignment of EEG sensors, MEG
sensors, brain sources, and conductor models. If the
subjects_dir and
subject parameters are provided, the function automatically looks for the
Freesurfer MRI surfaces to show from the subject’s folder.
We can use the
show_axes argument to see the various coordinate frames
given our transformation matrices. These are shown by axis arrows for each
coordinate frame:
shortest arrow is (R)ight/X
medium is forward/(A)nterior/Y
longest is up/(S)uperior/Z
i.e., a RAS coordinate system in each case. We can also set
the
coord_frame argument to choose which coordinate
frame the camera should initially be aligned with.
Let’s take a look:
fig = mne.viz.plot_alignment(raw.info, trans=trans, subject='sample', subjects_dir=subjects_dir, surfaces='head-dense', show_axes=True, dig=True, eeg=[], meg='sensors', coord_frame='meg') mne.viz.set_3d_view(fig, 45, 90, distance=0.6, focalpoint=(0., 0., 0.)) print('Distance from head origin to MEG origin: %0.1f mm' % (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3]))) print('Distance from head origin to MRI origin: %0.1f mm' % (1000 * np.linalg.norm(trans['trans'][:3, 3]))) dists = mne.dig_mri_distances(raw.info, trans, 'sample', subjects_dir=subjects_dir) print('Distance from %s digitized points to head surface: %0.1f mm' % (len(dists), 1000 * np.mean(dists)))
Out:
Using lh.seghead for head surface. Distance from head origin to MEG origin: 65.0 mm Distance from head origin to MRI origin: 29.9 mm Using surface from /home/circleci/mne_data/MNE-sample-data/subjects/sample/bem/sample-head.fif. Distance from 72 digitized points to head surface: 1.7 mm
Coordinate frame definitions¶
- Neuromag/Elekta/MEGIN head coordinate frame (“head”, pink axes)
The head coordinate frame is defined through the coordinates of anatomical landmarks on the subject’s head: Usually the Nasion (NAS), and the left and right preauricular points (LPA and RPA). Different MEG manufacturers may have different definitions of the coordinate head frame. A good overview can be seen in the FieldTrip FAQ on coordinate systems.
For Neuromag/Elekta/MEGIN, the head coordinate frame is defined by the intersection of
the line between the LPA (red sphere) and RPA (purple sphere), and
the line perpendicular to this LPA-RPA line one that goes through the Nasion (green sphere).
The axes are oriented as X origin→RPA, Y origin→NAS, Z origin→upward (orthogonal to X and Y).
- MEG device coordinate frame (“meg”, blue axes)
The MEG device coordinate frame is defined by the respective MEG manufacturers. All MEG data is acquired with respect to this coordinate frame. To account for the anatomy and position of the subject’s head, we use so-called head position indicator (HPI) coils. The HPI coils are placed at known locations on the scalp of the subject and emit high-frequency magnetic fields used to coregister the head coordinate frame with the device coordinate frame.
From the Neuromag/Elekta/MEGIN user manual:
The origin of the device coordinate system is located at the center of the posterior spherical section of the helmet with X axis going from left to right and Y axis pointing front. The Z axis is, again normal to the plane with positive direction up.
Note
The HPI coils are shown as magenta spheres. Coregistration happens at the beginning of the recording and the data is stored in
raw.info['dev_head_t'].
- MRI coordinate frame (“mri”, gray axes)
Defined by Freesurfer, the MRI (surface RAS) origin is at the center of a 256×256×256 1mm anisotropic volume (may not be in the center of the head).
Note
We typically align the MRI coordinate frame to the head coordinate frame through a rotation and translation matrix, that we refer to in MNE as
trans.
A bad example¶
Let’s try using
trans=None, which (incorrectly!) equates the MRI
and head coordinate frames.
mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src, subjects_dir=subjects_dir, dig=True, surfaces=['head-dense', 'white'], coord_frame='meg')
Out:
Using lh.seghead for head surface. Getting helmet for system 306m
It is quite clear that the MRI surfaces (head, brain) are not well aligned to the head digitization points (dots).
A good example¶
Here is the same plot, this time with the
trans properly defined
(using a precomputed matrix).
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample', src=src, subjects_dir=subjects_dir, dig=True, surfaces=['head-dense', 'white'], coord_frame='meg')
Out:
Using lh.seghead for head surface. Getting helmet for system 306m
Defining the head↔MRI
trans using the GUI¶
You can try creating the head↔MRI transform yourself using
mne.gui.coregistration().
First you must load the digitization data from the raw file (
Head Shape Source). The MRI data is already loaded if you provide the
subjectand
subjects_dir. Toggle
Always Show Head Pointsto see the digitization points.
To set the landmarks, toggle
Editradio button in
MRI Fiducials.
Set the landmarks by clicking the radio button (LPA, Nasion, RPA) and then clicking the corresponding point in the image.
After doing this for all the landmarks, toggle
Lockradio button. You can omit outlier points, so that they don’t interfere with the finetuning.
Note
You can save the fiducials to a file and pass
mri_fiducials=Trueto plot them in
mne.viz.plot_alignment(). The fiducials are saved to the subject’s bem folder by default.
Click
Fit Head Shape. This will align the digitization points to the head surface. Sometimes the fitting algorithm doesn’t find the correct alignment immediately. You can try first fitting using LPA/RPA or fiducials and then align according to the digitization. You can also finetune manually with the controls on the right side of the panel.
Click
Save As...(lower right corner of the panel), set the filename and read it with
mne.read_trans().
For more information, see step by step instructions in these slides. Uncomment the following line to align the data yourself.
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
Alignment without MRI¶
The surface alignments above are possible if you have the surfaces available
from Freesurfer.
mne.viz.plot_alignment() automatically searches for
the correct surfaces from the provided
subjects_dir. Another option is
to use a spherical conductor model. It is
passed through
bem parameter.
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto') src = mne.setup_volume_source_space(sphere=sphere, pos=10.) mne.viz.plot_alignment( raw.info, eeg='projected', bem=sphere, src=src, dig=True, surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True)
Out:
Fitted sphere radius: 91.0 mm Origin head coordinates: -4.1 16.0 51.7 mm Origin device coordinates: 1.4 17.8 -10.3 mm Equiv. model fitting -> RV = 0.0034856 % mu1 = 0.944754 lambda1 = 0.137089 mu2 = 0.667504 lambda2 = 0.683819 mu3 = -0.26966 lambda3 = -0.0105378 Set up EEG sphere model with scalp radius 91.0 mm Sphere : origin at (-4.1 16.0 51.7) mm radius : 81.9 mm grid : 10.0 mm mindist : 5.0 mm Setting up the sphere... Surface CM = ( -4.1 16.0 51.7) mm Surface fits inside a sphere with radius 81.9 mm Surface extent: x = -86.0 ... 77.8 mm y = -65.9 ... 97.9 mm z = -30.2 ... 133.7 mm Grid extent: x = -90.0 ... 80.0 mm y = -70.0 ... 100.0 mm z = -40.0 ... 140.0 mm 6156 sources before omitting any. 2300 sources after omitting infeasible sources not within 0.0 - 81.9 mm. 1904 sources remaining after excluding the sources outside the surface and less than 5.0 mm inside. Adjusting the neighborhood info. Getting helmet for system 306m Triangle neighbors and vertex normals...
It is also possible to use
mne.gui.coregistration()
to warp a subject (usually
fsaverage) to subject digitization data, see
these slides.
Total running time of the script: ( 0 minutes 18.519 seconds)
Estimated memory usage: 9 MB
Gallery generated by Sphinx-Gallery | https://mne.tools/stable/auto_tutorials/source-modeling/plot_source_alignment.html | CC-MAIN-2020-10 | refinedweb | 1,517 | 52.66 |
Building a Multi-Step Registration Form with React
A simple React example of showcasing a multi-step registration where using state can determine what's rendered.
We've really enjoyed working with React here at Viget. We've used it on client projects, personal ventures, and most recently on Pointless Corp.
One great feature of React is how it handles the state of our application. Each time the state of a React component updates, React will rerender an updated view. Translation: React is great at showing the user something when something happens -- if it needs to.
I thought a good example of showcasing this ability would be in a multi-step registration where we update a component's state to show which step the user is on, then show the fields for that step accordingly. But before we dive in let's see what we'll be building.
Live Demo at CodePen and Github Repo
I added a little more markup, some CSS, and a progress bar to visualize the current step a little clearer. Other than that, we'll essentially be building the same thing.
Getting Started
We'll have a 4-step registration process. The user will:
- Enter basic account information
- Answer a short survey
- Confirm the information is correct
- Be shown a success message
An easy way to show just the relevant fields for a given step is to have that content organized into discrete components. Then, when the user goes to the next step in the process, we'll increase the step state by 1. React will see the change to our step state and automatically rerender the component to show exactly what we want the user to. Here's the basic code:
//
When the step is 1 (when our component is first loaded) we'll show the Account fields, at 2 we'll show Survey questions, then Confirmation at 3, and finally a success message on the 4th step. I'm including these components using the CommonJS pattern; each of these will be a React component.
Next, we'll create an object to hold the values our user will be entering. We'll have a name, email, password, age, and favorite colors fields. For now let's save this information as fieldValues at the top of our parent component (Register.jsx).
// file: Registration.jsx var fieldValues = { name : null, email : null, password : null, age : null, colors : [] } // The rest of our file ...
Our first component we show to the user, <AccountFields />, contains the fields used to create a new account: name, password, and email. When the user clicks "Save and Continue" we'll save the data and advance them to step 2 in the registration process.
// file: AccountFields.jsx var React = require('react') var AccountFields = React.createClass({ render: function() { return ( <div> <label>Name</label> <input type="text" ref="name" defaultValue={ this.props.fieldValues.name } /> <label>Password</label> <input type="password" ref="password" defaultValue={ this.props.fieldValues.password } /> <label>Email</label> <input type="email" ref="email" defaultValue={ this.props.fieldValues.email } /> <button onClick={ this.saveAndContinue }>Save and Continue</button></div> ) }, saveAndContinue: function(e) { e.preventDefault() // Get values via this.refs var data = { name : this.refs.name.getDOMNode().value, password : this.refs.password.getDOMNode().value, email : this.refs.email.getDOMNode().value, } this.props.saveValues(data) this.props.nextStep() } }) module.exports = AccountFields
Four things to note in <AccountFields />:
- defaultValue will set the starting value of our input. React does this instead of using the value attribute in order to account for some funkiness in how HTML handles default field input values. Click here to read more on this topic.
- We set the defaultValue to the associated this.props.fieldValues key, which is passed as properties from the parent component (Registration.jsx). This is so when the user saves and continues to the next step, but then goes back to a previous step, the input they've already entered will be visible.
- We are getting the value of these fields by referencing the DOM nodes using refs. To read up on how refs work in React check out this documentation. It's basically just an easier way of referencing a node.
- We'll need to create saveValues and nextStep methods in our Registration component (the parent), then pass it to <AccountFields /> (the child) as properties that they can reference. And since on step 2 of the process we'll have been able to go back to a previous step, we'll have to create a previousStep too.
// file: Registration.jsx ... saveValues: function(fields) { return function() { // Remember, `fieldValues` is set at the top of this file, we are simply appending // to and overriding keys in `fieldValues` with the `fields` with Object.assign // See fieldValues = Object.assign({}, fieldValues, fields) }() }, nextStep: function() { this.setState({ step : this.state.step + 1 }) }, // Same as nextStep, but decrementing previousStep: function() { this.setState({ step : this.state.step - 1 }) }, ...
From here we'll pass these newly created methods as properties to each of our child components so they can be called.
// file: Registration.jsx ... render: function() { switch (this.state.step) { case 1: return <AccountFields fieldValues={fieldValues} nextStep={this.nextStep} saveValues={this.saveValues} /> case 2: return <SurveyFields fieldValues={fieldValues} nextStep={this.nextStep} previousStep={this.previousStep} saveValues={this.saveValues} /> case 3: return <Confirmation fieldValues={fieldValues} previousStep={this.previousStep} submitRegistration={this.submitRegistration} /> case 4: return <Success fieldValues={fieldValues} /> } }
You'll notice <AccountFields /> doesn't get passed previousStep since it's our first step and you can't go back. Also, instead of passing saveValues or nextStep to <Confirmation />, we pass a newly created submitRegistration method, which will handle submitting the users input (fieldValues) and increase the step of our registration process to 4, thus showing <Success />.
We would repeat the process of creating <AccountFields /> for the <SurveyFields />, <Confirmation />, and <Success /> components. For the sake of brevity, you can check out the code on Github here, here, and here.
An Aside On Saving Data
Notice how we are saving user input and having to pass it (fieldValues={fieldValues}) to every component that needs it every time? Imagine we had even further nested components relying on this data, or were showing the data in multiple components that were being shown to the user at the same time, opening up the possibility of one having the most up-to-date data, but not the other? As you can see, our above implementation can quickly become tedious and brittle.
We can get ourselves out of this situation by saving our data in a storage entity that Facebook calls a Flux Store. From this Store we could save our data in a central location and rerender just the components that listened to changes to that data accordingly. If you're interested in learning a bit more on how this works I recommend checking out this talk by Pete Hunt.
Conclusion
React is awesome at handling what and when to show something to the user. In our example, what we're showing are related input fields (simple markup), and when those fields are shown is determined by the current step (the state of our Registration component). One way of thinking of this relationship is state determines shape. Depending on the state of our application, we're able to simply and predictably render something different, whether it's as small as a single character change or showing a completely different component altogether.
Have any questions about React or feedback on how I did something? Feel free to post a comment. | https://www.viget.com/articles/building-a-multi-step-registration-form-with-react/ | CC-MAIN-2018-22 | refinedweb | 1,240 | 55.24 |
bus_alloc_resource -- alloc resources on a bus
#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>
struct resource *
bus_alloc_resource(device_t dev, int type, int *rid, u_long start,
u_long end, u_long count, u_int flags);
This is an easy interface to the resource-management functions. It hides
the indirection through the parent's method table. This function generally
should be called in attach, but (except in some rare cases) never
earlier.
Its arguments are as follows:
dev is the device that requests ownership of the resource. Before allocation,.
start and end are the start/end addresses of the resource. If you specify
values of 0 for start and ~0 for end and 1 for count, the default
values for the bus are calculated.
count is the size of the resource, e.g. the size of an I/O port (often 1,
but some devices override this). If you specified the default values for
start and end, then the default value of the bus is used if count is
smaller than the default value and count is used, if it is bigger as.
RF_TIMESHARE resource permits time-division sharing.
A pointer to struct res is returned on success, a null pointer otherwise.
This is some example code.(dev, SYS_RES_IRQ, &irqid,
0ul, ~0ul, 1, RF_ACTIVE | RF_SHAREABLE);
bus_activate_resource(9), bus_release_resource(9), device(9), driver(9)
This man page was written by Alexander Langer <alex@big.endian.de> with
parts by Warner Losh <imp@FreeBSD.org>.
FreeBSD 5.2.1 May 18, 2000 FreeBSD 5.2.1 | http://nixdoc.net/man-pages/FreeBSD/man9/bus_alloc_resource.9.html | CC-MAIN-2013-20 | refinedweb | 261 | 68.26 |
Need working example code to expose C++ class to QJSEngine
I've studied the documentation and also researched online for help, but it seems like things are not crystal clear.
I don't understand how a class should be built such that it can be exposed to the javascript environment in QJSEngine.
So, suppose I have the next class:
-- testclass.h
#include <QObject> class QString; class testclass : public QObject{ Q_OBJECT public: testclass(); int getPropOne() const; void setPropOne(const int); QString getPropTwo() const; void setPropTwo(const QString &); private: int one; QString two; };
-- testclass.cpp
#include "testclass.h" #include <QString> testclass::testclass(){ one = 1; two = "two"; } int testclass::getPropOne() const { return one; } void testclass::setPropOne(const int newvalue){ one = newvalue; } QString testclass::getPropTwo() const { return two; } void testclass::setPropTwo(const QString &val){ two = val; }
Can you please show me some working code that would:
- expose an object of class "testclass" from C++ to the QJSEngine environment;
- make it possible to instantiate new objects of type "testclass" directly from the QJSEngine environment
Also, I'm sure I've read it somewhere but can't find it anymore... I remember the only objects I can expose from C++ to the QJSEngine environment are those derived from QObject, and must contain the Q_OBJECT macro. Can you confirm that? The documentation is very poor on QJSEngine and I'm sorry to ask you such basic questions. Thank you in advance to those willing to help me.
On this page (), the section 'Making a QObject Available to the Script Engine' has some info but I know it's not much. It mentions, in not so many words, that an instance of QObject-based class is needed if you want it accessible in JavaScript.
In general, here's what I've needed to do before:
- Create a class that is derived from QObject and add the Q_OBJECT macro.
- The function-to-be-called must be under a section with 'slots'. For example, 'public slots' or 'protected slots'.
- With an instance of that QObject-derived class, create a JavaScript object (using QJSEngine::newQObject) and assign it to a property on the Global Object associated with the script engine (using QJSValue::setProperty).
You've done #1. To do #2, in testclass.h, add
public slots:
on a line before
int getPropOne() const;
Here's an example on how to do #3:
testclass* myTestClass = new testclass(); QJSEngine jsEngine; QJSValue myTestClassObject = jsEngine.newQObject( myTestClass ); QJSEngine jsEngine.globalObject().setProperty( "myTestClass", myTestClassObject );
Within your JavaScript code, you should now be able to make calls like myTestClass.getPropOne().
About your second question, I don't know if it's possible to instantiate new 'testclass' objects directly in JavaScript. I think not but I have nothing to back that up.
Thank you very much, your answer is really helpful!
I will investigate more about instantiating new objects but for now I could only find how to do that from QML. I remember though that it needed some createNew() method in the class that would return the new object... I don't really know how this works but If someone has any idea let me know.
Have you tried what the person in this post did:
I've not tried it out myself but it looks like you'll need to register testclass using qmlRegisterType and prepend Q_INVOKABLE to the 'createNew' function that returns the new object.
Right now I am confused about the correlation between QML and QJSEngine. I mean, the QJSE is supposed to provide a JS interpreter and (possibly) all the functionality to interact between JS and C++. So, to some extent, QJSEngine is supposed to be complete on its own, without needing QML. I think, the poor documentation on this part lacks to mention the proper interaction between the two which consequently confuses people on what they actually need for the task.
I will try looking through all the documentation about QML and QJSEngine, hopefully I'll understand more about them. And I'll try to use the Q_INVOKABLE way. I'll be back with news as soon as I get something to work. | https://forum.qt.io/topic/60538/need-working-example-code-to-expose-c-class-to-qjsengine | CC-MAIN-2018-22 | refinedweb | 681 | 61.46 |
2. Fixed array[SIZE]; SIZE = 4.
3. Each array points to dynamic array
So do i have the user determine the array size with the pointer? I'm so confused how to do a ragged array. I get
that its a multidimensional array with multiple inputs and outputs. But how do I go about prompting the user for it
and printing out each possible outcome? Like user says station 1, lab 2, and their custom id number. I hate being
confused.
#include <iostream> using namespace std; int main () { int *lab1, *id, *station; cout << "What station are you going to be on? "; lab1 = int[i]; cin >> i; if (lab1[0] == 0) { cout << "Error"; } else { for (lab=0; n<i; n++) { cout << "Enter ID number: "; cin >> lab[n]; } cout << "You have entered: "; for (lab=0; n<i; n++) cout << lab[n] << ", "; delete[] p; } return 0; } | http://www.dreamincode.net/forums/topic/302654-ragged-array-and-pointers/ | CC-MAIN-2016-22 | refinedweb | 143 | 73.37 |
Investors in Zillow Group Inc (Symbol: Z) saw new options become available today, for the June 14th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the Z options chain for the new June 14th contracts and identified one put and one call contract of particular interest.
The put contract at the $32.00 strike price has a current bid of $2.00. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $32.00, but will also collect the premium, putting the cost basis of the shares at $30.00 (before broker commissions). To an investor already interested in purchasing shares of Z, that could represent an attractive alternative to paying $32.47.25% return on the cash commitment, or 53.05% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Zillow Group Inc, and highlighting in green where the $32.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $33.00 strike price has a current bid of $2.15. If an investor was to purchase shares of Z stock at the current price level of $32.47/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $33.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 8.25% if the stock gets called away at the June 14th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if Z shares really soar, which is why looking at the trailing twelve month trading history for Zillow Group Inc, as well as studying the business fundamentals becomes important. Below is a chart showing Z's trailing twelve month trading history, with the $33.00 strike highlighted in red:
Considering the fact that the $33.62% boost of extra return to the investor, or 56.21% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 62%, while the implied volatility in the call contract example is 65%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $32.47) to be 61%.. | https://www.nasdaq.com/articles/interesting-z-put-and-call-options-june-14th-2019-05-02 | CC-MAIN-2021-31 | refinedweb | 415 | 64.1 |
It’s been a while since my last meaningful post – I’ve been tied up with our DR Project (but working with some pretty funky technologies – I like Virtualisation, but now that I have been working with a Veeam Backup and Replication my love of the technology has increased substantially). On the plus side, during my break I started thinking and playing around with methods to remove some of our modifications – something that I have been preaching for over a year now but fallout from pesky national state of emergency got in the way – so I expect to be posting a few more articles over the next week or so.
One of the headaches that I’ve had with Smart Office and jscripts is handling messages that come back from the server – be they information, warnings or errors.
I’ve needed to keep track of them for GL Imports and Budget Imports and more recently my work on removing one of our modifications (which I will be discussing soon). Previously I’d walk the visual tree looking for the control which displayed the error message at the bottom of the panel – a method which works but isn’t really fantastic and encounters issues if the user has their error messages displayed in a dialog box.
The removal of one of our modifications is a panel that has been changed to remove a lot of needless fields, and change the tab-order for our forklift users. Our forklift users do tend to get a little bored at times and are more likely than most of my other staff to change settings – for example, turning dialog box notifications on :-@
So, I wanted something a little more robust, not to mention I find it a little offensive that I should have to jump through those hoops to get a simple piece of information.
Out came Visual Studios Object Browser and a search for the word “Error” – looks like there are lots of methods to retrieve various error information within Smart Office but nothing seemed to fit the bill. The few that did seemed to fail when I created a script to test them.
But fate smiled upon me – and I found something interesting. MForms.Runtime has a class called Runtime, under Runtime is a property called “Result”. Looked very promising.
So I threw together a little wee script which would query what was returned. And well, it’s a fair bit of information to be honest, but I hit pay-dirt.
The Runtime.Result property returns an xml document which provides layout information for the panel, information about the session and then ControlData. It is the ControlData element that I was interested in.
Under the ControlData element there was three elements that held information that I was looking for, <Msg> <MsgID> <MsgLvl>, ok, well it’s really only one of these that I was interested in – the <Msg> element. It contains the data / message that will be displayed to the user. If there is no error/warning/information, then the message element won’t appear at all in the response document – so you can search for the existence of the <Msg> element within the document <string>.IndexOf(“<Msg>”) and fail if it exists, or succeed if it doesn’t exist.
Runtime.Result property returned by MMS100 – it shows a message of “Transaction type not permitted in this function”
Runtime.Result property returned by OIS101 after adding a line – it shows no <Msg> element whatsoever as no error occurred.
And the curveball
MMS100/N Quick Entry on a lot controlled item comes up with a request to confirm before the submission of the transaction.
Below is the script that I used to extract the XMLDocument.
If you attach it to OIS101 and then add a line with an incorrect product code you’ll see the results – run the script again then try again entering a correct product code.
Oh, if anyone finds a better way then please let me know 🙂
import System; import System.Windows; import System.Windows.Controls; import MForms; import Mango.Core; import MForms.Controls; import Mango.UI.Services.Applications; package MForms.JScript { class testResponse { var gController; var gDebug; public function Init(element: Object, args: Object, controller : Object, debug : Object) { gController = controller; gDebug = debug; var content : Object = controller.RenderEngine.Content; // TODO Add your code here gController.add_RequestCompleted(OnRequestCompleted); } public function OnRequestCompleted(sender: Object, e: RequestEventArgs) { if(e.CommandType == MNEProtocol.CommandTypeKey) { if(e.CommandValue == MNEProtocol.KeyEnter) { if(null != gController.Runtime.Result) { gDebug.WriteLine(gController.Runtime.Result); // remove the event handler gController.remove_RequestCompleted(OnRequestCompleted); } } } } }
Thanks for that Scott. I’ll need to try this approach with the issue we discussed last year for sounding a note when an error message is displayed. Does this approach deal with the situation when the user turns dialog box notifications on and thus they don’t appear in the message bar at the bottom of the panel?
Cheers, Al.
Hi Al,
in my testing, yes, it gets around the issue when the user has “Display system messages in a dialog window” turned on.
Cheers,
Scott
Hi Scott & Al,
Karl posted another solution about error messages here: | https://potatoit.kiwi/2012/01/16/extracting-the-error-message/ | CC-MAIN-2017-39 | refinedweb | 858 | 51.07 |
jGuru Forums
Posted By:
Anonymous
Posted On:
Thursday, August 11, 2005 11:04 AM
Hello,
I am having problems in sending a parameter from my action to my jsp. I want to know what I gotta do to get the parameter in the jsp.
My action is the following:
public class ListaColecaoAction extends Action{
public ActionForward execute (
ActionMapping mapping,
ActionForm form,
HttpServletRequest request,
HttpServletResponse response) throws Exception {
List lista = ColecaoService.getInstance().getColecaoList();
request.setAttribute("col", lista);
return mapping.findForward("ok");
}
}
As you see, I did a "request.setAttribute("col", lista);", but it doesn't seem enough.
I am trying to get it in my jsp with a logic:iterate like this:
but it gives me the error "cannot find bean col in scope request". I know the problem is only regarding the parameter, because when I put a scriplet to call the method in the jsp, is worked perfectly. But I don't wanna use the scriplet.
What do I gotta do to make ir work? What if I want to use JSTL (c:forEach)?
Thank you all | http://www.jguru.com/forums/view.jsp?EID=1257698 | CC-MAIN-2015-18 | refinedweb | 178 | 65.93 |
!- Search Loader --> <!- /Search Loader -->
Good Afternoon all,
I am attempting to configure Wake on Lan for a customer who has multiple sites. We have purchased a provisioning cert from GoDaddy for the Central site server which also sefrves as the OOB management point for the site. So far provisioning has hit a few bumps (I'll be posting seperately about that) but some machines are provisioning. The site architechture is pretty simple. There is a central site with a few secondary sites below it, and a primary site below the central site, which also has some secondary sites below it. My question is, do I need to purchase another cert for the OOB management point in the primary site below the central site? If not, do I just import the sert I have now into the primary servers store?
Any and all help is appreciated.
Thanks!
As long as the two OOB service points exist in the same DNS namespace, differing hostnames should not matter. You will need to import the certificate (include the certificate chain, and the private key, when you export it) directly into the properties page of the Out of Band Component Configuration node in the ConfigMgr console.
Cheers,
Trevor Sullivan | https://community.intel.com/t5/Intel-vPro-Platform/SCCM-Certificates-required-for-child-sites/td-p/406652 | CC-MAIN-2020-50 | refinedweb | 204 | 61.16 |
Gradle: build.gradle vs. settings.gradle vs. gradle.properties
Last modified: June 9, 2020
1. Overview
In this article, we'll look at the different configuration files of a Gradle Java project. Also, we'll see the details of an actual build.
You can check this article for a general introduction to Gradle.
2. build.gradle
Let's assume that we're just creating a new Java project by running gradle init –type java-application. This'll leave us with a new project with the following directory and file structure:
build.gradle gradle wrapper gradle-wrapper.jar gradle-wrapper.properties gradlew gradlew.bat settings.gradle src main java App.java test java AppTest.java
We can consider the build.gradle file as the heart or the brain of the project. The resulting file for our example looks like this:
plugins { id 'java' id 'application' } mainClassName = 'App' dependencies { compile 'com.google.guava:guava:23.0' testCompile 'junit:junit:4.12' } repositories { jcenter() }
It consists of Groovy code, or more precisely, a Groovy-based DSL (domain specific language) for describing the builds. We can define our dependencies here and also add things like Maven repositories used for dependency resolution.
The fundamental building blocks of Gradle are projects and tasks. In this case, since the java plugin is applied, all necessary tasks for building a Java project are defined implicitly. Some of those tasks are assemble, check, build, jar, javadoc, clean and many more.
These tasks are also set up in such a way, that they describe a useful dependency graph for a Java project, meaning it's generally enough to execute the build task and Gradle (and the Java plugin) will make sure, that all necessary tasks are performed.
If we need additional specialized tasks, like, e.g., building a Docker image, it would also go into the build.gradle file. The easiest possible definition of tasks looks like this:
task hello { doLast { println 'Hello Baeldung!' } }
We can run a task by specifying it as an argument to the Gradle CLI like this:
$ gradle -q hello Hello Baeldung!
It'll do nothing useful, but print out “Hello Baeldung!” of course.
In case of a multi-project build, we'd probably have multiple different build.gradle files, one for each project.
The build.gradle file is executed against a Project instance, with one Project instance created per subproject. The tasks above, which can be defined in the build.gradle file, reside inside the Project instance as part of a collection of Task objects. The Tasks itself consists of multiple actions as an ordered list.
In our previous example, we've added a Groovy closure for printing out “Hello Baeldung!” to the end of this list, by calling the doLast(Closure action) on our hello Task object. During the execution of Task, Gradle is executing each of its Actions in order, by calling the Action.execute(T) method.
3. settings.gradle
Gradle also generates a settings.gradle file:
rootProject.name = 'gradle-example'
The settings.gradle file is a Groovy script as well.
In contrast to the build.gradle file, only one settings.gradle file is executed per Gradle build. We can use it to define the projects of a multi-project build.
Besides, we can also possible to register code as part of different life cycle hooks of a build.
The framework requires the existence of the settings.gradle in a multi-project build, while it's optional for a single-project build.
This file is used after creating the Settings instance of the build, by executing the file against it and thereby configuring it. This means that we're defining subprojects in our settings.gradle file like this:
include 'foo', 'bar'
and Gradle is calling the void include(String… projectPaths) method on the Settings instance when creating the build.
4. gradle.properties
Gradle doesn't create a gradle.properties file by default. It can reside in different locations, for example in the project root directory, inside of GRADLE_USER_HOME or in the location specified by the -Dgradle.user.home command line flag.
This file consists of key-value pairs. We can use it to configure the behavior of the framework itself and it's an alternative to using command line flags for the configuration.
Examples of possible keys are:
- org.gradle.caching=(true,false)
- org.gradle.daemon=(true,false)
- org.gradle.parallel=(true,false)
- org.gradle.logging.level=(quiet,warn,lifecycle,info,debug)
Also, you can use this file to add properties directly to the Project object, e.g., the property with its namespace: org.gradle.project.property_to_set
Another use case is specifying JVM parameters like this:
org.gradle.jvmargs=-Xmx2g -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
Please note that it needs to launch a JVM process to parse the gradle.properties file. This means these JVM parameters only effect separately launched JVM processes.
5. The Build in a Nutshell
We can summarize the general lifecycle of a Gradle build as follows, assuming we don't run it as a daemon:
- It launches as a new JVM process
- It parses the gradle.properties file and configures Gradle accordingly
- Next, it creates a Settings instance for the build
- Then, it evaluates the settings.gradle file against the Settings object
- It creates a hierarchy of Projects, based on the configured Settings object
- Finally, it executes each build.gradle file against its project
6. Conclusion
We've seen, how different Gradle configuration files fulfill various development purposes. We can use them to configure a Gradle build as well as Gradle itself, based on the needs of our project.
“In our previous example, we’ve added a Groovy Clojure for printing out “Hello Baeldung!”” – false, Groovy works with cloSures, while cloJure is a LISP variant running on JVM.
Hey Filip,
Thanks, fixed. | https://www.baeldung.com/gradle-build-settings-properties | CC-MAIN-2020-40 | refinedweb | 967 | 58.99 |
Building a BitClout Social Network Visualization App With Memgraph and D3.js
Learn how to develop a simple application for visualizing and analyzing the BitClout social network using Memgraph, Python, and use D3.js.
Introduction
BitClout is a new decentralized social network that lets you speculate on (the worth of) people and posts with real money. It’s built from the ground up as its own custom blockchain. Its architecture is similar to Bitcoin’s, except that it supports complex social network models like posts, profiles, follows, speculation features, and much more at a significantly higher throughput and scale. Like Bitcoin, BitClout is a fully open-source project and there is no company behind it, it’s just coins and code.
In this tutorial, you’ll learn how to develop a simple application for visualizing and analyzing the BitClout social network using Memgraph, Python, and use D3.js. To make things as simple as possible, you are going to use Docker to containerize your application and achieve hassle-free cross-platform development.
Prerequisites
- Memgraph DB: a Streaming Graph Application Platform that helps you wrangle your streaming data, build sophisticated models that you can query in real-time, and develop applications you never thought possible, all built on the power of the graph. Follow the Docker Installation instructions.
- Flask: a.
- Python 3: An interpreted high-level general-purpose programming language.
- pymgclient: A Python driver for connecting to Memgraph
You can find the source code in our GitHub repository if you don’t want to work on it as you go through the tutorial. If at any point in this tutorial you have a question or something is not working for you, feel free to post on StackOverflow with the tag
memgraphdb. So let’s get started!
Scraping the BitClout HODLers Data
To acquire the data from the BitClout website you’ll need to scrape the data using a method that renders HTML or ping the BitClout servers directly by using an undocumented API.
The scraping method might be a bit easier since you don’t have to worry about headers too much.
That can be done using a browser inside Python like Selenium, and parsing the HTML with beautiful soup.
Getting the API to work might be a bit trickier but the upside is; it requires no parsing. You can head over to a BitClout account of a content creator, under the Creator Coin tab, you’ll notice a call to the
/api/v0/get-hodlers-for-public-key route if inspecting network traffic with a developers console in Chrome.
All that’s left is to copy the headers from the console and convert them to a cURL command. Then you can make a post request to BitClout and get the information about each account individually.
Here’s the python script.
Creating The Flask Server Backend
Before you begin with any actual coding, you need to decide on the architecture for your web application.
The main purpose of the app is to visualizes the BitClout network so let’s create a small Flask server with one view. This will also make your life easier in case you choose to extend the app at some point with additional functionalities or different visualization options.
Let’s implement the actual Flask server that will handle everything. Start by creating the file
bitclout.py and add the following code to it:
import os
MG_HOST = os.getenv('MG_HOST', '127.0.0.1')
MG_PORT = int(os.getenv('MG_PORT', '7687'))
You are going to specify environment variables in the
docker-compose.yml file later on and this is how they are retrieved in Python. Next, let’s set up some basic logging:
import logging
log = logging.getLogger(__name__)
def init_log():
logging.basicConfig(level=logging.INFO)
log.info("Logging enabled")
logging.getLogger("werkzeug").setLevel(logging.WARNING)
init_log()
Nothing special, but this will give you a clear overview of how the app is behaving. In the same fashion, let’s define an optional (but often helpful) input argument parser:
from argparse import ArgumentParser
def parse_args():
parser = ArgumentParser(description=__doc__)
parser.add_argument("--app-host", default="0.0.0.0",
help="Host address.")
parser.add_argument("--app-port", default=5000, type=int,
help="App port.")
parser.add_argument("--template-folder", default="public/template",
help="Path to the directory with flask templates.")
parser.add_argument("--static-folder", default="public",
help="Path to the directory with flask static files.")
parser.add_argument("--debug", default=True, action="store_true",
help="Run web server in debug mode.")
parser.add_argument("--load-data", default=False, action='store_true',
help="Load BitClout network into Memgraph.")
print(__doc__)
return parser.parse_args()
args = parse_args()
This will enable you to easily change the behavior of the app on startup using arguments. For example, the first time you start your app it’s going to be with the
--load-data flag because we need to populate the database.
The next step is connecting to Memgraph so you can populate the database and fetch data at some point:
import mgclient
import time
connection_established = False
while(not connection_established):
try:
connection = mgclient.connect(
host=MG_HOST,
port=MG_PORT,
username="",
sslmode=mgclient.MG_SSLMODE_DISABLE,
lazy=True)
connection_established = True
except:
log.info("Memgraph probably isn't running.")
time.sleep(4)
cursor = connection.cursor()
This is a pretty lazy solution for connecting to the database but it works well.
It’s time to create your server instance:
from flask import Flask
app = Flask(__name__,
template_folder=args.template_folder,
static_folder=args.static_folder,
static_url_path='')
And with this, you can finally create the view functions that will be invoked from the browser via HTTP requests.
One view function is
load_all() which fetches all the nodes and relationships from the database, filters out the most important information and returns it in JSON format for visualization. To can the network load at a minimum, you will send a list with every node
id (no other information about the nodes) and a list that specifies how they are connected to each other.
import JSON
from flask import Response
@app.route('/load-all', methods=['GET'])
def load_all():
"""Load everything from the database."""
start_time = time.time()
try:
cursor.execute("""MATCH (n)-[r]-(m)
RETURN n, r, m
LIMIT 20000;""")
rows = cursor.fetchall()
except:
log.info("Something went wrong.")
return ('', 204)
links = []
nodes = []
visited = []
for row in rows:
n = row[0]
m = row[2]
if n.id not in visited:
nodes.append({'id': n.id})
visited.append(n.id)
if m.id not in visited:
nodes.append({'id': m.id})
visited.append(m.id)
links.append({'source': n.id, 'target': m.id})
response = {'nodes': nodes, 'links': links}
duration = time.time() - start_time
log.info("Data fetched in: " + str(duration) + " seconds")
return Response(
json.dumps(response),
status=200,
mimetype='application/json')
The second view function is
index() and it returns the default homepage view, i.e. the
/public/templates/index.html file:
from flask import render_template
@app.route('/', methods=['GET'])
def index():
return render_template('index.html')
The only thing that’s left is to implement and invoke the
main() method:
def main():
if args.load_data:
log.info("Loading the data into Memgraph.")
database.load_data(cursor)
app.run(host=args.app_host, port=args.app_port, debug=args.debug)
if __name__ == "__main__":
main()
After seeing the
database.load_data(cursor) line, you might ask yourself why we haven’t implemented the
database module yet. If you are interested in how to populate the database with the BitClout network, continue with the next step but otherwise skip it and just copy the contents of
database.py from GitHub.
Importing the BitClout Network into Memgraph
The BitClout network data is stored in three separate CSV files which you will use to populate Memgraph. Create a
database.py module and define a single function,
load_data(cursor). This function uses the object
cursor for submitting queries to the database. Let’s start implementing it step by step:
def load_data(cursor):
cursor.execute("""MATCH (n)
DETACH DELETE n;""")
cursor.fetchall()
cursor.execute("""CREATE INDEX ON :User(id);""")
cursor.fetchall()
cursor.execute("""CREATE INDEX ON :User(name);""")
cursor.fetchall()
cursor.execute("""CREATE CONSTRAINT ON (user:User)
ASSERT user.id IS UNIQUE;""")
cursor.fetchall()
The first query deletes everything from the database in case there was some unexpected data in it.
After each query execution comes a
cursor.fetchall() invocation which we need to commit database transactions.
The second and third queries create a database index for faster processing.
The fourth query creates a constraint because each user needs to have a unique
id property.
The query for creating nodes from CSV files looks like this:
cursor.execute("""LOAD CSV FROM '/usr/lib/memgraph/import-data/profiles-1.csv'
WITH header AS row
CREATE (sample:User {id: row.id})
SET sample += {
name: row.name,
description: row.description,
image: row.image,
isHidden: row.isHidden,
isReserved: row.isReserved,
isVerified: row.isVerified,
coinPrice: row.coinPrice,
creatorBasisPoints: row.creatorBasisPoints,
lockedNanos: row.lockedNanos,
nanosInCirculation: row.nanosInCirculation,
watermarkNanos: row.watermarkNanos
};""")
cursor.fetchall()
However, this isn’t enough to load all the nodes for the network. Because GitHub has a file size limitation, the nodes are split between two CSV files,
profiles-1.csv and
profiles-2.csv. Just copy/paste this code segment with one slight adjustment, change the line
'/usr/lib/memgraph/import-data/profiles-1.csv' to
'/usr/lib/memgraph/import-data/profiles-2.csv'.
The relationships can be created by running:
cursor.execute("""LOAD CSV FROM '/usr/lib/memgraph/import-data/hodls.csv'
WITH header AS row
MATCH (hodler:User {id: row.from})
MATCH (creator:User {id: row.to})
CREATE (hodler)-[:HODLS {amount: row.nanos}]->(creator);""")
cursor.fetchall()
Building Your Frontend with D3.js
Let’s face it, you are probably here because you want to check out if D3.js is worth considering for this kind of visualization or how easy it is to learn. That’s also why I am not going to waste your time by going through the HTML file. Just create a
index.html file in the directory
public/templates/ and copy its contents from here. The only noteworthy line is:
<canvas width="800" height="600" style="border: ..."></canvas>
This line is important because before you begin with anything else, first you need to answer the question: Should I use canvas or svg?. The answer is, as is often the case, it depends on the situation.
This StackOverflow post gives a pretty good overview of the two technologies. I short, canvas is a bit harder to master and interact with, while svg is more intuitive and simplistic when it comes to modeling interactions. On the other hand, canvas is better in terms of performance and is probably best suited for large datasets like our own. We can also use a combination of the two. In this post, we are visualizing the whole network, but if you wanted to visualize only a smaller subset, you could create a separate view with an svg element.
Create the
public/js/index.js file and add the following code:
const width = 800;
const height = 600;
var links;
var nodes;
var simulation;
var transform;
These are just some global variables that will be needed throughout the script. Now, let’s select the canvas element and get its context:
var canvas = d3.select("canvas");
var context = canvas.node().getContext('2d');
The next step is to define an HTTP request to retrieve the data from the server:
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("GET", '/load-all', true);
xmlhttp.setRequestHeader('Content-type', 'application/json; charset=utf-8');
The most crucial part is fetching the data and visualizing it. You will accomplish this inside the EventHandler
onreadystatechange() when the server responds with requested JSON data:
xmlhttp.onreadystatechange = function () {
if (xmlhttp.readyState == 4 && xmlhttp.status == "200") {
data = JSON.parse(xmlhttp.responseText);
links = data.links;
nodes = data.nodes;
simulation = d3.forceSimulation()
.force("center", d3.forceCenter(width / 2, height / 2))
.force("x", d3.forceX(width / 2).strength(0.1))
.force("y", d3.forceY(height / 2).strength(0.1))
.force("charge", d3.forceManyBody().strength(-50))
.force("link", d3.forceLink().strength(1).id(function (d) { return d.id; }))
.alphaTarget(0)
.alphaDecay(0.05);
transform = d3.zoomIdentity;
d3.select(context.canvas)
.call(d3.drag().subject(dragsubject).on("start", dragstarted).on("drag", dragged).on("end", dragended))
.call(d3.zoom().scaleExtent([1 / 10, 8]).on("zoom", zoomed));
simulation.nodes(nodes)
.on("tick", simulationUpdate);
simulation.force("link")
.links(links);
}
}
Here you parse the JSON response data and separate it into two lists,
nodes and
links. The
forceSimulation() method is responsible for arranging our network and the positions of individual nodes. You can learn more about it here.
You also need to map specific functions with events like dragging and zooming. Now, let’s implement these missing functions. The function
simulationUpdate() is responsible for redrawing the canvas when changes are made to an element’s position:
function simulationUpdate() {
context.save();
context.clearRect(0, 0, width, height);
context.translate(transform.x, transform.y);
context.scale(transform.k, transform.k);
links.forEach(function (d) {
context.beginPath();
context.moveTo(d.source.x, d.source.y);
context.lineTo(d.target.x, d.target.y);
context.stroke();
});
nodes.forEach(function (d, i) {
context.beginPath();
context.arc(d.x, d.y, radius, 0, 2 * Math.PI, true);
context.fillStyle = "#FFA500";
context.fill();
});
context.restore();
}
The
dragstart event is fired when the user starts dragging an element or text selection:
function dragstarted(event) {
if (!event.active) simulation.alphaTarget(0.3).restart();
event.subject.fx = transform.invertX(event.x);
event.subject.fy = transform.invertY(event.y);
}
The
dragged event is fired periodically as an element is being dragged by the user:
function dragged(event) {
event.subject.fx = transform.invertX(event.x);
event.subject.fy = transform.invertY(event.y);
}
The
dragended event is fired when a drag operation is being ended:
function dragended(event) {
if (!event.active) simulation.alphaTarget(0);
event.subject.fx = null;
event.subject.fy = null;
}
That’s it for the frontend!
Setting up your Docker Environment
The only thing left to do is to Dockerize your application. Start by creating a
docker-compose.yml file in the root directory:
version: "3"
services:
memgraph:
image: "memgraph/memgraph:latest"
user: root
volumes:
- ./memgraph/entrypoint:/usr/lib/memgraph/entrypoint
- ./memgraph/import-data:/usr/lib/memgraph/import-data
- ./memgraph/mg_lib:/var/lib/memgraph
- ./memgraph/mg_log:/var/log/memgraph
- ./memgraph/mg_etc:/etc/memgraph
ports:
- "7687:7687"
bitclout:
build: .
volumes:
- .:/app
ports:
- "5000:5000"
environment:
MG_HOST: memgraph
MG_PORT: 7687
depends_on:
- memgraph
As you can see from the
docker-compose.yml file, there are two separate services. One is
memgraph and the other is the web application,
bitclout. You also need to add a
Dockerfile to the root directory. This file will specify how the
bitclout image should be built. packages
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
# Copy the source code to the container
COPY public /app/public
COPY bitclout.py /app/bitclout.py
COPY database.py /app/database.py
WORKDIR /app
ENV FLASK_ENV=development
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENTRYPOINT ["python3", "bitclout.py", "--load-data"]
The command
RUN pip3 install -r requirements.txt installs all the necessary Python requirements. There are only two dependencies in the
requirements.txt file:
Flask==1.1.2
pymgclient==1.0.0
Launching your Application
The app can be started by running the following commands from the root directory:
docker-compose build
docker-compose up
If you encounter errors about permissions, just add
sudo to the commands.
The first time you run the container, leave everything as it is. Afterward, you don’t have to load the BitClout data into Memgraph from CSV files anymore because Docker volumes are used to persist the data. You can turn off the automatic CSV loading by changing the last line in
./Dockerfile to:
ENTRYPOINT ["python3", "bitclout.py"]
This is essentially the same as running the server without the
--load-data flag.
Conclusion
In this tutorial, you learned how to build a BitClout visualization app with Memgraph, Python, and D3.js. From here you can do a few things:
- If you have any questions, comments, or suggestions, make sure to drop us a line on our community forum.
- If you would like to build your own visualization app, you can download Memgraph here.
- If you want to read other similar tutorials, check out this flight network analysis tutorial, or this fraud detection tutorial.
If you end up building something of your own with Memgraph and D3.js, make sure to share your project with us, and we’ll be happy to share it with the Memgraph community! | https://gdespot.medium.com/building-a-bitclout-social-network-visualization-app-with-memgraph-and-d3-js-f2ebd4eaec27?source=post_internal_links---------7---------------------------- | CC-MAIN-2021-25 | refinedweb | 2,737 | 51.65 |
Have you looked at -
?
On Thu, Jan 23, 2014 at 9:35 AM, AnilKumar B <akumarb2010@gmail.com> wrote:
> Hi,
>
> We tried setting up HDFS name node federation set up with 2 name nodes. I
> am facing few issues.
>
> Can any one help me in understanding below points?
>
> 1) how can we configure different namespaces to different name node? Where
> exactly we need to configure this?
>
See the documentation. If it is not clear, please open a jira.
>
> 2) After formatting each NN with one cluster id, Do we need to set this
> cluster id in hdfs-site.xml?
>
There is no need to set the cluster id in hdfs-site.xml
>
> 3) I am getting exception like, data dir already locked by one of the NN,
> But when don't specify data.dir, then it's not showing exception. So what
> could be the issue?
>
Are you running the two namenode processes on the same machine?
>
> Thanks & Regards,
> B Anil Kumar.
>
--
--. | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-user/201401.mbox/%3CCADdVdVHgERa51N0=1gFSis64E7ygs2U+cTkZCDjHo2-i+A+9rQ@mail.gmail.com%3E | CC-MAIN-2018-26 | refinedweb | 162 | 78.75 |
From the Android Performance Udacity course, to the 100 Days of Google Dev videos, Google put out an extraordinary amount of resources to help developers create faster, leaner, and smarter Android applications last year. There was no doubt about it, 2015 was the year of #perfmatters.
The performance of your software is hugely important. But also important is developer performance. How fast, lean and smart is your Android development setup? Here are some tips to create an efficient Android development setup.
Auto organize imports
Back in the days of Eclipse, Cmd+Shift+O (Ctrl+Shift+O on Windows) was second nature to me. I hit these keys so frequently, I even would accidentally type them at the end of every sentence when writing an email. One great time saver in Android Studio is to let the IDE import for you on the fly. You can turn this on by going to Preferences → Editor → General → Auto Import, and selecting both “Optimize imports on the fly” and “Add unambiguous imports on the fly.”
In the past 2 years that I have been using Android Studio, I’ve only had 1 instance where I had to turn this feature off temporarily, as the IDE was automatically adding the wrong import. Overall this feature is really accurate and will save you a lot of time.
Don’t build your project: TDD
We all know that the Gradle build time is less than ideal, and while the Gradle and Android teams are releasing improvements, it’s still not that fast. Because of this, do whatever you possibly can to avoid rebuilding your entire project. One great way to do this is to use Unit Testing and even Test Driven Development to your advantage whenever you can.
With the combined forces of JUnit and Robolectric, you can always find part of your app that can be tested. Running an individual test or test suite will always be faster than re-building and running your project after every code change to see your progress. Move even faster by using the hot keys Ctrl+Shift+R (Ctrl+Shift+F10 on Android Studio 2.0 Preview 4, Shift+F10 on Windows) to run all tests in your current file, or just the test immediately under your cursor. You’ve sped up your development process and end up with a well tested feature; win-win!
Don’t build your project: Leverage layout preview
Another great way to avoid building your project is to leverage the Layout Preview. When you open an XML text layout file in Android Studio, expand the Preview tab located along the right border of the XML file. This gives you a view of what your layout will look like once compiled.
Once you start working with layout preview, you might get frustrated that the preview is not showing you exactly what you want to see. For example, it is common for a screen’s initial layout to be pretty empty if your app will be dynamically populating view content. Some of your views’ visibility may even be set to “gone” initially. The layout you see in the preview will look a lot emptier than the spec you are designing against.
Enter the powerful layout tools attributes. The XML tools attributes allow you
to see a more clear picture of your complete layout, without writing throwaway
code or using dummy values that a user might accidentally see. To use the tools
attributes in your XML file, you will first need to add the tools namespace.
You can do this by adding
xmlns:tools="" to
your file’s parent view element. Alternatively, you can start typing
tools in
any view element, then Opt+Enter (Alt+Enter on Windows) to
automatically add the namespace. It’s also a good idea to include
context for
your tools, so any custom themes and styles are applied to the preview as well.
Do this by including
tools:context=".MainActivity" in the parent element.
Here are the top XML tools attributes that I have found useful:
tools:text="title"- Set text on your view only for the layout preview. There is no lint warning for using a hardcoded string here, since it is just for debugging.
tools:src="@drawable/my_img"- Set the image on your view only for the layout preview.
tools:visibility="visible"- Set your view elements to visible, invisible, or gone only for the layout preview.
tools:listitem="@layout/custom_layout"- Show the actual list item layout inside your ListView preview. Use
tools:textand
tools:srcto fill out your list item layout, and you will see the full picture of what your ListView should look like.
tools:showIn="@layout/acitivity_main"- Render your current layout inside the layout which includes it. This allows you to get the big picture of what your layout will look like, without building the entire project, or having a single monolithic layout file. The wrapping layout is greyed a bit in the preview, so it is clear which parts of the layout you are currently editing.
These are just a few small things you can start doing now to use your tools more efficiently. Be a master of your tools, and speed up your development time this year! | https://thoughtbot.com/blog/developer-perf-matters | CC-MAIN-2021-39 | refinedweb | 870 | 61.67 |
Lopy4 onewire has SyntaxError
Hello,
I am running on the latest firmware "1.18.2.r5" with the pysense board. I am using a DS18B20 temperature sensor connected to Pin 10. The code will work for awhile but eventually had this error:
Traceback (most recent call last):
File "main.py", line 15, in <module>
File "onewire.py", line 215
SyntaxError: invalid syntax
from onewire import DS18X20
Line 215 in onewire.py:
temp_msb = data[1]
Any idea what could be causing this? I am using all the libraries supplied by Pycom for DS18X20 and onewire
@Dylan I just tried that code, and it works. Do you have a 4.7 kOhm Pull-up resistor between Vdd and Data of the DS18B20?
I've uploaded a few times, uploads with no errors. I've got a work around by doing:
def get_temp(): temp_value = None count = 0 while temp_value == None: try: count = count + 1 print(temp_value) temp2.start_conversion() time.sleep(1) temp_value = temp2.read_temp_async() time.sleep(1) if count == 2: machine.reset() except SyntaxError: print("\n") print("\n") print("\n") print("\n") print("syntax error") print("\n") print("\n") print("\n") time.sleep(0.5) machine.reset() return(temp_value)
I know machine.reset() is probably not the best approach but it's the simplest that I know :P
@Dylan have you checked that the onewire.py file has been correctly uploaded? Maybe try to re-upload it.
Also check that you don’t have any files with the same name lying around which could cause confusion. | https://forum.pycom.io/topic/4788/lopy4-onewire-has-syntaxerror | CC-MAIN-2020-40 | refinedweb | 253 | 68.97 |
Introduction: How to Write a Library the Easy Way
First thing is first I am not a programmer or have a lot of experience programming I am just a beginner. there are a lot of ways of doing things this is my way maybe is the wrong way but it works for me. This is for beginners like my self, if you have any questions about anything just let me know and i will try to help. it is hard for me to explain some things with out going in to detail but ill try my best to answer. with that being said lets get started...
Step 1: The Intro
I have been working in a project that uses a Pt2322 6 channel audio processor that uses i2c protocol, there is only one library written by oddwires and its a bit hard to use. I was having some problems understanding all the functions so I started to study the library and after some research it all made sense, so I decided to write my own blink library .
This library consist of 3 files
*.cpp
*.h
keywords.txt
Step 2: The Header File
the code:
#include "Arduino.h"
class Blinker //we define our class
{
public: //public declared variables & functions, accessible outside of the class
Blinker (int pin, int duration); //default constructor of Blinker class
void blink (int times); //we define our function
private : //private declared variables & functions, accessible inside the class
int _pin;
int _d;
}; //end of class definition
looking at the above code we can see how we have created a class object named Blinker, and we see our function
blink we have set up are variables public and private. this is our header file or *.h now our.....
Step 3: Our Cpp File
#include "Arduino.h"
#include "Diylibrary.h" //include our header file
Blinker::Blinker (int pin, int duration) //calling constructor
{ pinMode(pin, OUTPUT); // make pin an output
_pin = pin; //pin
_d = duration / 2; //wait halft of the wanted period
}
void Blinker::blink(int times) //we define our blink fuction
{
for (int i = 0; i< times; i++)
{
digitalWrite(_pin, HIGH);
delay(_d);
digitalWrite(_pin, LOW);
} }
if you look at the code you see that this is our main function first we include our header file in this case "diylibrary.h"
all the lines are well commented so i wont go into much detail. the constructor in this case Blinker assigns all the parameters to the variables. the blink fucntions does all our work for us.
Step 4: Our Keywords File
with out the keywords file nothing from our library would be recognized by the environment and highlighted in color. Each line has the name of the keyword, followed by a tab followed by the kind of keyword. Classes should be KEYWORD1 and are colored orange; functions should be KEYWORD2 and will be brown.
so we add these to our keywords file ans save it as a *.txt
Blinker KEYWORD1
Blink KEYWORD2
Step 5: Using the Library
#include <Diylibrary> // we include our library
int ledPin =11; // green led pin
int ledPin1 = 12; //white led pin
int Duration=500; //our duration
Blinker BlinkWhite (ledPin, Duration); //our new created object will blink pin 11 for the value on our duration variable
Blinker BlinkGreen (ledPin1, Duration); //another new created object will blink pin 12 for value on our duration variable
void setup(){ }
void loop()
{
BlinkWhite.blink (3); // our class is blink it takes a single argument of the number of times to flash
delay(2000);
BlinkGreen.blink(3); // blink our white led the wait 2 seconds blink our green led and wait 2 seconds repeat (3) times delay(2000);
}
really simple code we define our pins and the duration. we create two objects BlinkWhite and BlinkGreen we add this to our loop and declare the number of times to flash with only 3 lines of codes we can keep adding more and more leds.
add the library to the arduino library folder and to make it work just connect two leds one on pin 11 and the other one on pin 12 and run the sketch
Recommendations
We have a be nice policy.
Please be positive and constructive.
2 Comments
I will update this instructable with more details and definitions of constructors destructors, methods, classes, fucntion etc.. ect.. and will add a new class and Fader fuctions to the library so it is easier to understand and follow.
Nice informational based instructable! Very useful for those that are new to coding concepts. | http://www.instructables.com/id/How-to-write-a-library-the-easy-way/ | CC-MAIN-2018-22 | refinedweb | 749 | 65.25 |
Formatting Frustrations — Dataframes
Getting Presentation Ready Formats with Aggregate Functions
this seems like nothing to write about; however, given Python lacks the point and click functionality offered in Excel, accessing individual elements within a dataframe can be challenging. I will demonstrate this dilemma by introducing you to the function that was giving me the formatting fits: df.describe(). Before I start I will remind you of the target audience for this blog. This blog along with my previous posts are targeted toward the beginner. Furthermore, the solution proposed below, to me seems like genius, but may seem like painting by the numbers to more advance data alchemists. As a side not during the process of developing my work around I felt like doing what the guy above is doing 😊. Now lets get started
What is a Df.describe()?
df.describe() is one of my favorite aggregate functions within Python. With one word and some parenthesis “()” python generates 8 key statistics providing information on the distribution of your data. Below is an example, using a small set of movie data (# of movies, box office gross lifetime sales for the Top 10 movie actors).
Step 1. Loading And df.describing() Your Data
Code snippets shown in grey for easy cutting and pasting
import pandas as pd
# Create data for analysis
d ={‘Function’:[‘Actor’,’Actor’,’Actor’,’Director’,’Director’,’Director’,’Producer’,’Producer’,’Producer’,’Writer’,’Writer’,’Writer’],
‘Name’:[‘Will Smith’,’Samuel L. Jackson’,’Robert Downey, Jr.’,’Joe Russo’,’Steven Spielberg’,’Jennifer Lee’,’Kevin Feige’,’Kathleen Kennedy’,’Jerry Bruckheimer’,’Jennifer Lee’,’Stephen McFeely’,’Christopher Markus’],
'Box Office’:[7052831352,5655231264,5474166329,4561643598,4542164603,2634334761,8545426433,5999104450,4306736018,3504008616,3175631122,3175631122],
‘MovieCount’:[68,64,43,16,36,6,30,35,36,15,12,12]}
# Load your data into a DataFrame
df_Movie_Data = pd.DataFrame(data=d)
# Aggregate your data using df.describe()
df_Movie_Data.describe()
Wow look at that output! We were able to generate 8 descriptive statistics (count, mean, std, min, max top 25%, 50%, 75%), on our data…across both columns! Very easy and pretty awesome! However, there is a problem. The formatting for our “Box Office” column is in scientific notation and it should be in $’s and our “Movie Counts” column which is just the number of movies added together have several trailing 0’s. I really can’t understand the results myself nor could I present these numbers to folks in the business without appearing lazy, and we don’t want that! These numbers definitely are not what I would call “presentation ready”. No problem Python has a solution for formatting, as we saw in my previous post, we can use df.style.format().
Step 2. Formatting Output of df.Describe() using .style.format()
The .format() function take in as arguments the name of the column(s) and the format you want. In my case {‘Box Office’:’${0:,.0f}’, ‘MovieCount’: ‘{0:,.0f}’}
# Use .style.format() to format and create a dictionary for each column defining the format
format_dict = {‘Box Office’:’${0:,.0f}’, ‘MovieCount’: ‘{0:,.0f}’}
df_Movie_Data.describe().style.format(format_dict)
Great! My formats are working! But wait… “count” in column “Box Office” is formatted as a $ (see highlight), and that makes no sense. How do I format “count” to be a number and not currency? No problem, I will simply use the index of the dataframe and reset that row to format without the “$” sign…Right?
Wrong! I get an error. Once you apply a format to your entire dataframe using .style.format(), the return object is of type Styler, not dataframe and Styler has no way to target its elements by row or column. You are not able to use lambda on a Styler either…. ARG….. So what should you do? Cut and paste the dataframe into Excel, reformate the cell, and drop into your presentation? Sure, that is a solution but becoming a Python users requires fortitude so we pressed on!
Proposed Solution/ Workaround
After several frustrating hours of searching and reaching out to my Python network at Flatiron bootcamp I found the following work around
# Use Apply Map for formatting with Criteria by element instead of applying a format to entire column
df_Movie_Data_Desc.applymap(lambda x: ‘{:.0f}’.format(x) if int(x) <100 else ‘${:,.0f}’.format(x))
Explanation: Essentially what is happening is I am using lambda and .applymap() NOT .apply() (see differences here) to access and format each individual cell/ element and looking at its value to determine if I should apply a format. Given I know the data and that smaller numbers in my data set are actually just counts vs currency I apply my formats to the entire dataframe element by element using the value to drive the format I apply.
Final Note
Certainly, there are limitations to my work around. Thankfully in my case the value of the data was an indicator of the formatting I wanted to apply at the element level. Something to note here using the .applymap() vs. apply was necessary as .apply() tries to access the .format() function using the entire Series and Series does not have a function called .format(). However, applymap feeds the lambda function element wise — meaning cell by cell in Excel terms. By feeding the lambda function elementwise vs. Series wise you are able to access the .format() method which is available on an int object level, essentially the value of the specific cell.
Hopefully this example helps others looking to apply different formats to the same column/ row within a dataframe. As I learned during this journey having two or more formats in one column or row is not as easy as just using the df.stlye().format() technique. In hindsight this task was especially frustrating knowing how easily this could be done using Excel. However, becoming a data alchemist requires mastery of Python and mastery requires commitment. Overcoming small mountains today, fuels larger victories tomorrow! Hopefully you enjoyed this writeup and it is helpful to you? See you in the next journey 😊
Next Stop — DATA ALCHEMY!
| https://rgpihlstrom.medium.com/formatting-frustrations-dataframes-510c1ba1fd37 | CC-MAIN-2021-31 | refinedweb | 998 | 56.55 |
NPP C# plugin - How to get the list of Notepad++'s open files.
- Alessandro last edited by Alessandro last edited by
Can you explain me why the method saveCurrentSessionDemo present in the Demo NPP C# plugin, even if the Win32.SendMessage return the full path of the file to be saved (i.e. C:\text.txt) the file text.txt is not created. It happens only if the static string sessionFilePath = @“C:\text.txt”" and it doesn’t if *static string sessionFilePath = @“C:\Users\USR1\Desktop\text.txt”.
Here is the source code of the saveCurrentSessionDemo
static void saveCurrentSessionDemo()
{
string sessionPath = Marshal.PtrToStringUni(Win32.SendMessage(PluginBase.nppData._nppHandle, (uint) NppMsg.NPPM_SAVECURRENTSESSION, 0, sessionFilePath));
if (!string.IsNullOrEmpty(sessionPath))
MessageBox.Show(sessionPath, “Saved Session File :”, MessageBoxButtons.OK);
}
Regards and thanks in advance
Alessandro
I’m a little bit confused about what is working and what doesn’t.
Are you saying if you specify sessionFilePath = “C:\text.txt” it is working
but if you specify sessionFilePath = “C:\Users\USR1\Desktop\text.txt"
it does not work?
Or is it vice versa?
As I’m not a csharp developer I tried this with python and both ways work for me
so I have to assume that the path C:\Users\USR1\Desktop does not exist or permissions
problems happen.
Cheers
Claudia
You confusion comes from the fact I’m not a native english, sorry.
But yes in my case sessionFilePath = “C:\text.txt” doesn’t work. It also for me a matter of permissions. Then the question is Win32.SendMessage doesn’t return a path null or empty as the file ins’t created.
Thank you so much.
Alessandro
- Claudia Frank last edited by Claudia Frank
I tried this with python and it is behaving the same. I don’t get an info that the call failed.
I did a quick look to the npp source code and it looks like we should get the info
case NPPM_SAVECURRENTSESSION: { return (LRESULT)fileSaveSession(0, NULL, reinterpret_cast<const TCHAR *>(lParam)); }
and
const TCHAR * Notepad_plus::fileSaveSession(size_t nbFile, TCHAR ** fileNames, const TCHAR *sessionFile2save) { if (sessionFile2save) { Session currentSession; if ((nbFile) && (fileNames)) { for (size_t i = 0 ; i < nbFile ; ++i) { if (PathFileExists(fileNames[i])) currentSession._mainViewFiles.push_back(generic_string(fileNames[i])); } } else getCurrentOpenedFiles(currentSession); (NppParameters::getInstance())->writeSession(currentSession, sessionFile2save); return sessionFile2save; } return NULL; }
In case of success it should return the full path, in case of error NULL.
So I assume, this isn’t correctly implemented, either in python and c# plugin pack
or npp. Hmm, as I do not have a windows operating system anymore I can hardly test it. Sorry.
Cheers
Claudia
- gurikbal singh last edited by
@Alessandro
BGuenthner has good answer here for get all open files and current open files include new tab | https://community.notepad-plus-plus.org/topic/14560/npp-c-plugin-how-to-get-the-list-of-notepad-s-open-files/ | CC-MAIN-2019-51 | refinedweb | 455 | 58.69 |
The React Radar chart visualizes data in terms of values and angles. It provides options for visual comparison between several quantitative or qualitative aspects of a situation. It supports interactive features such as selection and tooltip.
The React Radar Chart provides an option to reverse the axis labels and ticks. This swaps the higher and lower ranges of an axis.
Customize the start angle of the chart to rotate it..
Supports different types of series like line, column, spline, range area, scatter, and stacked area.
You can create a wind rose chart with the help of stacked column series in polar and radar charts.
Easily get started with React Radar Chart by using a few lines of HTML and TS code, as demonstrated below. Also explore our React Radar Chart Example that shows how to render and configure the chart.
import { Chart, RadarSeries, Category } from '@syncfusion/ej2-charts'; Chart.Inject(RadarSeries, Category); let chart: Chart = new Chart({ primaryXAxis: { valueType: 'Category' }, series:[{ type: 'Radar', drawType: 'Line', dataSource: [{ x: 'JPN', text: 'Japan', y: 5156, y1: 4849, y2: 4382, y3: 4939 }, { x: 'DEU', text: 'Germany', y: 3754, y1: 3885, y2: 3365, y3: 3467 }, { x: 'FRA', text: 'France', y: 2809, y1: 2844, y2: 2420, y3: 2463 }, { x: 'GBR', text: 'UK', y: 2721, y1: 3002, y2: 2863, y3: 2629 }, { x: 'BRA', text: 'Brazil', y: 2472, y1: 2456, y2: 1801, y3: 1799 }, { x: 'RUS', text: 'Russia', y: 2231, y1: 2064, y2: 1366, y3: 1281 }, { x: 'ITA', text: 'Italy', y: 2131, y1: 2155, y2: 1826, y3: 1851 }, { x: 'IND', text: 'India', y: 1857, y1: 2034, y2: 2088, y3: 2256 }, { x: 'CAN', text: 'Canada', y: 1843, y1: 1793, y2: 1553, y3: 1529 }], xName: 'x', yName: 'y', }, ], }, '#Chart');
<!DOCTYPE html> <html> <head></head> <body style = 'overflow: hidden'> <div id="container"> <div id="Chart"></div> </div> <style> #control-container { padding: 0px !important; } </style> </body> </html>
Learn the available options to customize the React Radar chart.
Polar Chart API Reference
Explore the React Radar chart APIs. | https://www.syncfusion.com/react-ui-components/react-charts/chart-types/radar-chart | CC-MAIN-2021-31 | refinedweb | 325 | 66.98 |
We’ve seen how to create a Dictionary using LINQ. A Dictionary is a hash table in which only one object may be stored for each key. It can also be useful to store more than one object for a given key, and for that, the C# Lookup<> (part of the System.Linq namespace) generic type can be used.
LINQ provides the ToLookup() method for creating Lookups. It works in much the same way as ToDictionary(), except that as many objects as you like can be attached to each key.
Returning to our example using Canadian prime ministers, we can create a Lookup in which the key is the first letter of the prime minister’s last name. The code is
PrimeMinisters[] primeMinisters = PrimeMinisters.GetPrimeMinistersArray(); var pmLookup01 = primeMinisters.ToLookup(pm => pm.lastName[0]); var keys01 = pmLookup01.Select(pm => pm.Key).OrderBy(key => key); Console.WriteLine("----->pmLookup01"); foreach (var key in keys01) { Console.WriteLine("PMs starting with {0}", key); foreach (var pm in pmLookup01[key]) { Console.WriteLine(" - {0}, {1}", pm.lastName, pm.firstName); } }
This is the simplest version of ToLookup(). The method takes a single argument, which is a function specifying how to calculate the key. In this case, we just take the first char in the string pm.lastName.
For some reason, the Lookup class doesn’t contain a property for retrieving the list of keys, so we need to use a roundabout method to get them. Line 5 uses a Select() to retrieve the keys and an OrderBy() to sort them into alphabetical order. We can then iterate over the keys and, for each key, we can iterate over the prime ministers for that key. Note that the object pmLookup01[key] is not a single object; rather it contains a list of all prime ministers whose last name begins with the letter contained in the key.
The output from this code is:
----->pmLookup01 PMs starting with A - Abbott, John PMs starting with B - Bowell, Mackenzie - Borden, Robert - Bennett, Richard PMs starting with C - Clark, Joe - Campbell, Kim - Chrétien, Jean PMs starting with D - Diefenbaker, John PMs starting with H - Harper, Stephen PMs starting with L - Laurier, Wilfrid PMs starting with M - Macdonald, John - Mackenzie, Alexander - Meighen, Arthur - Mackenzie King, William - Mulroney, Brian - Martin, Paul PMs starting with P - Pearson, Lester PMs starting with S - St. Laurent, Louis PMs starting with T - Thompson, John - Tupper, Charles - Trudeau, Pierre - Turner, John
We can do the same thing using the second form of ToLookup(), which allows us to specify an EqualityComparer to be used in determining which keys are equal. The comparer class looks like this:
using System.Collections.Generic; namespace LinqObjects01 { class LookupComparer : IEqualityComparer<string> { public bool Equals(string x, string y) { return x[0] == y[0]; } public int GetHashCode(string obj) { return obj[0].GetHashCode(); } } }
This comparer compares two strings and says they are equal if their first characters are equal. Using this class, we can apply the second form of ToLookup():
PrimeMinisters[] primeMinisters = PrimeMinisters.GetPrimeMinistersArray(); var pmLookup02 = primeMinisters.ToLookup(pm => pm.lastName, new LookupComparer()); var keys02 = pmLookup02.Select(pm => pm.Key).OrderBy(key => key); Console.WriteLine("----->pmLookup02"); foreach (var key in keys02) { Console.WriteLine("PMs starting with {0}", key[0]); foreach (var pm in pmLookup02[key]) { Console.WriteLine(" - {0}, {1}", pm.lastName, pm.firstName); } }
The first argument to ToLookup() now passes the entire pm.lastName, and the second argument to ToLookup() is the comparer object.
When we print out the results, we have to remember that the key for each entry in the Lookup is now the full last name of the first prime minister encountered whose name starts with a given letter. Thus if we printed out the full key, we’d get a full last name. That’s why we print out key[0] on line 8; that way we get the first letter of the name.
The third version of ToLookup() allows us to specify a custom data type to return. If we wanted just the first and last names of each prime minister, for example, we could write:
PrimeMinisters[] primeMinisters = PrimeMinisters.GetPrimeMinistersArray(); var pmLookup03 = primeMinisters.ToLookup(pm => pm.lastName[0], pm => new { lastName = pm.lastName, firstName = pm.firstName }); var keys03 = pmLookup03.Select(pm => pm.Key).OrderBy(key => key); Console.WriteLine("----->pmLookup03"); foreach (var key in keys03) { Console.WriteLine("PMs starting with {0}", key); foreach (var pm in pmLookup03[key]) { Console.WriteLine(" - {0}, {1}", pm.lastName, pm.firstName); } }
We’ve returned to using the first letter of the last name as the key (that is, there’s no comparer), and passed in an anonymous data type as the second argument to ToLookup(). Apart from that, the code is the same as in the first example.
Finally, the fourth version allows us to specify both a custom data type and a comparer, so we can combine that last two examples to get this:
PrimeMinisters[] primeMinisters = PrimeMinisters.GetPrimeMinistersArray(); var pmLookup04 = primeMinisters.ToLookup(pm => pm.lastName, pm => new { lastName = pm.lastName, firstName = pm.firstName }, new LookupComparer()); var keys04 = pmLookup04.Select(pm => pm.Key).OrderBy(key => key); Console.WriteLine("----->pmLookup04"); foreach (var key in keys04) { Console.WriteLine("PMs starting with {0}", key[0]); foreach (var pm in pmLookup04[key]) { Console.WriteLine(" - {0}, {1}", pm.lastName, pm.firstName); } }
Now we’re back to using the full last name as the key, since the comparer does the checking for matching first letters. Just make sure you pass in the arguments in the right order: (1) choose key; (2) choose custom data type; (3) choose comparer. | https://programming-pages.com/2012/09/28/linq-tolookup/ | CC-MAIN-2018-26 | refinedweb | 913 | 66.33 |
kstars
#include <flagcomponent.h>
Detailed Description
Represents a flag on the sky map.
Each flag is composed by a SkyPoint where coordinates are stored, an epoch and a label. This class also stores flag images and associates each flag with an image. When FlagComponent is created, it seeks all file names beginning with "_flag" in the user directory and consider them as flag images.
The file flags.dat stores coordinates, epoch, image name and label of each flags and is read to init FlagComponent
- Version
- 1.1
Definition at line 44 of file flagcomponent.h.
Constructor & Destructor Documentation
Constructor.
Definition at line 39 of file flagcomponent.cpp.
Member Function Documentation
Add a flag.
- Parameters
-
Definition at line 159 of file flagcomponent.cpp.
Draw the object on the SkyMap
skyp a pointer to the SkyPainter to use.
Implements SkyComponent.
Definition at line 63 of file flagcomponent.cpp.
Get epoch.
- Returns
- the epoch as a string
- Parameters
-
Definition at line 246 of file flagcomponent.cpp.
epochCoords return coordinates recorded in original epoch
- Parameters
-
- Returns
- pair of RA/DEC in original epoch
Definition at line 368 of file flagcomponent.cpp.
Get list of flag indexes near specified SkyPoint with radius specified in pixels.
- Parameters
-
Definition at line 311 of file flagcomponent.cpp.
Return image names.
- Returns
- the list of all image names
Definition at line 236 of file flagcomponent.cpp.
Get image.
- Returns
- the image associated with the flag
- Parameters
-
Definition at line 276 of file flagcomponent.cpp.
Get images.
- Returns
- all images that can be use
Definition at line 306 of file flagcomponent.cpp.
Get image.
- Parameters
-
- Returns
- an image from m_Images
Definition at line 343 of file flagcomponent.cpp.
Get image name.
- Returns
- the name of the image associated with the flag
- Parameters
-
Definition at line 291 of file flagcomponent.cpp.
Get label.
- Returns
- the label as a string
- Parameters
-
Definition at line 256 of file flagcomponent.cpp.
Get label color.
- Returns
- the label color
- Parameters
-
Definition at line 266 of file flagcomponent.cpp.
Load flags from flags.dat file.
Definition at line 82 of file flagcomponent.cpp.
Remove a flag.
- Parameters
-
Definition at line 185 of file flagcomponent.cpp.
Save flags to flags.dat file.
Definition at line 143 of file flagcomponent.cpp.
- Returns
- true if component is to be drawn on the map.
Reimplemented from SkyComponent.
Definition at line 77 of file flagcomponent.cpp.
Return the numbers of flags.
- Returns
- the size of m_PointList
Definition at line 241 of file flagcomponent.c from SkyComponent.
Definition at line 379 of file flagcomponent.cpp.
Update a flag.
- Parameters
-
Definition at line 206 of file flagcomponent. | https://api.kde.org/extragear-api/edu-apidocs/kstars/html/classFlagComponent.html | CC-MAIN-2019-30 | refinedweb | 434 | 53.37 |
The break statement is used inside loops and switch case.
C – break statement
1. It is used to come out of the loop instantly. When a break statement is encountered inside a loop, the control directly comes out of loop and the loop gets terminated. It is used with if statement, whenever used inside loop.
2. This can also be used in switch case control structure. Whenever it is encountered in switch-case block, the control comes out of the switch-case(see the example below).
Flow diagram of break statement
Syntax:
break;
Example – Use of break in a while loop
#include <stdio.h> int main() { int num =0; while(num<=100) { printf("value of variable num is: %d\n", num); if (num==2) { break; } num++; } printf("Out of while-loop"); return 0; }
Output:
value of variable num is: 0 value of variable num is: 1 value of variable num is: 2 Out of while-loop
Example – Use of break in a for loop
#include <stdio.h> int main() { int var; for (var =100; var>=10; var --) { printf("var: %d\n", var); if (var==99) { break; } } printf("Out of for-loop"); return 0; }
Output:
var: 100 var: 99 Out of for-loop
Example – Use of break statement in switch-case
#include <stdio.h> int main() { int num; printf("Enter value of num:"); scanf("%d",&num); switch (num) { case 1: printf("You have entered value 1\n"); break; case 2: printf("You have entered value 2\n"); break; case 3: printf("You have entered value 3\n"); break; default: printf("Input value is other than 1,2 & 3 "); } return 0; }
Output:
Enter value of num:2 You have entered value 2
You would always want to use break statement in a switch case block, otherwise once a case block is executed, the rest of the subsequent case blocks will execute. For example, if we don’t use the break statement after every case block then the output of this program would be:
Enter value of num:2 You have entered value 2 You have entered value 3 Input value is other than 1,2 & 3
in break statement, the variable “num” is not declared in program.
Nice collections of examples of break statement in switch case in c programming…Thankyou
write a c program of switch case statement that will output the following: 2*3=6. 2+3=5. 4+6=10. 4/5=0.8 | https://beginnersbook.com/2014/01/c-break-statement/ | CC-MAIN-2018-05 | refinedweb | 405 | 60.48 |
Hello everyone,
I have been writing Flash games for a while and now I would like to write games for browser, Android and iOS. I am very comfortable with AS3 (coming from Java) have read through books and tutorials, had online trainining from Lynda.com you name it. I would like to hear from real developres some best practice approaches to the 'write once, deploy anywhere' feature of Flash Professional. Namely I would like to know:
These are the main things I would like to know. Feel free to add any hands-on knowledge you would offer me and users of this forum. I think this is a great discussion to have that would elimate a lot of the 'fire extinguishing' that takes place on help forums. I am running CS6 Web Premium on Windows 7.
- Should I use timeline code?
- no. only use timeline code for simple projects (and i can't think of any game that's simple coding-wise) and for testing code snippets. otherwise, you can have relatively complex scope issues to resolve and/or problems working on your code after you've forgotten where each snippet is located.
- Should I include third party engines like Tween Lite. For example, if I have a Tween Lite com folder in my app or game will that compile to native iOS code?
- yes, tweenlite is an excellent choice and there's no problem using it with any flash target platform.
- How should I approach sound clips and music?
- there's no problem with sound and android/iOS. for the web, sound file size is a consideration so, you may want to (pre)load your sounds.
Awesome. Immensely helpful. Your right, the coding on games can be very daunting. Recently, I ran into the "unexpected file format" error and had to learn the hard way to use version control. Are there any other pitfalls that come to your (or anyone's) mind that people should know?
Having a proactive discussion is great so people, like me, who are new to Flash but not to coding can avoid posting question after question while making our app/game/etc.
that's another benefit of class coding: you'll never end up with an unrecoverable class file.
be aware of the stage size differences between your target platforms and decide asap if you can live with one of the stagescalemode settings (so you can easily code one fla/one swf to publish for iOS, android and web platforms). otherwise, use a liquid layout so you can use one fla and publish 3 swfs.
Great point. So far, I am approaching it as writing an FLA for each platform making just minor adjustments (hopefully). I am trying to avoid numbers when setting sizes as much as I can. Instead I'm determining the size of graphics relative to the stage size.
There is another question that comes to mind. In a lot of games I use either counters or the timer events. For example, if I wanted to intoduce enemies onto the stage every few seconds I might use a counter which is basically a "var counter:Number = 0; counter++; Then say, if counter == 250 introduce enemy. I might do the same with the timer events.
My question is, is there a better way to approach this when trying to 'write once, run anywhere?' I ask because I have heard that the timing delays may vary on different platforms. Are there ways around this? Does Flash already account for this?
all timer events and enterframe loops are approximate. meaning, you can expect accuracy on all platforms that can maintain the average enterframe rate. ie, if you overload a platform, the average enterframe rate (and every timer rate) will fall behind the expected rate.
however, flash only attempts to maintain the average frame rate and timer intervals. the duration between loops can and does very tremendously which is not a problem for most games. it is a problem for anyone trying to maintain a musical beat.
You are a tremendous help kglad!
you're welcome. and, good luck!
Hmmmm. Quick question about timeline code. Will timeline code compile and port into iOS and Android? I have found it very easy to control the functions of a movieclip loaded with buttons and symbols through the timeline. It is just easier for me. I simply write the whole MovieClip's logic on one screen and then instatiate it onto the first frame of my game all as one MovieClip.
If timeline code will compile, I would love to use it. So does it compile and port to other platforms?
yes, it will compile for all the publishable platforms.
Here's a spinner. "Scenes." What's the verdict on scenes in cross-platform design. Should they be avoided? Do they compile 'nicely.' I make a note of avoiding them but they are a very convenient, almost lazy, way of transitioning to different parts of a game. For example, say there is a disclaimer I want to run at the beginning of a game, for non-commercial browser publishing I would just put a scene at the beginning, throw a few hundred key frames into it and it would play, stay for a while and then disappear forever.
Back to the question. Are scenes to be avoided, in cross-platform publishing? Are there any good approaches to using scenes for cross-platform publishing, keeping in mind the 'write-once, run anywhere' portability aspect?
using scenes in as3 doesn't have any unexpected problems that i know of. i never use them but that may be a hold-over from as2 which had a problem with scenes.
you just have to be aware that at frame 1 of the next scene all objects created in the ide are gone. anything created with actionscript will still exist from one scene to the next.
fastalgebra, I'm going to go off the rails (sorry, kglad) and say that I've had tremendous success with timeline code. I've done some class work, but I'm really a procedural programmer at heart. I've done some extremely large and complex stuff with timeline code, albeit with a ton of trial-and-error over many years to learn what works and what doesn't. My code placement is consistent, it's reusable, and it's easy for me to go back to projects from years ago to find things. Granted, I don't use the timeline for actual animation or timing... I just use the timeline as a "filing cabinet" for all my code.
While I haven't done any Android work, I've built apps that work on all iPhones and the iPad, with one .fla and one .swf (or .ipa).
I think there are simply a lot of ways to skin the cat.
Thanks for chiming in AV
(helpful
)
No doubt about it. There are, however, a lot of caveats to using timeline code. For example, always remembering to copy/paste into notepad and save in the case of a corrupted FLA, being limited to only one frame vs the tentacles of a document class, weight (it seems all the code loads and runs without waiting to be called) and the list goes on and on. You get the point.
Timeline code is like a filing cabinet, as you put it, whereas the class approach is like a books on a shelf. I think Kglad's opinion was specific to the question I asked, bearing in mind that I was making games. In the case of games, timeline code can become very expenisive. I try to reuse code wherever I can to keep things light.
My approach, so far, is still mostly class based but I almost always use timeline code for animations varying from five to thirty lines of code on average, I would say. Then I just call pre-animated symbols from my document class. Seems to be working so far.
Thanks for sharing that timeline code is porting nicely into .ipa. That is a huge reassurance! That was actually my main concern. Android is still using a virtual machine (so I am not too worried about that) but I heard that publishing to .ipa converts to native code. Hats off to the team who pulled that off.
And like you said, there are definitely "a lot of ways to skin the cat." Happy coding.
Question. When coding for different platforms, would it present a problem to reference a movieclip in timeline code (via an instance name) and then reference the same movieclip in the document class with a different name? I know using the same name may be a namespace conflict but what if the movieclip is called on twice at the same time?
Obviously, this is not a problem for the browser, but what about when it converts to native code or ports for Android?
an object can have only one instance name at any one time.
it can have different paths depending where the code is that references the object and you can use different references/variables to reference the object but you can't assign different instance names. | http://forums.adobe.com/thread/1219339 | CC-MAIN-2014-15 | refinedweb | 1,533 | 72.66 |
Simple Deluge Client
Project description
A lightweight pure-python rpc client for deluge. Note, does not support events and any additional replies from deluge will mess up the datastream.
Requirements
Deluge 1.3.x, 2.0 beta
Python 2.7, 3.5, 3.6, 3.7
Install
From GitHub (develop):
pip install git+
From PyPi (stable):
pip install deluge-client
Usage
>>> from deluge_client import DelugeRPCClient >>> client = DelugeRPCClient('127.0.0.1', 12345, 'username', 'password') >>> client.connect() >>> client.connected True >>> client.call('core.get_torrents_status', {}, ['name']) {'79816060ea56d56f2a2148cd45705511079f9bca': {'name': 'TPB.AFK.2013.720p.h264-SimonKlose'}} >>> client.core.get_torrents_status({}, ['name']) {'79816060ea56d56f2a2148cd45705511079f9bca': {'name': 'TPB.AFK.2013.720p.h264-SimonKlose'}}
It is also usable as a context manager.
>>> from deluge_client import DelugeRPCClient >>> with DelugeRPCClient('127.0.0.1', 12345, 'username', 'password') as client: ... client.call('core.get_torrents_status', {}, ['name']) {'79816060ea56d56f2a2148cd45705511079f9bca': {'name': 'TPB.AFK.2013.720p.h264-SimonKlose'}}
Idiom to use for automatic reconnect where the daemon might be offline at call time.
import time from deluge_client import DelugeRPCClient, FailedToReconnectException def call_retry(client, method, *args, **kwargs): # We will only try the command 10 times for _ in range(10): try: return client.call(method, *args, **kwargs) except FailedToReconnectException: # 5 second delay between calls time.sleep(5)
Idiom usage
client = DelugeRPCClient('127.0.0.1', 58846, 'username', 'password', automatic_reconnect=True) # The client has to be online when you start the process, # otherwise you must handle that yourself. client.connect() call_retry(client, 'core.get_torrents_status', {}, ['name']) # or if you have local Deluge instance, you can use the local client # LocalDelugeRPCClient accepts the same parameters, but username and password can be omitted from deluge_client import LocalDelugeRPCClient localclient = LocalDelugeRPCClient() localclient.connect()
List of Deluge RPC commands
Sadly, this part isn’t well documented. Your best bet is to check out the source code and try to figure
out what you need. The commands are namespaced so the commands you mostly need, core commands, are prefixed
with a
core. - Check out this search for all commands
and core.py for core commands.
The exported commands are decorated with
@export.
You can also get a list of exported commands by calling the
daemon.get_method_list method:
client.call('daemon.get_method_list') # or client.daemon.get_method_list()
License
MIT, see LICENSE
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/deluge-client/ | CC-MAIN-2022-40 | refinedweb | 391 | 52.36 |
feed processing automation
Bajet $100-300 USD
Distributor feed access database (Automated in as few clicks as possible)
My goal is to take several distributor feeds and automate the daily processing of the files to show current cost and price mark-up using the parameters below. I would like to have as much of this process automated as humanly possible. The code should account for product changes (additions, inventory levels and price changes.)
Process feeds thru a mark up process that may involve excel ( I have an excel file that produces the intended price mark-up below)
Formula for mark up.
MSRP-cost = total margin oppty
total margin oppty divided by 2 = mark up
mark up+ cost = raw price
raw price rounded up to .88
Example
MSRP-cost = total margin oppty
$199.99 - $140.00 = 59.99
total margin oppty divided by 2 = mark up
59.99 / 2 = 29.99
mark up+ cost = raw price
29.99 + 140.00 = 169.99
raw price rounded up to .88
170.88
import resulting feed thru MS access database.
Match attributes to shopping cart template. Output CSV finished feed for import into website
FTP set up to Import into OSCommerce site on daily basis at given time | https://www.my.freelancer.com/projects/php-c-c/feed-processing-automation/ | CC-MAIN-2018-13 | refinedweb | 204 | 65.12 |
#include "[path]filename.as":String
Includes the contents of the specified file, as if the commands in the file are part of the calling script. The
#include directive is invoked at compile time. Therefore, if you make any changes to an external file, you must save the file and recompile any FLA files that use it.
If you use the Check Syntax button for a script that contains
#include statements, the syntax of the included files is also checked.
You can use
#include in FLA files and in external script files, but not in ActionScript 2.0 class files.
You can specify no path, a relative path, or an absolute path for the file to be included. If you don't specify a path, the AS file must be in one of the following locations:
#includestatement
To specify a relative path for the AS file, use a single dot (.) to indicate the current directory, two dots (
..) to indicate a parent directory, and forward slashes (
/) to indicate subdirectories. See the following example section.
To specify an absolute path for the AS file, use the format supported by your platform (Macintosh or Windows). See the following example section. (This usage is not recommended because it requires the directory structure to be the same on any computer that you use to compile the script.)
If you place files in the First Run/Include directory or in the global Include directory, back up these files. If you ever need to uninstall and reinstall Flash, these directories might be deleted and overwritten.
Do not place a semicolon (;) at the end of the line that contains the #include directive.
Availability: ActionScript 1.0; Flash Player 4.0
[path]filename.as
:String - filename.asThe filename and optional path for the script to add to the Actions panel or to the current script; .as is the recommended filename extension.
The following examples show various ways of specifying a path for a file to be included in your script:
// Note that #include statements do not end with a semicolon (;) // AS file is in same directory as FLA file or script // or is in the global Include directory or the First Run/Include directory #include "init_script.as" // AS file is in a subdirectory of one of the above directories // The subdirectory is named "FLA_includes" #include "FLA_includes/init_script.as" // AS file is in a subdirectory of the script file directory // The subdirectory is named "SCRIPT_includes" #include "SCRIPT_includes/init_script.as" // AS file is in a directory at the same level as one of the above directories // AS file is in a directory at the same level as the directory // that contains the script file // The directory is named "ALL_includes" #include "../ALL_includes/init_script.as" // AS file is specified by an absolute path in Windows // Note use of forward slashes, not backslashes #include "C:/Flash_scripts/init_script.as" // AS file is specified by an absolute path on Macintosh #include "Mac HD:Flash_scripts:init_script.as"
Flash CS3 | http://www.adobe.com/livedocs/flash/9.0/main/00001155.html | crawl-002 | refinedweb | 491 | 55.13 |
Is it possible to send a URL to pythonista from my phone and make it open on Windows?
Hello everyone,,
I tried a few apps that say they sync ios with windows, but they don't work the way I want or they are very limited.
What I want to do is pass a URL from my iOS device to my windows PC and make it open automatically.
The steps I think it need to have are:
Get the URL
Pass it to Pythonista
Use a script to send this URL, as well as appropriate commands to my windows PC
Run the command, which will open a new tab, loading the URL
What I cannot do yet is to connect pythonista and windows. I think that I'll be able to ssh (now that windows 10 has native ssh). But I have little to no knowledge on these things and I don't even know how to start configuring a ssh server on my pc, let alone connect it using stash.
Do you think this is possible? If it is, could you help me understand how can I create an active server (maybe ssh, but it can also be by using powershell)
@Jhonmicky, see the instructions here for setting up SSH Remoting on Windows 10.
In Pythonista, you would only use stash if you want to send the URL manually every time; in a script, you can use the paramiko module.
@Jhonmicky, I have had the exact same need for an app I am currently developping.
You can use Desktop.py below as follows:
- Run the module on the PC
- On the iPhone :
from Desktop import announce_service # Will launch Firefox on the Desktop, and open 'your_url' announce_service(your_url, True) ... # (Optional) Will close the Firefox window announce_service(your_url, False)
Lots of things can be configured, please see in the source code. This module was developped for a specific application (MyDB, you will see it mentionned several times in the source), not as a general purpose module, you may have to tweak it a bit to suit your needs.
Note on implementation : what started out as a simple UDP broadcast function ended up as a full protocol with a fallback against routers that filter out UDP broadcast packets, and fences against lost or delayed packets, WiFi networks that are slow to authenticate, etc., as I tried it out in various environments (a tame Internet Box at home, enterprise LAN, hotel LAN, etc.)
This is still work in progress.
Hope this helps.
""" Desktop companion for MyDB Web UI. There are two components to this module: - service_daemon() runs on the desktop. It will automatically open a browser window when the user activates desktop mode on the iPhone, and close the browser window when the user exits desktop mode. This is done by listening for MyDB Web UI service announcements. - announce_service() is used by Web_UI.py, to announce that MyDB's Web UI service is available / unavailable. Revision history: 22-Jul-2019 TPO - Created this module 25-Jul-2019 TPO - Initial release """ import json import os import socket import subprocess import time import threading # When the following variable is set to True, both service_daemon() and # announce_service() will print debug information on stdout. DEBUG = True # Change the following 2 variables if using another browser than Firefox: START_BROWSER = [(r'"C:\Program Files (x86)\Mozilla Firefox\Firefox.exe" ' r'-new-window {ip}/'), (r'"C:\Program Files\Mozilla Firefox\Firefox.exe" ' r'-new-window {ip}/')] STOP_BROWSER = r'TASKKILL /IM firefox.exe /FI "WINDOWTITLE eq MyDB*"' ANNOUNCE_PORT = 50000 ACK_PORT = 50001 MAGIC = "XXMYDBXX" ANNOUNCE_COUNTER = 0 def debug(message: str) -> None: global DEBUG if DEBUG: print(message) def announce_service(ip: str, available: bool) -> bool: """ Announce that MyDB's Web UI service is available / unavailable. Broadcast the status of the MyDB Web UI service, so that the desktop daemon can open a browser window when the user activates desktop mode on the iPhone, and close the browser window when the user exits desktop mode. Arguments: - ip: string containing our IP address. - available: if True, announce that the service is now available. If False, announce that the service is now unavailable. Returns: True if the announcement has been received and ackowledged by the desktop daemon, False otherwise. Two broadcast modes are tried: - UDP broadcast is tried first (code is courtesy of goncalopp,) - If UDP broadcast fails, as can happen on LANs where broadcasting packets are filtered by the routers (think airport or hotel LAN), a brute force method is tried, by sending the announcement packet to 254 IP adresses, using values 1 - 255 for byte 4 of our own IP address (should actually use subnet mask, but this is a quick and dirty kludge !) TODO: document ACK mechanism + counter and session id """ def do_brute_force_broadcast(s: socket.socket, ip: str, port: int, data: bytes) -> None: ip_bytes_1_to_3 = ip[:ip.rfind('.')] + '.' for i in range(1, 255): print(i, sep=" ") s.sendto(data, (ip_bytes_1_to_3 + str(i), port)) print(".") global ACK_PORT, ANNOUNCE_PORT, MAGIC, ANNOUNCE_COUNTER SOCKET_TIMEOUT = 0.3 data = json.dumps({'magic': MAGIC, 'counter': ANNOUNCE_COUNTER, 'service available': available, 'IP': ip, 'Session id': time.time()}).encode('utf-8') snd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) snd.bind(('', ANNOUNCE_PORT)) rcv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) rcv.bind(('', ACK_PORT)) rcv.settimeout(SOCKET_TIMEOUT) ack = False debug(f"Counter = {ANNOUNCE_COUNTER}, announcing service is " f"{'ON' if available else 'OFF'}") for brute_force, retries in ((False, 3), (True, 8)): debug(f" Trying {'Brute force' if brute_force else 'UDP'} broadcast") snd.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, not brute_force) for retry in range(retries): debug(f" Retry # {retry}") if brute_force: brute_force_broadcast_thread = threading.Thread( target=do_brute_force_broadcast, args=(snd, ip, ANNOUNCE_PORT, data)) brute_force_broadcast_thread.start() else: snd.sendto(data, ('<broadcast>', ANNOUNCE_PORT)) while 1: try: data_bytes, addr = rcv.recvfrom(1024) except socket.timeout: debug(" Socket time out, going for next retry") break debug(f" Received {data_bytes}") try: data = json.loads(data_bytes.decode('utf-8')) except json.JSONDecodeError: debug(" Invalid JSON, ignoring") continue if (isinstance(data, dict) and data.get('magic') == MAGIC and data.get('counter') == ANNOUNCE_COUNTER and data.get('IP') == ip): debug(" ACK received") ack = True break print(" Invalid ACK, ignoring") if ack: break if brute_force: # Need to wait for broadcast_thread to be done before we proceed # to close the snd socket, or do_brute_force_broadcast() will fail # with "Errno 9: Bad file descriptor". brute_force_broadcast_thread.join() if ack: break if not ack: debug(" Both UDP and brute force broadcast methods failed, giving up") snd.close() rcv.close() ANNOUNCE_COUNTER += 1 return ack def service_daemon(): """ Automatically open / close web browser on desktop. service_daemon() runs on the desktop. It will automatically open a browser window when the user activates desktop mode on the iPhone, and close the browser window when the user exits desktop mode. This is done by listening for MyDB Web UI service announcements. """ global ACK_PORT, ANNOUNCE_PORT, MAGIC # Keep track of the counter value for last annoucement packet processed, in # order to ignore retry packets sent by announce_service(), which all have # the same counter value. last_counter = -1 # Web_UI sessions all start with a counter value of 0, so we need to keep # track of Web_UI sessions and reset last_counter every time a new session # is started (i.e. when the user activates desktop mode on the iPhone) current_session_id = -1 rcv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) rcv.bind(('', ANNOUNCE_PORT)) snd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) snd.bind(('', ACK_PORT)) debug(f"Listening on port {ANNOUNCE_PORT}") while 1: data_bytes, addr = rcv.recvfrom(1024) debug(f"Received packet from {addr}:\n '{data_bytes}'") try: data = json.loads(data_bytes.decode('utf-8')) except json.JSONDecodeError: debug(" Invalid JSON, ignoring") continue if (isinstance(data, dict) and data.get('magic') == MAGIC and 'counter' in data and 'IP' in data and 'service available' in data and 'Session id' in data): if (data['Session id'] == current_session_id and data['counter'] <= last_counter): debug(f" Ignoring MyDB announcement for counter = " f"{data['counter']}, already processed") continue current_session_id = data['Session id'] last_counter = data['counter'] debug(f" MyDB announcement: IP = {data['IP']}, " f"service {'ON' if data['service available'] else 'OFF'}, " f"counter = {data['counter']}") ack = json.dumps({'magic': MAGIC, 'counter': data['counter'], 'IP': data['IP']}).encode('utf-8') snd.sendto(ack, (data['IP'], ACK_PORT)) debug(f" ACK sent back to {data['IP']}:{ACK_PORT}") if data['service available']: debug(" Launching browser") for start_browser in START_BROWSER: try: subprocess.Popen(start_browser.format(ip=data['IP'])) except FileNotFoundError: continue break else: debug(" Closing browser") os.system(STOP_BROWSER) else: debug(" Not a MyDB announcement, ignoring") if __name__ == '__main__': service_daemon()
A few month ago someone asked a similiar question on reddit.
Assuming you can run python on both sides, you can use this example. For more details, please see the discussion in the reddit discussion linked above. This example does not use SSH, but the
socket&
webbrowsermodules instead.
@Jhonmicky, can you be more precise on your use case ?
More specifically, does your Windows 10 PC have a fixed IP address ? If so, the method proposed by @bennr01 should work (and would be simpler). If your PC does not have a fixed IP address, then you need to establish a protocol between your iOS device and your windows PC, so that they can exchange IP addresses (and this is where the module I posted earlier would come in handy).
To dialog with my Mac, I use SSH (paramiko) and I connect to the computer name of the Mac which does not have a fixed ip (dhcp) | https://forum.omz-software.com/topic/5761/is-it-possible-to-send-a-url-to-pythonista-from-my-phone-and-make-it-open-on-windows | CC-MAIN-2022-40 | refinedweb | 1,553 | 54.42 |
Hello,
I'm having issues with the player score for when they pick up collectables in my game.
Basically, in my game collectables only appear after the player travels a certain distance and are therefore spawned in with a script, now I want the players score to be kept within a game manager object.
Basics of what I need help with:
Player collides with coin obj
Coin script takes collision and stores the coins value
Coin script updates the score store within the game manager object
game manager script updates the score UI
There are multiple coin objects in the scene that are constantly spawning in
Can anyone help me with a script that would update the players score that's stored on the game manager object when they collide with the collectables, even when there are multiple collectables in the scene?
My code so far; C# :
public class coin : MonoBehaviour {
public GameObject gmScoreTxt;
public int value = 1;
private int count = 0;
public int playerScoreNum = 0;
public int setScore = 0;
void Start()
{
gmScoreTxt = GameObject.Find("gameManager");
}
void OnTriggerEnter(Collider collision)
{
if (collision.gameObject.tag == "Player")
{
count = count + value;
playerScoreNum = count;
setScore = gmScoreTxt.GetComponent<playerScore>().score;
setScore = playerScoreNum;
Destroy(gameObject);
}
GameManager:
public class playerScore : MonoBehaviour {
public int score = 0;
public Text scoreTxt;
void Update()
{
scoreTxt.text = score.ToString();
}
Thank you!
Answer by abeLincoln41
·
May 14, 2017 at 07:14 AM
Your code looks pretty good so far. Have you tried running the game and seeing if it works? One comment I have right away is that you do not have to use the toString() method with the score you can simply use score.
public Text scoreText;
public long score;
void Update () {
scoreText.text = "Score: " + Mathf.Round(score); // Doesnt have to be exactly like this.
}
As for your coin class, you are setting setScore to equal something twice, so the first line in which you set the score to is unnecessary. PlayerScoreNum is also unnecessary because you can just use count and set the score equal to that. You also have to change your score in the score manager using the coin class. You can do this by creating an object of playerScore in the coin class. Make sure you drag the score manager that you create into the public field setScore in the coin class.
playerScore setScore;
if (collision.gameObject.tag == "Player")
{
setScore.score =+ value;
Destroy(gameObject);
}
I hope this works. If it does not feel free to let me know and I will do my best to help!
Hey, thanks for the reply @abeLincoln41.
I've tried this and it still doesn't want to work, as soon as I collide with one of the coins I get this error:
NullReferenceException: Object reference not set to an instance of an object coin.OnTriggerEnter (UnityEngine.Collider collision) (at Assets/scripts/coin.cs.
when i reload the scene the score number remains the same and doesnt revert back to zero
1
Answer
In updating some score text, why should we use the Update function instead of separate functions?
0
Answers
Help making a high score with player prefs/displaying it
1
Answer
Can't update score text - small error breaks the game (Space Shooter Tutorial)
1
Answer
Spawn Object on Set Score
0
Answers | https://answers.unity.com/questions/1352899/problem-with-player-score.html | CC-MAIN-2019-43 | refinedweb | 542 | 69.82 |
Blocks and Bullets 0.1
A simple game of Blocks and Bullets
Blocks and Bullets is a simple game, which at the moment has little gameplay value. It is more an experiment on the components of a game.
Versions
Later versions have been released, and can be obtained on
This version, 0.1, is the only one obtainable as both a python package and as uncompiled code. All later versions are as a zipped pyc file.
Installation
Please ensure you are running Python 2.7 or better. Ensure you have extracted the files from the .tar.gz file.
Run the standard command:
python setup.py install
To do this, open terminal, cmd, or your nearest equivalent. Navigate to the location of the extracted files. Run the command above.
If you are running on Windows, and the above does not work, try:
setup.py install
If you are running Ubuntu and the above does not work, add sudo to the start and see how it goes.
Game Setup
Running the game should be done by running python command line and typing in:
import BlocksandBullets
To start, simply enter the amount of players, from 2 to 15. Then, give each character their name of 1-25 characters that cannot be the same between characters, and a symbol, the 1 character that is their playing piece, and cannot be a hash, arrow including v or space.
It is recommended you resize the window to be slightly taller than default.
Controls
Type in each command then press enter.
WASD to move, E to reload, and SPACE to shoot. 8426 to move, 0 to reload, and 0 to shoot. Alternatively type each command, eg "Up", "Reload", "Shoot".
Gameplay
Each player is allocated 10 hp and 10 bullets, as well as a spot on the edge of the 50x30 board, and enough room to move.
The board consists of blocks (#), players, and bullets (< v > ^).
Players take turns, either moving a spot, shooting a bullet, or reloading.
Players that shoot with no ammo will miss their turn, as will players walking into blocks, players, or the edge of the board.
Bullets move once per players' go, and so move faster relative to players the more players there are. For example, with two players, a bullet moves twice as fast as players, whereas with 15 players, bullets move 15 times as fast as players.
A bullet hitting a player will knock of 1 hp and be destroyed, bullets hitting a wall will destroy both the bullet and the wall, bullets colliding will be destroyed, and bullets two bullets hitting a wall at the same time will both be destroyed, as well as the wall.
A player with 0 hp will dissapear.
When 1 player remains they will be declared winner, but if none remain, the game will be declared a draw.
Known Bugs
Report bugs to olligobber, at
- Author: Olligobber
- License: LICENSE.txt
- Package Index Owner: olligobber
- DOAP record: Blocks and Bullets-0.1.xml | http://pypi.python.org/pypi/Blocks%20and%20Bullets/0.1 | crawl-003 | refinedweb | 497 | 73.47 |
NCM Manual Download of Configs - timing issueHerrDoktor Nov 25, 2016 10:02 AM
Hi Folks,
we have a NCM Only installation where we usually do automated config backups during the night.
Now some of our admins complained that they are not able to either download a running or a startup config of a Cisco device that is TACACS enabled.
We narrowed it down to the following:
- When you download either config (Running or Startup) you need to wait approx 30 seconds before you can download another config (manually) - we were able to verify this with a few nodes.
- This happens with either prototcol (SSH or Telnet)
- This only happens on some nodes, not on all nodes
We assume the following:
- TACACS is configured that only 1 connection per user at a time is possible
- when we disconnect from a node, the information needs to be sent/processed to the tacacs server depending on the tacacs config, and this is the duration when we can not log in again to the device
We are not maintaining the TACACS System and only have a brief understanding of that, but we need to get as much information as possible to ask the TACACS admins to help us out.
Can someone give us some ideas on how to troubleshoot
Thanks,
Holger
Re: NCM Manual Download of Configs - timing issuemesverrum
Nov 25, 2016 10:22 AM (in response to HerrDoktor)1 of 1 people found this helpful
I'm not personally too familiar with the TACACS side of this but on the NCM side you can break your backup scheduled job into two parts, one pass for running configs and another starting a few minutes later for startups. Hopefully that gets around this issue for you if there isn't an easy way to fix it in the TACACS setup.
Loop1 Systems: SolarWinds Training and Professional Services
- LinkedIN: Loop1 Systems
- Facebook: Loop1 Systems
- Twitter: @Loop1Systems
Re: NCM Manual Download of Configs - timing issuerschroeder
Nov 25, 2016 10:49 AM (in response to mesverrum)
I use this exact solution--with TACACS. One job for backing up startup configs, a separate job run later for catching running-configs. It works well.
Re: NCM Manual Download of Configs - timing issuerschroeder
Nov 25, 2016 10:52 AM (in response to HerrDoktor)1 of 1 people found this helpful
Which version of NCM are you using?
And what TACACS solutions are you running--ACS? Or something entirely different?
I've had many problems with ACS logging/reporting capabilities and reliability, and am anxiously awaiting for our ISE to replace it.
But the earlier advice is appropriate: One job for backing up startup configs, a separate job run later for catching running-configs. It works well.
Re: NCM Manual Download of Configs - timing issueHerrDoktor Nov 27, 2016 10:20 AM (in response to HerrDoktor)
Hi Guys,
thanks for your comments.
the automatic download jobs work perfectly fine. It is just when the fellow device admins are impatient and download startup and running config manually on the webpage. We have about 120 admins working with NCM And many of them are not patient. The Orion Team is constantly cancelling Download Jobs that are "stuck"'on itializing because others were not waiting 30 seconds before hitting the Download button again.
I am almost certain that this is a TACACS timing issue however at the moment it is my/Solarwinds fault And I want to get some proof that it is not.
thanks
Re: NCM Manual Download of Configs - timing issuerschroeder
Nov 27, 2016 9:10 PM (in response to HerrDoktor)
120 Admins using NCM? Impressive?
How many nodes are on your network?
How many elements do you manage?
I have 4 pollers, NPM, NCM, NTA, just under 800 managed nodes, elements are about 35,000.
Re: NCM Manual Download of Configs - timing issueHerrDoktor Nov 29, 2016 12:32 PM (in response to rschroeder)
35.000 Devices/Nodes over 3 Polling Engines NCM Only (unfortunately)
import of nodes via SDK from a CMDB
my largest installations when it comes to number of nodes.
But not the most complex installation
cheers
holger
Re: NCM Manual Download of Configs - timing issueHerrDoktor Dec 1, 2016 9:55 AM (in response to rschroeder)
I didn't answer your question correctly... the Customer has over 70.000 Network devices, but only half of them need to be managed by NCM. So the Network is a lot bigger than the NCM Installation represents. | https://thwack.solarwinds.com/thread/108042 | CC-MAIN-2018-51 | refinedweb | 743 | 56.49 |
From: Michal Nazarewicz <address@hidden> This commit adds a server-auth-key variable which allows user to specify a default authentication key used by the server process. --- lisp/server.el | 42 +++++++++++++++++++++++++++++++++++------- 1 files changed, 35 insertions(+), 7 deletions(-) Hello, attached is a patch that adds a `server-auth-key' variable, which I use to easily allow a host to connect to Emacs daemon listening on TCP port without the need of synchronising the server file each time server starts. The etc/CONTRIBUTE mentions ChangeLog entry. I'm unsure whether you need anything more then the commit message above but in case you do, here's ChangeLog entry: 2011-02-21 Michal Nazarewicz <address@hidden> (tiny change) * lisp/server.el: Introduce server-auth-key variable which allows user to specify a default authentication key used by the server process. Hope you guys don't mind git style patch mail. diff --git a/lisp/server.el b/lisp/server.el index df8cae0..3963e86 100644 --- a/lisp/server.el +++ b/lisp/server.el @@ -134,6 +134,27 @@ directory residing in a NTFS partition instead." ;;;###autoload (put 'server-auth-dir 'risky-local-variable t) +(defcustom server-auth-key nil + "Server authentication key. + +Normally, authentication key is generated on random when server +starts, which guarantees a certain level of security. It is +recommended to leave it that way. + +In some situations however, it can be difficult to share randomly +generated password with remote hosts (eg. no shared directory), +so you can set the key with this variable and then copy server +file to remote host (with possible changes to IP address and/or +port if that applies). + +You can use \\[server-generate-key] to get a random authentication +key." + :group 'server + :type '(choice + (const :tag "Random" nil) + (string :tag "Password")) + :version "24.0") + (defcustom server-raise-frame t "If non-nil, raise frame when switching to a buffer." :group 'server @@ -495,6 +516,19 @@ See variable `server-auth-dir' for details." (unless safe (error "The directory `%s' is unsafe" dir))))) +(defun server-generate-key () + "Generates and returns a random 64-byte strings of random chars +in the range `!'..`~'. If called interactively, also inserts it +into current buffer." + (interactive) + (let ((auth-key + (loop repeat 64 + collect (+ 33 (random 94)) into auth + finally return (concat auth)))) + (if (called-interactively-p) + (insert auth-key)) + auth-key)) + ;;;###autoload (defun server-start (&optional leave-dead inhibit-prompt) "Allow this Emacs process to be a server for client processes. @@ -588,13 +622,7 @@ server or call `M-x server-force-delete' to forcibly disconnect it.") (unless server-process (error "Could not start server process")) (process-put server-process :server-file server-file) (when server-use-tcp - (let ((auth-key - (loop - ;; The auth key is a 64-byte string of random chars in the - ;; range `!'..`~'. - repeat 64 - collect (+ 33 (random 94)) into auth - finally return (concat auth)))) + (let ((auth-key (or server-auth-key (server-generate-key)))) (process-put server-process :auth-key auth-key) (with-temp-file server-file (set-buffer-multibyte nil) -- 1.7.3.1 | https://lists.gnu.org/archive/html/emacs-devel/2011-02/msg00979.html | CC-MAIN-2021-31 | refinedweb | 506 | 54.63 |
Here we are for another episode of the "random thoughts" series brought
to you by Stefano's fried synapses.
In a recent message (also picked up by xmlhack.com) I expressed my
concerns about the harm that a tool like Cocoon could do to the ideas
that XML and friends propose. Many of you responded with strong
arguments about the need for XML operativity on the server side, and I
totally agree with them (of course), also it was pointed out that such
needs would not be removed by the existance of widespread XML-capable
clients, true, they would be reshaped, but never removed.
I've thought very much about this and came to the conclusion that while
Cocoon is _NOT_ harmful to the XML model in general, it leaves to the
user a very important part of the job to enforce the "semantic web".
During the last few days, I went over all the "web design issues" that
W3C director Tim Berners-Lee outlines (),
did my homework on RDF, RDF Schema and all related materials, read some
whitepapers about metadata activities and started to think on how Cocoon
could help.
I've come across many interesting ideas and powerful dreams of a "web of
knowledge" where a layer of machine understandable information is
processed to create a layer of human understandable information, but
generally easier to process by humans because already filtered by
metadata processing.
I know many of you don't know about RDF and many others believe it's
just the XML equivalent of the HTML <meta> tag. In general, RDF is
believed to be a useless waste of time. I used to think this myself, but
I think it's time to look forward... and outline the problems that RDF
and friends have.
There are problems: the baby is hard to understand and use. RDF is
generally verbose, it has this (please, allow me) "useless"
element/attribute equivalence (which breaks validation in all possible
ways), it's utterly abstracted and provides no example of use that would
pay off in the short term.
RDF is more than a year old and almost nobody (except the Mozilla
project) has been using it. Why?
I don't have a general answer but I have my own: why should I care about
embedding RDF markup in my documents, if nobody is able to use it?
But same thing could be said for RDF-based applications (the infamous
chicken & egg problem): why should I write an RDF-capable engine if
there is no content available which contains RDF?
Sure, there are RFC that teach you how to embed RDF into your HTML
(yeah, right... you wish), also RFC that teach you what metadata
elements to use (the dublin core), David Megginson also wrote an RDF
wrapper for SAX, everybody in this world knows that this might be
big....
.... but the energy gap to arrive to that usability plateau is _HUGE_
and it seems that nobody is able to write that "killer app" that makes
this ball spinning.
Can Cocoon be this "killer app"?
I strongly believe so. Let me explain why:
Cocoon (starting from its version 2.0) is based on the sitemap. The
sitemap is the location of all the processing information required to
generate a resource. This is metadata, this is "data about data". If we
clean it up a little, RDF-ize it, then it would be very easy for Cocoon
to expose its sitemap to semantic crawlers.
Also, thru the use of content negotiation, it could be possible for the
crawler to obtain the "RDF" information (which could be the original
one, or one created on purpose), which along with XLink/XBase/XPointer
would allow the crawler to crawl in a friendly manner the site.
Ok, you say, I get that, but what would be different from today?
The thing is that we are going to write that crawler and connect it
directly to Cocoon so we would gain:
1) The sitemap is the instruction for both the resource generator
processor and both the information semantic crawler. Single point of
management, but would also allow people to pay off instantly their
metadata effort.
2) Each Cocoon would have it's own semantically driven search engine.
3) Each Cocoon would connect to other semantic search engines which make
available RDF views of their information (the mozilla directory, for
example) to increase their action range.
4) Each Cocoon would be contacted by other agents (other Cocoon or
equivalently behaving) and provide RDF views of its information,
possibly already semantically processed to avoid the need of site
recrawling of that agent.
If you think about it, such "cellular" semantically-based indexing would
work much like Napster/Gnutella networks where there would be no central
point of failure.
Imagine a web where each site controls not only its information, its
schemas, its stylesheets and web applications, but also its own search
engine and everyone of them is the entry point for a distributed (but
locally manageable) semantically based searching enviornment.
It would work much like routing tables work for TCP/IP networks,
propagating information as soon as they are available or delegating
search and retrieval to other networks.
I don't know if this feasible or not, but the idea seems to me *very*
exiting, to say the least.
----------- o -------------
But how would a "semantically based search engine" work?
I still don't have a clear view of this, but I have a few ideas to
share: first of all, the RDFSchema WD adds a great deal of functionality
to the RDF idea and makes it very appealing.
[Careful here: RDFSchema is not to be confused by XMLSchema which is a
totally different thing. RDFSchema is -NOT- the XMLSchema for RDF, also
because RDF cannot be validated]
RDFSchema provides mostly object-oriented capabilities to the RDF model,
allowing, for example to say
<rdf:description
<dreamer>Stefano Mazzocchi</dreamer>
</rdf:description>
where the namespace "" indicates (with an
RDFSchema) that
dreamer --(extend)--> dc:author
where dc:author indicates the author tag of the Dublin Core standard
metadata set, which indicates the author of the described resource.
Then, on the local site, since users generally are made aware of the
specific metadata tags used, "dreamer" might have other meanings, but
for other sites that are unaware of these site-specific meanings, they
can fall back on standard "author" tag since the semantic has been
inherited.
Think about something like this where you are able to define whatever
metadata markup is required for your needs, but you provide semantic
hooks for outter searches to still match.
More or less like standard API provide functionalities that you extend
as you please, but then they allow you to run your program on any
compatible platform, such a semantic web would be based on standard set
of metadata tags, then what you need to do (if you don't want to use
those tags, or what to provide special searching capabilities) is to
extend them and make the RDFSchemas accessible in a known place.
Would this solve all of us problems? no way, like for XMLSchemas, the
problem of web "balkanization" and fragmentation exists, but as many
outlined, stable points on a dynamic system happen on the bottom of a
bell-shaped surface.
Today, we have a stable dyanamic system, since its potential energy is
on a local minimum.
W3C is providing us ideas on an ideal new minimum of the web potential
energy that lies far higher, into another local minimum.
We need to behave as catalizers to lower the energy required to move
from this minimum (current web) to the other minimum high above
(semantic web), otherwise, these ideas will simply remain in those W3C
specs and will never change our favorite information infrastructure as
they fully deserve to do.
Cocoon was born to allow the adoption of XML technologies to solve real
life problems and acted as a catalizer.
Now I want to provide the same thing to complete the job.
Don't know when I'll have time for this, but I invite you to follow me
on this quest if you like the idea... and if you think you have a better
one, it's even better.
I know I'm crazy. I!
------------------------- --------------------- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200005.mbox/%3C39203311.2A42604@apache.org%3E | CC-MAIN-2017-26 | refinedweb | 1,381 | 54.66 |
The following form allows you to view linux man pages.
#include <sys/epoll.h>
int epoll_create(int size);
int epoll_create1(int flags);
ep argu-
ment is dropped, epoll_create1() is the same as epoll_create(). The
following value can be included in flags to obtain different behavior:
EPOLL_CLOEXEC
Set the close-on-exec (FD_CLOEXEC) flag on the new file descrip-
tor. See the description of the O_CLOEXEC flag in open(2) for
reasons why this may be useful.
On success, these system calls return a nonnegative file descriptor.
size must still be greater than zero, in order to ensure backward com-
patibility when new epoll applications are run on older kernels.
close(2), epoll_ctl(2), epoll_wait(2), epoll(7)
webmaster@linuxguruz.com | http://www.linuxguruz.com/man-pages/epoll_create1/ | CC-MAIN-2020-10 | refinedweb | 120 | 55.95 |
Python language basics 61: raising exceptions
Introduction
In the previous post we saw how to extract some useful information from an exception in Python. We discussed how to store the exception in a variable using an “as” clause and how to read its type and arguments.
In this post we’ll see how to use the “raise” keyword to re-raise or re-throw an exception.
The raise keyword
Raising an exception can be helpful in order to let an exception bubble up the call stack instead of silently suppressing it.
Consider the following code:
def divide(first, second): res = 0 try: res = first / second except ZeroDivisionError: res = -1 return res res = divide(10, 2) print(res)
The function will return 5 of course. However, if you call the function with 10 and 0 then it returns -1. We catch the zero division error in the divide function and return an error code of -1. However, it can be misleading. Calling the function with e.g. 10 and -10 will also return -1. The error code alone will be an ambiguous result.
It’s a better strategy to catch the exception, handle it in some way if you wish, e.g. log it and then re-raise it so that the exception isn’t lost along the way. In general you should let exceptions bubble up the call stack so that you can discover them and fix them.
Raising an exception in Python is done with the “raise” keyword:
def divide(first, second): res = 0 try: res = first / second except ZeroDivisionError: print("Big NONO") raise return res res = divide(10, 0) print(res)
You’ll see the following output in PyCharm if you run the above code – the exact code lines and file names will most certainly differ:
Big NONO
Traceback (most recent call last):
File “C:/PythonProjects/HelloWorld/Various.py”, line 10, in
res = divide(10, 0)
File “C:/PythonProjects/HelloWorld/Various.py”, line 4, in divide
res = first / second
ZeroDivisionError: division by zero
Make sure you call “raise” as you see in the example so that all the information about the exception is preserved when it’s re-raised, or rethrown.
We can then catch the exception when calling the divide function:
def divide(first, second): res = 0 try: res = first / second except ZeroDivisionError: print("Big NONO") raise return res try: res = divide(10, 0) print(res) except Exception as err: print(type(err)) print(err.args) print(err)
The above code will result in the following output:
Big NONO
(‘division by zero’,)
division by zero
In the next post we’ll see how to run code even after an exception is thrown using the finally block.
Read all Python-related posts on this blog here.
Pingback: Python language basics 61: raising exceptions | Dinesh Ram Kali. | https://dotnetcodr.com/2015/11/07/python-language-basics-61-raising-exceptions/ | CC-MAIN-2018-51 | refinedweb | 468 | 59.13 |
This pattern describes a factory pattern which returns an object I call a RAX which contains three lists or collections.
The lists are described below:
1) The remove list or collection.
2) The add list or collection.
3) The exist list or collection.
The factory method parameters are:
RAXFactory.create(Collection toSystem, Collection fromSystem)
.
.
you may have to overload additional methods to accomadate Lists, Hashtables, etc.
The toSystem, are items or objects the user has selected to be persisted to the system.
The fromSystem, are items or objects that were previously selected.
Objects in the collection should have an overidden equals() method to determine object equality. Note: some java objects you must overide. If you create an object you must overide the equals() method.
example:
Date date1 = ....today();
Date date2 = new Date(date1.getTime());
Date dateA = date1;
Date dateB = date2;
System.out.println( "Are they the same ? " + dateA.equals(dateB) );
Are they the same ? true.
Pattern steps:
1) Obtain the selected object collection from the user. (toSystem)
2) Query system to obtain previously selected objects from the system. (fromSystem)
3) Take the toSystem and fromSystem collections as params.
4) Fill REMOVE list with the objects that exist in the fromSystem list and not in the toSystem list.
5) Fill ADD list with the objects that exist in the toSystem list and not in the fromSystem list.
6) Fill EXIST list with the objects that exist in both toSystem and fromSystem list.
7) return RAX object to caller.
Senario or the use of the pattern.
Editing previous user preferences:
[display edit mode]
My favorite fruit
*Bananas
Apples
*Grapes
*Oranges
User selects
[new selection]
My favorite fruit
Bananas
*Apples
*Grapes
*Oranges
Remove list : (remove from system)
Bananas
Add list: (add to system)
Apples
eXist list: (update system or leave alone)
Grapes
Oranges
-------------------------------------------
other examples:
Credentials for ACL security.
User preferences
Multiple choice input interfaces.
Gui examples
List boxes. (multi select)
Check boxes.
-------------------------------------------
Anyone want code examples please let me know.
I've used this 10 years ago with C code. And I've decided to put it on The Server Side. If someone else has already thought of this pattern oh well... Just let me know. Everyone at one point has seen this pattern.
-Carl P. Dea
RAX Pattern (5 messages)
Threaded Messages (5)
- RAX Pattern (Code Example) by Carl Dea on September 21 2003 19:03 EDT
- good idea by michael yao on January 14 2004 21:07 EST
- good idea by Carl Dea on March 30 2004 10:52 EST
- I've done this with a different approach by James Watson on June 10 2004 10:56 EDT
- Re: I've done this with a different approach by Pawan Kumar on March 23 2007 03:22 EDT
RAX Pattern (Code Example)[ Go to top ]
I didn't get feedback maybe a code example would help.
- Posted by: Carl Dea
- Posted on: September 21 2003 19:03 EDT
- in response to Carl Dea
Below are just 2 classes Rax.java and RaxFactory.java:
===============================================================================
/**
* Title: RAX This object contains 3 lists.
* Description: This class describes a 3 Lists 'R'emove, 'A'dd, e'X'ists.
* @author cdea
* @version 1.0
*/
import java.util.*;
public class Rax {
private List removeList;
private List addList;
private List existList;
public Rax() {
}
public List getRemoveList() {
if (removeList == null) {
removeList = new ArrayList();
}
return removeList;
}
public List getAddList() {
if (addList == null) {
addList = new ArrayList();
}
return addList;
}
public List getExistList() {
if (existList == null) {
existList = new ArrayList();
}
return existList;
}
void setRemoveList( List list) {
removeList = list;
}
void setAddList( List list) {
addList = list;
}
void setExistList( List list) {
existList = list;
}
public String toString() {
StringBuffer sb = new StringBuffer();
// output remove list
sb.append("Remove list: \n" );
if (getRemoveList().size() == 0) {
sb.append(" No elements \n");
}
for (int i = 0; i < getRemoveList().size(); i++) {
Object obj = getRemoveList().get(i);
if (obj instanceof java.util.Date) {
java.util.Date date = (java.util.Date) obj;
sb.append(" element " + i + " " + date + " " + date.getTime() + "\n");
} else {
sb.append(" element " + i + getRemoveList().get(i) + "\n");
}
}
// output add list
sb.append("Add list: \n" );
if (getAddList().size() == 0 ){
sb.append(" No elements \n");
}
for (int i = 0; i < getAddList().size(); i++) {
Object obj = getAddList().get(i);
if (obj instanceof java.util.Date) {
java.util.Date date = (java.util.Date) obj;
sb.append(" element " + i + " " + date + " " + date.getTime() + "\n");
} else {
sb.append(" element " + i + getAddList().get(i) + "\n");
}
}
// output exist list
sb.append("eXist list: \n" );
if (getExistList().size() == 0) {
sb.append(" No elements \n");
}
for (int i = 0; i < getExistList().size(); i++) {
Object obj = getExistList().get(i);
if (obj instanceof java.util.Date) {
java.util.Date date = (java.util.Date) obj;
sb.append(" element " + i + " " + date + " " + date.getTime() + "\n");
} else {
sb.append(" element " + i + getExistList().get(i) + "\n");
}
}
return sb.toString();
}
}
===============================================================================
/**
* Title: RaxFactory
* Description: This static Class returns a RAX object.
* @author Carl Dea
* @version 1.0
*/
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.Iterator;
import java.util.List;
public class RaxFactory {
//
private RaxFactory() {
}
/**
* Factory method which gets 2 lists and returns a RAX object.
* @param fromSystemC a collection from The system.
* @param toSystemC a collection from the system.
*
*/
public static Rax create(Collection fromSystemC, Collection toSystemC) {
Rax rax = new Rax();
List fromSystem = (List) fromSystemC;
List toSystem = (List) toSystemC;
boolean found = false;
// fromSystem [nothing] toSystem[something] in list.
if ( (fromSystem == null || fromSystem.size() == 0) &&
(toSystem != null && toSystem.size() > 0) ) {
rax.setAddList(toSystem);
return rax;
}
// fromSystem [something] toSystem[nothing] in list.
if ( (fromSystem != null && fromSystem.size() > 0) &&
(toSystem == null || toSystem.size() == 0) ) {
rax.setRemoveList(fromSystem);
return rax;
}
Iterator it1 = fromSystem.iterator();
List removeList = rax.getRemoveList();
List addList = rax.getAddList();
List existList = rax.getExistList();
// check from system list
Object oneItem = null;
Object oneItem2 = null;
// Deal with the 'From System List'.
while (it1.hasNext()) {
found = false;
oneItem = it1.next();
Iterator it2 = toSystem.iterator();
while (it2.hasNext()) {
oneItem2 = it2.next();
// found in the from list and to list already exists.
if ( oneItem.equals(oneItem2) ) {
existList.add(oneItem2);
found = true;
break;
}
}
// when not in list it is to be removed.
if (!found) {
removeList.add(oneItem);
}
}
// Deal with the 'To System List'.
oneItem = null;
oneItem2 = null;
Iterator addIterator = toSystem.iterator();
Iterator existIterator = null;
while (addIterator.hasNext() ) {
found = false;
oneItem = addIterator.next();
existIterator = existList.iterator();
while (existIterator.hasNext()) {
oneItem2 = existIterator.next();
if (oneItem.equals(oneItem2)) {
found = true;
break;
}
}
// when found reset
if (!found) {
addList.add(oneItem);
}
}
// set lists
rax.setRemoveList(removeList);
rax.setAddList(addList);
rax.setExistList(existList);
return rax;
}
/**
* Factory method which gets 2 lists and returns a RAX object.
* @param fromSystem a collection from The system.
* @param toSystem a collection from the system.
*
*/
public static Rax create(Object[] fromSystem, Object[] toSystem) {
return create(Arrays.asList(fromSystem), Arrays.asList(toSystem));
}
public static void main (String[] args) {
System.out.println(" Rax testing Strings");
String[] fromSystem = {"Carl", "Test", "Test2"};
String[] toSystem = {"Test", "Test3"};
Rax rax = create(fromSystem, toSystem);
System.out.println( " Rax String - From System List : " + fromSystem );
System.out.println( " Rax String - To System List : " + toSystem );
System.out.println( " Rax String - Remove List : " + rax.getRemoveList() );
System.out.println( " Rax String - Add List : " + rax.getAddList() );
System.out.println( " Rax String - eXist List : " + rax.getExistList() );
}
}
================================================================================
This code could be optimized. Objects should always have their equals() method overwritten unless the object already is overwritten. Such as java.util.Date, String, Integer, etc.
-Carl Dea
good idea[ Go to top ]
it was good idea just for small amount data
- Posted by: michael yao
- Posted on: January 14 2004 21:07 EST
- in response to Carl Dea
good idea[ Go to top ]
- Posted by: Carl Dea
- Posted on: March 30 2004 10:52 EST
- in response to michael yao
it was good idea just for small amount dataSorry for the very slow reply.
Actually, this implementation is for small amount of data. But,
anyone can provide a more robust and scalable solution, this is just a reoccuring pattern in software development. I only provided a simple implementation to demonstrate the pattern. In distributed environments I find this useful when managing many singleton objects across different application servers. The use of JMX, JMS, etc.
I hope this becomes useful.
There is always a better way.
-Carl
I've done this with a different approach[ Go to top ]
I've implemented this functionality by implementing it inside a List.
- Posted by: James Watson
- Posted on: June 10 2004 10:56 EDT
- in response to Carl Dea
First we had Objects with bound properites. Each Object had a state: new, modified, or old.
The key was then to hold an internal List that held the deletions.
Another approach that would work with out the bound Objects would be to maintain two internal Lists. One for adds and one for deletes. The problem with this approahch is that you must have some way of differentiating between the adds that create the 'original' list and actaul adds later. One way would be to use a constructor that takes a Collection to create the 'original'. I think this would only work well with immutable Objects.
Re: I've done this with a different approach[ Go to top ]
There are two issues that we will face in this: 1. Implementation of equals method. Every such comparing object should implement the equals method properly, else the definition of exists is lost. And thereforth, every operation goes haywire. 2. Iteration through a list is heavy. What is better is to use a map for retrieval of objects. Most of the business objects/Domain Model objects have a property/collection of properties that uniqely identify it. A suitable combination can be used as a key for this map. Another thing with regards to RAX is that it looks at iteration of collection everytime comparison is needed. Possibly, there can be a caching mechanism in the object itself for the caching/saving of the map, which will lead to lesser number of iterations.
- Posted by: Pawan Kumar
- Posted on: March 23 2007 03:22 EDT
- in response to James Watson | http://www.theserverside.com/discussions/thread.tss?thread_id=21367 | CC-MAIN-2016-30 | refinedweb | 1,652 | 61.93 |
As some of our readers might remember, interoperability on Rakudo JVM has been described as “basic” by jnthn++ last year. Which is to say, calling methods on Java objects inside Rakudo on the JVM was possible, but it wasn’t always pretty or convenient. As a reminder:
use java::util::zip::CRC32:from<java>; my $crc = CRC32.new(); for 'Hello, Java'.encode('UTF-8').list { $crc.'method/update/(I)V'($_); }
In this post I want to describe my journey towards figuring out why the method call looks like that and what we can do to improve it.
What’s wrong with that method call?
The reason behind the unusual method call is that there’s no multi-method dispatch (MMD) for overloaded methods on Java objects from the Perl 6 side. Sure enough, checking the javadoc for
java.util.zip.CRC32 reveals that
update() is overloaded with three candidates. Readers familiar with the JVM might notice that the method we call on
$crc is a method descriptor, defined in the JVM specification as
A method descriptor represents the parameters that the method takes and the value that it returns. --
In our case this means “a method called
update which takes a primitive
int and returns
void“[1]. Clearly Rakudo should figure out on its own which candidate we want, considering Rakudo has MMD on Perl 6 objects and Java has compile-time dispatch on it’s own objects as well. Let’s see what it takes!
The nitty-gritty, if only some of it
Let’s start at the top of the code example. We’re starting with a
use statement which by and large does the same in Perl 6 as it does in Perl 5, except we are supplying a longname consisting of the module name and the colonpair
:from<java>. The colonpair with key
from tells the ModuleLoader to not use the default Perl 6 module loader, but a different one, in our case the JavaModuleLoader.
In JavaModuleLoader::load_module we’re starting to tread towards vm-level code. After making sure we have an interop loader, we call out to Java code with
typeForNameFromJar() or
typeForName() respectively. This is where we’re leaving NQP code and entering Java code on our trail. Next stop:
org.perl6.nqp.runtime.BootJavaInterop
typeForName() and
typeForNameFromJar() both do some amount of path-munging to find the .class or .jar file, build a ClassLoader with the path where they found those files and pass the loaded class to
getSTableForClass. A STable, or shared table, represents a pairing of a
HOW and a
REPR, that is, a pairing between a metaobject and the vm-level representation of an object of that type. Creation of a STable happens lazily, via a
ClassValue, where the InteropInfo remembers the Java class it represents as well as the computed interop and STable. The important thing we’re looking for is set inside
computeInterop, where the documentation rightfully states the gory details begin. The details in question concern themselves with bytecode generation via the framework
org.objectweb.asm, although most of the aforementioned details are not particularly important at this stage. What is important though is the following bit:
HashMap<String, SixModelObject> names = new HashMap< >(); // ... for (int i = 0; i < adaptor.descriptors.size(); i++) { String desc = adaptor.descriptors.get(i); SixModelObject cr = adaptorUnit.lookupCodeRef(i); int s1 = desc.indexOf('/'); int s2 = desc.indexOf('/', s1+1); // create shortname String shorten = desc.substring(s1+1, s2); // XXX throw shortname => coderef away if shortname is overloaded names.put(shorten, names.containsKey(shorten) ? null : cr); // but keep the descriptor names.put(desc, cr); } // ... freshType.st.MethodCache = names;
The
adaptor object is a dataholder for a multitude of things we need while generating the adaptor, for example the name of the adaptor class or a list of the constants it has to contain, or even the CompilationUnit we generated in bytecode. The
adaptorUnit is a shorthand for the mentioned CompilationUnit. What happens here is that we construct the MethodCache (which is a HashMap) by extracting the shortname of the method out of the descriptor and adding the shortname as key with the CodeRef as value, as long as we only have one candidate. To be sure we aren’t forgetting anything, we also add the descriptor as key to the MethodCache with the same CodeRef. Thus we have figured out why the method call in the original example has to look the way it does: we don’t even know the method by its shortname.
Great. How do we fix it?
Dynamically. Which is a bit complicated on the JVM, because Java is statically typed. Luckily JSR 292 [2] has been approved some time ago which means we have an instruction available that “supports efficient and flexible execution of method invocations in the absence of static type information”. The basic mechanics of this are as follows:
- While generating bytecode for the runtime of our language we insert an
invokedynamicinstruction in a place where we want to be able to decide at runtime which method we want to invoke.
- This instruction references a slightly special method (usually called a bootstrap method) with a specific signature. Most importantly this method has to return a
java.lang.invoke.CallSite.
- When the
invokedynamicinstruction is encountered while executing the bytecode, the bootstrap method is called.
- Afterwards the resulting CallSite is installed in place of the
invokedynamicinstruction, and on repeated calls the method installed with the CallSite will be invoked instead of the bootstrap method.
To be able to catch methods that we want to treat as
multi on the Perl 6 side, we have to change how the adaptors are generated. Recall that we currently only know which methods are overloaded after we generated the adaptor, thus we’re too late to insert an
invokedynamic as adaptor method. So we override
BootJavaInterop.createAdaptor, and instead of walking through all methods and simply creating an adaptor method directly, we additionally memorize which methods would end up having the same shortname and generate an
invokedynamic instruction for those as well.
There’s one more problem though. The fact that we have a shortname that should dispatch to different methods depending on the arguments means that we can’t take full advantage of installing a
CallSite. This is because any given
CallSite always dispatches to exactly one method, and method signatures in Java are statically typed. Luckily we can instead resolve to a
CallSite which installs a fallback method, which does the actual resolution of methods. [3]
To summarize this briefly: Via
invokedynamic we install a
CallSite that dispatches to a fallback method which converts the Perl 6 arguments to Java objects and then looks among all methods with the same shortname for one that fits the argument types. I won’t paste the
org.objectweb.asm instructions for generating byte code here, but the fallback method looks approximately as follow
Object fallback(Object intc, Object incf, Object incsd, Object... args) throws Throwable { // some bookkeeping omitted Object[] argVals = new Object[args.length]; for(int i = 0; i < args.length; ++i) { // unboxing of Perl 6 types to Java objects happens here } int handlePos = -1; OUTER: for( int i = 0; i < hlist.length; ++i ) { Type[] argTypes = Type.getArgumentTypes(((MethodHandle)hlist[i]).type().toMethodDescriptorString()); if(argTypes.length != argVals.length) { // Different amount of parameters never matches continue OUTER; } INNER: for( int j = 1; j < argTypes.length; ++j ) { if( !argTypes[j].equals(Type.getType(argVals[j].getClass())) ) { switch (argTypes[j].getSort()) { // comparison of types of the unboxed objects with // the types of the method parameters happens here } // if we didn't match the current signature type we can skip this method continue OUTER; } } handlePos = i; break; } if( handlePos == -1 ) { // die here, we didn't find a matching method } // create a MethodHandle with a boxed return type MethodType objRetType = ((MethodHandle)hlist[handlePos]).type().changeReturnType(Object.class); // and convince our adaptor method to return that type instead MethodHandle resHandle = ((MethodHandle) hlist[handlePos]).asType(objRetType); MethodHandle rfh; try { // here's where we look for the method to box the return values } catch (NoSuchMethodException|IllegalAccessException nsme) { throw ExceptionHandling.dieInternal(tc, "Couldn't find the method for filtering return values from Java."); } MethodHandle rethandle = MethodHandles.filterReturnValue(resHandle, (MethodHandle) rfh); return ((MethodHandle)rethandle).invokeWithArguments(argVals); }
The curious may check the whole file here to see the omitted parts (which includes heaps of debug output), although you’d also have to build NQP from this branch for the ability to load
.class files as well as a few fixes needed for overriding some of the methods of classes contained in
BootJavaInterop if you wanted to compile it.
The result of these changes can be demonstrated with the following classfile
Bar.java (which has to be compiled with
javac Bar.java)
public class Bar { public String whatsMyDesc(int in, String str) { String out = "called 'method/answer/(ILjava/lang/String;)Ljava/lang/String;"; out += "\nparameters are " + in + " and " + str; return out; } public String whatsMyDesc(String str, int in) { String out = "called 'method/answer/(Ljava/lang/String;I)Ljava/lang/String"; out += "\nparameters are " + str + " and " + in; return out; } }
and this oneliner:
$ perl6-j -e'use Bar:from<java>; my $b = Bar.new; say $b.whatsMyDesc(1, "hi"); say $b.whatsMyDesc("bye", 2)' called 'method/answer/(ILjava/lang/String;)Ljava/lang/String; parameters are 1 and hi called 'method/answer/(Ljava/lang/String;I)Ljava/lang/String; parameters are bye and 2
Closing thoughts
While working on this I discovered a few old-looking bugs with marshalling reference type return values. This means that the current state can successfully dispatch among shortname-methods, but only value types and
java.lang.String can be returned without causing problems, either while marshalling to Perl 6 objects or when calling Perl 6 methods on them. Additionally, there’s a few cases where we can’t sensibly decide which candidate to dispatch to, e.g. when two methods only differ in the byte size of a primitive type. For example one of
public String foo(int in) { // ... } public String foo(short in) { // ... }
is called with a Perl 6 type that does not supply byte size as argument, let’s say
Int. This is currently resolved by silently choosing the first (that is, declared first) candidate and not mentioning anything about this, but should eventually warn about ambiguity and give the method descriptors for all matching candidates, to facilitate an informed choice by the user. Another, probably the biggest problem that’s not quite resolved is invocation of overloaded constructors. Constructors in Java are a bit more than just static methods and handling of them doesn’t quite work properly yet, although it’s next on my list.
These shortcoming obviously need to be fixed, which means there’s still work to be done, but the thing I set out to do – improving how method calls to Java objects look on the surface in Perl 6 code – is definitely improved.
Addendum: As of today (2015-01-04) the works described in this advent post have been extended by a working mechanism for shortname constructors, the marshalling issues for reference types have been solved and the resulting code has been merged into rakudo/nom. Note that accessors for Java-side fields are not yet implemented on the Perl 6 side via shortname, so you’ll have to use quoted methods of the form “field/get_$fieldName/$descriptor”() and “field/set_$fieldName/$descriptor”($newValue) respectively, where $fieldName is the name of the field in the Java class and $descriptor the corresponding field descriptor. That’s next on my list, though.
4 thoughts on “Day 12 – Towards cleaner JVM-Interoperability”
The formating errors in this post are all my fault. I went to fix a simple typo and somehow broke WordPress’s support of escaped symbols in all the code blocks. I have no idea how to get it back, even restoring Peschwa’s version retains the problem I introduced.
I’ve corrected all formatting errors I could find, thanks for fixing the typo.
{{String foo(int in)}} Vs. {{String foo(short in)}} … hmm I think the bigger one should win out…
Reminds me of the c++11 faq I was reading just the other day.
They added yet more syntax to avoid problem of narrowing silently.
I agree that the current way of choosing a candidate by “whichever came first” is not much of sensible resolution method.
Matching the widest type by default is likely the most sensible thing to do. As far as I know, coercion to native types is not yet implemented, but would probably be the best solution for choosing a specific method by its type. This likely still means warning whenever the type is automatically narrowed to fit the Java method. | https://perl6advent.wordpress.com/2014/12/12/day-12-towards-cleaner-jvm-interoperability/ | CC-MAIN-2017-22 | refinedweb | 2,123 | 53.1 |
Get on the Good Foot with MicroPython on the ESP32
I’m going to show you how to t̶u̶r̶n̶ ̶o̶n̶ ̶y̶o̶u̶r̶ ̶f̶u̶n̶k̶ ̶m̶o̶t̶o̶r get started with MicroPython on an Espressif ESP32 development board. In this first part of this tutorial, I’ll show you how to:
- Get up & running with MicroPython on the ESP32
- Connect to WiFi
- Upload scripts to the board
- Read the ambient temperature (everyone loves that, right?)
In the forthcoming second part of this tutorial, I’ll show you publish the data you’ve collected with MQTT.
This guide expects you to possess:
- …familiarity with the command-line
- …basic experience interfacing with development boards (like Arduino)
- …a basic understanding of programming in Python
If I’ve glossed over something I shouldn’t have, please let me know!
Before we begin, you will need some stuff.
Bill of Stuff
You need Stuff in the following categories.
Hardware
- One (1) ESP32 development board such as the SparkFun ESP32 Thing (any kind will do; they are all roughly the same)
- One (1) DS18B20 digital thermometer (datasheet) in its TO-92 package
- One (1) 4.7kꭥ resistor
- Four (4) jumper wires
- One (1) 400-point or larger breadboard
- One (1) USB Micro-B cable
If you need to solder header pins on your dev board: do so.
If you have a DS18B20 “breakout board”: these typically have the resistor built-in, so you won’t need it. You will need to figure out which pin is which, however.
Software
You will need to download and install some software. Some of these things you may have installed already. Other things may need to be upgraded. This guide assumes you ain’t got jack squat.
I apologize that I don’t have much information for Windows users! However, I assure you that none of this is impossible.
VCP Driver
If you’re running macOS or Windows, you may need to download and install a Virtual COM Port (VCP) driver, if you haven’t done so already. Typically, the USB-to-serial chip on these boards is a CP210x or FT232RL; check the datasheet for your specific board or just squint at the IC near the USB port.
Newer Linux kernels have support for these chips baked-in, so driver installation is unnecessary.
Here’s an example of a CP2104 on an ESP32 dev board of mine:
To assert the driver is working, plug your dev board into your computer. If you’re on Linux, check for
/dev/ttyUSB0:
$ ls -l /dev/ttyUSB0
crw-rw---- 1 root dialout 188, 0 Dec 19 17:04 /dev/ttyUSB0
Or
/dev/tty.SLAB_USBtoUART on macOS:
$ ls -l /dev/tty.SLAB_USBtoUART
crw-rw-rw- 1 root wheel 21, 20 Dec 19 17:10 /dev/tty.SLAB_USBtoUART
Serial Terminal
A free, cross-platform, GUI terminal is CoolTerm. Linux & macOS users can get away with using
screen on the command line. More purpose-built solutions include
miniterm, which ships with Python 3, and can be launched via
python3 -m serial.tools.miniterm, and
minicom.
Python, Etc.
You will also need:
- Python v3.6.x
- For extra libraries, a clone or archive of micropython/micropython-lib (
git clone)
How you install these will vary per your installation of Python:
- To flash the board, esptool (version 2.2 or newer)
- To manage files on the board, adafruit-ampy
You could try
pip3 install esptool adafruit-ampy. This worked for me on macOS with Homebrew; YMMV. You might need to preface that with
sudo if not using Homebrew.
MicroPython Firmware
Finally, you’ll need to download the latest MicroPython firmware for ESP32.
Now that our tools are at hand, we can begin by flashing the ESP32 board with MicroPython.
Flashing MicroPython & First Steps
Unless MicroPython is already installed on your ESP32, you will want to start by connecting it to your computer via USB, and erasing its flash:
In the below examples, replace
/dev/tty.SLAB_USBtoUARTwith the appropriate device or COM port for your system.
$ esptool.py --chip esp32 -p /dev/tty.SLAB_USBtoUART erase_flash
esptool.py v2.2
Connecting........___
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
Erasing flash (this may take a while)...
Chip erase completed successfully in 4.6s
Hard resetting...
Now, we can flash it with the firmware we downloaded earlier:
$ esptool.py --chip esp32 -p /dev/tty.SLAB_USBtoUART write_flash \
-z 0x1000 ~/Downloads/esp32-20171219-v1.9.2-445-g84035f0f.bin
esptool.py v2.2
Connecting........_
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Compressed 936288 bytes to 587495...
Wrote 936288 bytes (587495 compressed) at 0x00001000 in 51.7 seconds (effective 144.8 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting...
If you’re feeling dangerous, you can increase the baud rate when flashing by using the
--baudoption.
If that worked, you should be able to enter a MicroPython REPL by opening up the port:
# 115200 is the baud rate at which the REPL communicates
$ screen /dev/tty.SLAB_USBtoUART 115200
>>>
Congratulations,
>>> is your REPL prompt. This works similarly to a normal Python REPL (e.g. just running
python3 with no arguments). Try the
help() function:')
If you’ve never seen this before on an MCU: I know, crazy, right?
You can type in the commands from “Basic WiFi configuration” to connect. You will see a good deal of debugging information from the ESP32 (this can be suppressed, as you’ll see):
>>> import network
>>> sta_if = network.WLAN(network.STA_IF)
I (323563) wifi: wifi firmware version: 111e74d
I (323563) wifi: config NVS flash: enabled
I (323563) wifi: config nano formating: disabled
I (323563) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (323573) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (323593) wifi: Init dynamic tx buffer num: 32
I (323593) wifi: Init data frame dynamic rx buffer num: 64
I (323593) wifi: Init management frame dynamic rx buffer num: 64
I (323603) wifi: wifi driver task: 3ffe1584, prio:23, stack:4096
I (323603) wifi: Init static rx buffer num: 10
I (323613) wifi: Init dynamic rx buffer num: 0
I (323613) wifi: Init rx ampdu len mblock:7
I (323623) wifi: Init lldesc rx ampdu entry mblock:4
I (323623) wifi: wifi power manager task: 0x3ffe84b0 prio: 21 stack: 2560
W (323633) phy_init: failed to load RF calibration data (0x1102), falling back to full calibration
I (323793) phy: phy_version: 362.0, 61e8d92, Sep 8 2017, 18:48:11, 0, 2
I (323803) wifi: mode : null
>>> sta_if.active(True)
I (328553) wifi: mode : sta (30:ae:a4:27:d4:88)
I (328553) wifi: STA_START
True
>>> sta_if.scan()
I (389423) network: event 1
[(b'SON OF ZOLTAR', b"`\xe3'\xcf\xf4\xf5", 1, -57, 4, False), (b'CenturyLink6105', b'`1\x97%\xd9t', 1, -96, 4, False)]
>>> sta_if.connect('SON OF ZOLTAR', '<REDACTED>')
>>> I (689573) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:1
I (690133) wifi: state: init -> auth (b0)
I (690133) wifi: state: auth -> assoc (0)
I (690143) wifi: state: assoc -> run (10)
I (690163) wifi: connected with SON OF ZOLTAR, channel 1
I (690173) network: event 4
I (691723) event: sta ip: 10.0.0.26, mask: 255.255.255.0, gw: 10.0.0.1
I (691723) network: GOT_IP
I (693143) wifi: pm start, type:0
>>> sta_if.isconnected()
True
Cool, huh?
Now that we know we can connect to WiFi, let’s have the board connect every time it powers up.
Creating a MicroPython Module
To perform tasks upon boot, MicroPython wants you to put code in a file named
boot.py, which is a MicroPython module.
Let’s create
boot.py with code modified from the MicroPython ESP8266 docs, replacing where indicated:())
We can also create a function to disable debugging output. Append to
boot.py:
def no_debug():
import esp
# this can be run from the REPL as well
esp.osdebug(None)
These functions will be defined at boot, but not called automatically. Let’s test them before making them automatically execute.
To do this, we can upload
boot.py. You’ll need to close the connection to the serial port. If you’re using
screen, type
Ctrl-A Ctrl-\, then
y to confirm; otherwise disconnect or just quit your terminal program.
Uploading a MicroPython Module
Though there are other ways to do this, I’ve found the most straightforward for the ESP32 is to use ampy, a general-purpose tool by Adafruit. Here’s what it can do:
$ ampy --help
Usage: ampy [OPTIONS] COMMAND [ARGS]... environemnt
variable. [required]
-b, --baud BAUD Baud rate for the serial connection (default
115200). Can optionally specify with AMPY_BAUD.
MicroPython stores files (scripts or anything else that fits) in a very basic filesystem. By default, an empty
boot.py should exist already. To list the files on your board, execute:
$ ampy -p /dev/tty.SLAB_USBtoUART ls
boot.py
Using the
get command will echo a file’s contents to your shell (which could be piped to a file, if you wish):
$ ampy -p /dev/tty.SLAB_USBtoUART get boot.py
# This file is executed on every boot (including wake-boot from deepsleep)
We can overwrite it with our own
boot.py:
$ ampy -p /dev/tty.SLAB_USBtoUART put boot.py
And retrieve it to see that it overwrote the default
boot.py:
$ ampy -p /dev/tty.SLAB_USBtoUART get boot.py())
def no_debug():
import esp
# this can be run from the REPL as well
esp.osdebug(None)
Success! This is the gist of uploading files with
ampy. You can also upload entire folders, as we’ll see later.
From here, we can open our REPL again, and run our code. No need to restart the board!
Running a MicroPython Module
In following examples, I will eliminate the command prompt (
>>>) from code run in a REPL, for ease of copying & pasting.
Re-connect to the REPL.
$ screen /dev/tty.SLAB_USBtoUART 115200
First, we’ll disconnect from WiFi:
import network
sta_if = network.WLAN(network.STA_IF)
sta_if.disconnect()
Debug output follows:
I (3299583) wifi: state: run -> init (0)
I (3299583) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:1
I (3299583) wifi: pm stop, total sleep time: 0/-1688526567
I (3299583) wifi: STA_DISCONNECTED, reason:8
Then, we can
import the
boot module. This will make our
connect and
no_debugfunctions available.
import boot
connect()
Output:
connecting to network...
I (87841) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:1
I (88401) wifi: state: init -> auth (b0)
I (88401) wifi: state: auth -> assoc (0)
I (88411) wifi: state: assoc -> run (10)
I (88441) wifi: connected with SON OF ZOLTAR, channel 1
I (88441) network: event 4
I (90081) event: sta ip: 10.0.0.26, mask: 255.255.255.0, gw: 10.0.0.1
I (90081) network: GOT_IP
network config: ('10.0.0.26', '255.255.255.0', '10.0.0.1', '10.0.0.1')
I (91411) wifi: pm start, type:0
Super. Let’s silence the noise, and try again:
no_debug()
sta_if.disconnect()
connect()
Output:
connecting to network...
network config: ('10.0.0.26', '255.255.255.0', '10.0.0.1', '10.0.0.1')
LGTM.
The IP addresses above depend upon your local network configuration, and will likely be different.
Disconnect from the port (if using
screen:
Ctrl-A Ctrl-\,
y) and append these lines to
boot.py:
no_debug()
connect()
Upload it again via
ampy put boot.py, which will overwrite the existing
boot.py. Hard reset (“push the button”) or otherwise power -cycle the board. Reconnect to the REPL and execute
connect() to assert connectivity:
connect()
Output:
network config: ('10.0.0.26', '255.255.255.0', '10.0.0.1', '10.0.0.1')
You’ll notice “connecting to network…” was not printed to the console; if already connected, the
connect() function prints the configuration and returns. If you’ve gotten this far, then your board is successfully connecting to Wifi at boot. Good job!
We now have two more items to check off our list, unless you forgot what we were trying to do:
- We need to read the ambient temperature on an interval.
- We need to publish this information to an MQTT broker.
Next, we’ll knock out that temperature reading.
Temperature Readings in MicroPython
As we write our code, we can use the REPL to experiment.
I’m using the example found here. You’ll need to import three (3) modules,
machine,
onewire and
ds18x20 (note the
x):
import machine, onewire, ds18x20
I’ve connected my sensor to pin 12 on my ESP32. Your breadboard should look something like this:
To read temperature, we will create a Matryoshka-doll-like object by passing a
Pininstance into a
OneWire constructor (read about 1-Wire) and finally into a
DS18X20constructor:
pin = machine.Pin(12)
wire = onewire.OneWire(pin)
ds = ds18x20.DS18X20(wire)
Note that if the output of the following command is an empty list (
[]), the sensor couldn't be found. Check your wiring!
Now, we can ask
ds to scan for connected devices, and return their addresses:
ds.scan()
Output:
[bytearray(b'(\xee3\x0c"\x15\x004')]
ds.scan() returns a
list of device addresses in
bytearray format. Yours may look slightly different. Since we only have one, we can save its address to a variable. To read temperature data, we tell the 1-Wire bus to reset via
ds.convert_temp(), take a short pause of 750ms (in case you're pasting this):
import time
addr = ds.scan().pop()
ds.convert_temp()
time.sleep_ms(750)
temp = ds.read_temp(addr)
temp
Output:
19.875
This reading is in Celsius. If you’re like me, you don’t speak Celsius, so maybe you want to convert it to Fahrenheit:
(temp * 1.8) + 32
Output:
67.775
…which is right around what I expected!
Let’s take what we’ve done and create a new file,
temperature.py:
import time
from machine import Pin
from onewire import OneWire
from ds18x20 import DS18X20
class TemperatureSensor:
"""
Represents a Temperature sensor
"""
def __init__(self, pin):
"""
Finds address of single DS18B20 on bus specified by `pin`
:param pin: 1-Wire bus pin
:type pin: int
"""
self.ds = DS18X20(OneWire(Pin(pin)))
addrs = self.ds.scan()
if not addrs:
raise Exception('no DS18B20 found at bus on pin %d' % pin)
# save what should be the only address found
self.addr = addrs.pop()
def read_temp(self, fahrenheit=True):
"""
Reads temperature from a single DS18X20
:param fahrenheit: Whether or not to return value in Fahrenheit
:type fahrenheit: bool
:return: Temperature
:rtype: float
"""
self.ds.convert_temp()
time.sleep_ms(750)
temp = self.ds.read_temp(self.addr)
if fahrenheit:
return self.c_to_f(temp)
return temp
@staticmethod
def c_to_f(c):
"""
Converts Celsius to Fahrenheit
:param c: Temperature in Celsius
:type c: float
:return: Temperature in Fahrenheit
:rtype: float
"""
return (c * 1.8) + 32
Disconnect from the REPL. Upload
temperature.py via
ampy:
$ ampy -p /dev/tty.SLAB_USBtoUART put temperature.py
Then we can open our REPL once again, and try it:
from temperature import TemperatureSensor
t = TemperatureSensor(12)
t.read_temp() # use t.read_temp(False) to return Celsius
Seems to have warmed up a bit. Output:
68.7875
Good work!
Conclusion of Part One (1)
In the first part of this tutorial, we’ve learned how to:
- Flash an ESP32 dev board with MicroPython
- Use MicroPython’s REPL to experiment
-”.
This article originally appeared January 8, 2018 on boneskull.com. | https://hackernoon.com/get-on-the-good-foot-with-micropython-on-the-esp32-decdd32c4720?gi=c195b92f55bd | CC-MAIN-2018-13 | refinedweb | 2,597 | 65.93 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.