text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
<?xml:namespace prefix = w ns = “urn:schemas-microsoft-com:office:word” />
The press release indicated that SQL Express is already available, but those of you who have followed the link have found that it is not actually there. Nothing to worry about, has a dependency on the .NET Framework 3.5 SP1 and we need to coordinate the release to the web for both of these which will take a few more days. I’ll blog about it when we release it and you can watch the Express web site for updates; when we release it, it will be available from that site.
There will be three editions of SQL Express, each one adding to the functionality of the previous one; you simply pick the edition that includes the set of functionality you need and install it. The information I posted about SQL Express RC0 is still valid, but I’m reproducing the feature comparison table here to include the third edition:
* from the SQL Server 2008 Feature Pack.
As you can see, SQL Express with Tools includes the core database engine and the basic version of Management Studio; this is the ideal edition for people who want the right tools for developing relational database applications. The advanced features such as Integrated Full Text Search, Reporting Services and BIDS are available in SQL Express Advanced. SQL Express with Tools will be delivered in the same architectures and with the same prerequisites as SQL Express Advanced, which are as follows:
SQL Express with Tools Prerequisites
.NET Framework 3.5 SP1
Windows Installer 4.5
Windows PowerShell 1.0
As with previous releases (SQL Server 2005) we will be releasing SQL Express 2008 in stages. SQL Express core will be released first, along with Visual Studio 2008 SP1, and the other two editions will follow about a month later. As I said, watch the Express web site and this blog for information about the release of these two additional editions.
– M
I promised an update on SQL Express availability in my last post about the release of SQL Server 2008
how much more messier can MS be? gee….. just release one Edition and let us pick the components.
May i get the SQL Agent Server in Sql Server 2008 Express Edition? | https://blogs.msdn.microsoft.com/sqlexpress/2008/08/07/whats-up-with-sql-server-2008-express-editions/ | CC-MAIN-2016-50 | refinedweb | 379 | 68.81 |
Vue
After reading this guide, you’ll know:
- What Vue is, and why you would consider using it with Meteor
- How to install Vue in your Meteor application, and how to use it correctly
- How to integrate Vue with Meteor’s realtime data layer
- Meteor’s and Vue’s Style Guides and Structure
- How to use Vue’s SSR (Server-side Rendering) with Meteor
Vue already has an excellent guide with many advanced topics already covered. Some of them are SSR (Server-side Rendering), Routing, Code Structure and Style Guide and State Management with Vuex.
This documentation is purely focused on integrating it with Meteor.
Introduction has an excellent guide and documentation. This guide is about integrating it with Meteor.
Why use Vue with MeteorVue is a frontend library, like React, Blaze and Angular. Some really nice frameworks are built around Vue. Nuxt.js for example, aims to create a framework flexible enough that you can use it as a main project base or in addition to your current project based on Node.js. Though Nuxt.js is full-stack and very pluggable. It lacks the an API to communicate data from and to the server. Also unlike Meteor, Nuxt still relies on a configuration file. Meteor’s build tool and Pub/Sub API (or Apollo) provides Vue with this API that you would normally have to integrate yourself, greatly reducing the amount of boilerplate code you have to write.
Integrating Vue With MeteorTo start a new project:
To install Vue in Meteor 1.7, you should add it as an npm dependency:To install Vue in Meteor 1.7, you should add it as an npm dependency:
meteor create .
To support Vue’s Single File Components with the .vue file extensions, install the following Meteor package created by Vue Core developer Akryum (Guillaume Chau).To support Vue’s Single File Components with the .vue file extensions, install the following Meteor package created by Vue Core developer Akryum (Guillaume Chau).
meteor npm install --save vue
You will end up with at least 3 files: 1.You will end up with at least 3 files: 1.
meteor add akryum:vue-component
/client/App.vue(The root component of your app) 2.
/client/main.js(Initializing the Vue app in Meteor startup) 3.
/client/main.html(containing the body with the #app div) We need a base HTML document that has the
appid. If you created a new project from
meteor create ., put this in your
/client/main.html.
You can now start writing .vue files in your app with the following format. If you created a new project fromYou can now start writing .vue files in your app with the following format. If you created a new project from
<body> <div id="app"></div> </body>
meteor create ., put this in your
/client/App.vue.
You can render the Vue component hierarchy to the DOM by using the below snippet in you client startup file. If you created a new project fromYou can render the Vue component hierarchy to the DOM by using the below snippet in you client startup file. If you created a new project from
<template> <div> <p>This is a Vue component and below is the current date:<br />{{date}}</p> </div> </template> <script> export default { data() { return { date: new Date(), }; } } </script> <style scoped> p { font-size: 2em; text-align: center; } </style>
meteor create ., put this in your
/client/main.js.
Run your new Vue+Meteor app with this command:Run your new Vue+Meteor app with this command:
import Vue from 'vue'; import App from './App.vue'; import './main.html'; Meteor.startup(() => { new Vue({ el: '#app', ...App, }); });
NO_HMR=1 meteor
Using Vue with Meteor’s realtime data layer
One of the biggest advantages of Meteor is definitely it’s realtime data layer: reactivity, methods, publications, and subscriptions.
To integrate it with Vue, first install the
vue-meteor-tracker package from NPM:
meteor npm install --save vue-meteor-tracker
Next, the package needs to be plugged into the Vue object—just before Vue initialization in
/client/main.js:
import Vue from 'vue'; import VueMeteorTracker from 'vue-meteor-tracker'; // here! Vue.use(VueMeteorTracker); // here! import App from './App.vue'; import './main.html'; Meteor.startup(() => { new Vue({ el: '#app', ...App, }); });
Methods, Publications, and Subscriptions in Vue components
Currently our Vue application shows the time it was loaded.
Let’s add a button to update the time in the app. To flex Meteor’s plumbing, we’ll create:
- A Meteor Collection called
Timewith a
currentTimedoc.
- A Meteor Publication called
Timethat sends all documents
- A Meteor Method called
UpdateTimeto update the
currentTimedoc.
- A Meteor Subscription to
Time
- Vue/Meteor Reactivity to update the Vue component
The first 3 steps are basic Meteor:
- In
/imports/collections/Time.js
Time = new Mongo.Collection("time");
- In
/imports/publications/Time.js
Meteor.publish('Time', function () { return Time.find({}); });
- In
/imports/methods/UpdateTime.js
Meteor.methods({ UpdateTime() { Time.upsert('currentTime', { $set: { time: new Date() } }); }, });
Now, let’s add these to our server. First remove autopublish so our publications matter:
meteor remove autopublish
For fun, let’s make a
settings.json file:
{ "public": { "hello": "world" } }
Now, let’s update our
/server/main.js to use our new stuff:
import { Meteor } from 'meteor/meteor'; import '/imports/collections/Time'; import '/imports/publications/Time'; import '/imports/methods/UpdateTime'; Meteor.startup(() => { // Update the current time Meteor.call('UpdateTime'); // Add a new doc on each start. Time.insert({ time: new Date() }); // Print the current time from the database console.log(`The time is now ${Time.findOne().time}`); });
Start your Meteor app, your should see a message pulling data from Mongo. We haven’t made any changes to the client, so you should just see some startup messages.
NO_HMR=1 meteor
- and 5. Great, let’s integrate this with Vue using [Vue Meteor Tracker](].
<template> <div> <div v-Loading...</div> <div v-else> <p>Hello {{hello}}, <br>The time is now: {{currentTime}} </p> <button @Update Time</button> <p>Startup times:</p> <ul> <li v- {{t.time}} - {{t._id}} </li> </ul> <p>Meteor settings</p> <pre><code> {{settings}} </code></pre> </div> </div> </template> <script> import '/imports/collections/Time'; export default { data() { console.log('Sending non-Meteor data to Vue component'); return { hello: 'World', settings: Meteor.settings.public, // not Meteor reactive } }, // Vue Methods methods: { updateTime() { console.log('Calling Meteor Method UpdateTime'); Meteor.call('UpdateTime'); // not Meteor reactive } }, // Meteor reactivity meteor: { // Subscriptions - Errors not reported spelling and capitalization. $subscribe: { 'Time': [] }, // A helper function to get the current time currentTime () { console.log('Calculating currentTime'); var t = Time.findOne('currentTime') || {}; return t.time; }, // A Minimongo cursor on the Time collection is added to the Vue instance TimeCursor () { // Here you can use Meteor reactive sources like cursors or reactive vars // as you would in a Blaze template helper return Time.find({}, { sort: {time: -1} }) }, } } </script> <style scoped> p { font-size: 2em; } </style>
Restart your server to use the
settings.json file.
NO_HMR=1 meteor --settings=settings.json
Then refresh your browser to reload the client.
You should see:
- the current time
- a button to Update the current time
- startup times for the server (added to the Time collection on startup)
- The Meteor settings from your settings file
There may be better or alternative ways to do these things. Please PR if you have improvements.
Meteor’s and Vue’s Style Guides and Structure
Like code linting and style guides are tools for making code easier and more fun to work with.
These are practical means to practical ends.
- Leverage existing tools
- Leverage existing configurations
Meteor’s style guide and Vue’s style guide can be overlapped like this:
- Configure your Editor
- Configure eslint for Meteor
- Review the Vue Style Guide
- Open up the ESLint rules as needed.
Application Structure is documented here:
- Meteor’s Application Structure is the default start.
- Vuex’s Application Structure may be interesting.
SSR and Code SplittingVue has an excellent guide on how to render your Vue application on the server. It includes code splitting, async data fetching and many other practices that are used in most apps that require this.
Basic ExampleMaking Vue SSR to work with Meteor is not more complex then for example with Express. However instead of defining a wildcard route, Meteor uses its own server-render package that exposes an
onPageLoadfunction. Every time a call is made to the server side, this function is triggered. This is where we should put our code like how its described on the VueJS SSR Guide. To add the packages, run:
then connect to Vue inthen connect to Vue in
meteor add server-render meteor npm install --save vue-server-renderer
/server/main.js:
Luckily Akryum has us covered and provided us with a Meteor package for this: akryum:vue-ssr allows us to write our server-side code like below:Luckily Akryum has us covered and provided us with a Meteor package for this: akryum:vue-ssr allows us to write our server-side code like below:
import { Meteor } from 'meteor/meteor'; import Vue from 'vue'; import { onPageLoad } from 'meteor/server-render'; import { createRenderer } from 'vue-server-renderer'; const renderer = createRenderer(); onPageLoad(sink => { console.log('onPageLoad'); const url = sink.request.url.path; const app = new Vue({ data: { url }, template: `<div>The visited URL is: {{ url }}</div>` }); renderer.renderToString(app, (err, html) => { if (err) { res.status(500).end('Internal Server Error'); return } console.log('html', html); sink.renderIntoElementById('app', html); }) })
import { VueSSR } from 'meteor/akryum:vue-ssr'; import createApp from './app'; VueSSR.createApp = function () { // Initialize the Vue app instance and return the app instance const { app } = createApp(); return { app }; }
Server-side RoutingSweet, but most apps have some sort of routing functionality. We can use the VueSSR context parameter for this. It simply passes the Meteor server-render request url which we need to push into our router instance:
import { VueSSR } from 'meteor/akryum:vue-ssr'; import createApp from './app'; VueSSR.createApp = function (context) { // Initialize the Vue app instance and return the app + router instance const { app, router } = createApp(); // Set router's location from the context router.push(context.url); return { app }; }
Async data - An Interesting Nuxt Feature
Nuxt has a lovely feature called asyncData.
Note: As an alternative to Meteor’s pub/sub, it may not be more useful (given you probably picked Meteor for that feature). This documentation is left here as an alternative.
AsyncData allows developers to fetch data from an external source on the server side. Below follows a description of how to implement a similar method into your Meteor application which grants you the same benefits, but with Meteor’s ‘methods’ API.
Important reminder here is the fact that Server Rendering on its own is already worth a guide - which is exactly what the guys from Vue did. Most of the code is needed in any platform except Nuxt (Vue based) and Next (React based). We simply describe the best way to do this for Meteor. To really understand what is happening read that SSR guide from Vue.
SSR follows a couple of steps that are almost always the same for any frontend library (React, Vue or Angular).
- Resolve the url with the router
- Fetch any matching components from the router
- Filter out components that have no asyncData
- Map the components into a list of promises by return the asyncData method’s result
- Resolve all promises
- Store the resulting data in the HTML for later hydration of the client bundle
- Hydrate the clientside
Its better documented in code:
VueSSR.createApp = function (context) { // Wait with sending the app to the client until the promise resolves (thanks Akryum) return new Promise((resolve, reject) => { const { app, router, store } = createApp({ ssr: true, }); // 1. Resolve the URL with the router router.push(context.url); router.onReady(async () => { // 2, Fetch any matching components from the router const matchedComponents = router.getMatchedComponents(); const route = router.currentRoute; // No matched routes if (!matchedComponents.length) { reject(new Error('not-found')); } // 3. Filter out components that have no asyncData const componentsWithAsyncData = matchedComponents.filter(component => component.asyncData); // 4. Map the components into a list of promises // by returning the asyncData method's result const asyncDataPromises = componentsWithAsyncData.map(component => ( component.asyncData({ store, route }) )); // You can have the asyncData methods resolve promises with data. // However to avoid complexity its recommended to leverage Vuex // In our case we're simply calling Vuex actions in our methods // that do the fetching and storing of the data. This makes the below // step really simple // 5. Resolve all promises. (that's it) await Promise.all(asyncDataPromises); // From this point on we can assume that all the needed data is stored // in the Vuex store. Now we simply need to grap it and push it into // the HTML as a "javascript string" // 6. Store the data in the HTML for later hydration of the client bundle const js = `window.__INITIAL_STATE__=${JSON.stringify(store.state)};`; // Resolve the promise with the same object as the simple version // Push our javascript string into the resolver. // The VueSSR package takes care of the rest resolve({ app, js, }); }); }); };
Awesome. When we load our app in the browser you should see a weird effect. The app seems to load correctly. That’s the server-side rendering doing its job well. However, after a split second the app suddenly is empty again.
That’s because when the client-side bundle takes over, it doesn’t have its data yet. It will override the HTML with an empty app! We need to hydrate the bundle with the JSON data in the HTML.
If you inspect the HTML via the source code view, you will see the HTML source of your app accompanied by the
__INITIAL_STATE="" filled with the JSON string. We need to use this to hydrate the clientside. Luckily this is fairly easy, because we have only one place that needs hydration: the Vuex store!
import { Meteor } from 'meteor/meteor'; import createApp from './app'; Meteor.startup(() => { const { store, router } = createApp({ // Same function as the server ssr: false, }); // Hydrate the Vuex store with the JSON string if (window.__INITIAL_STATE__) { store.replaceState(window.__INITIAL_STATE__); } });
Now when we load our bundle, the components should have data from the store. All fine. However there is one more thing to do. If we navigate, our newly rendered clientside components will again not have any data. This is because the
asyncData method is not yet being called on the client side. We can fix this using a mixin like below as documented in the Vue SSR Guide. }) } } })
We now have a fully functioning and server-rendered Vue app in Meteor! | https://guide.meteor.com/vue.html | CC-MAIN-2019-09 | refinedweb | 2,416 | 56.96 |
Simple Question Answering (QA) Systems That Use Text Similarity Detection in Python
How exactly are smart algorithms able to engage and communicate with us like humans? The answer lies in Question Answering systems that are built on a foundation of Machine Learning and Natural Language Processing. Let's build one here.
By Andrew Zola, Content Manager at Artmotion
Artificial Intelligence (AI) is no longer an abstract idea that conjures up images from sci-fi movies. Today, AI has evolved considerably, and it’s now able to recognize speech, make decisions, and work alongside humans to complete tasks at a larger scale.
So instead of robots that are trying to take over the planet, we think about Alexa, Siri, or a customer service chatbot. But how exactly are these smart algorithms able to engage and communicate with us like humans?
The answer lies in Question Answering (QA) systems that are built on a foundation of Machine Learning (ML) and Natural Language Processing (NLP).
What are QA Systems?
QA systems can be described as a technology that provides the right short answer to a question rather than giving a list of possible answers. In this scenario, QA systems are designed to be alert to text similarity and answer questions that are asked in natural language.
But some also derive information from images to answer questions. For example, when you’re clicking on image boxes to prove that you’re not a robot, you’re actually teaching smart algorithms about what’s in a particular image.
This is only possible because of NLP technologies like Google’s Bidirectional Encoder Representations from Transformers (BERT). Anyone who wants to build a QA system can leverage NLP and train machine learning algorithms to answer domain-specific (or a defined set) or general (open-ended) questions.
There are plenty of datasets and resources online, so you can quickly start training smart algorithms to learn and process massive quantities of human language data.
To boost efficiency and accuracy, NLP programs also use both inference and probability to guess the right answer. Over time, they have become very good at it!
For businesses, the advantage of deploying QA systems is that they are highly user-friendly. Once the enterprise QA system is built, anyone can use it. In fact, if you have engaged with Alexa or used Google Translate, you have experienced NLP at work.
In an enterprise setting, they can be used for much more than chatbots and voice assistants. For example, smart algorithms can be trained to do the following:
- Administration (find and contextualize information to automate the process of searching, modifying, and managing documents)
- Customer service (with chatbots that can engage customers as well as identify new leads by analyzing profiles, phrases, and other data)
- Marketing (by being alert to mentions about the company or brand online)
But circling back to the topic at hand, let’s take a look at how it works.
How Do You Build a Robust QA System?
To answer the question in a manner that can be technical and easily understood, I’ll show you how to build a simple QA system based on string similarity measurement, and sourced using a closed domain.
The following example is based on Ojokoh and Ayokunle’s research, Fuzzy-Based Answer Ranking in Question Answering Communities.
QA system with approximate match function is simple as:
In this scenario, we’ll use a small set of data of question-answer pairs in a CSV file. In the real world, enterprises will use highly specialized databases with hundreds of thousands of samples.
Prerequisites
To run these examples, you need Python 3, Jupyter Lab and python-Levenshtein module.
First load the data:
import pandas as pd data = pd.read_csv('qa.csv') # this function is used to get printable results def getResults(questions, fn): def getResult(q): answer, score, prediction = fn(q) return [q, prediction, answer, score] return pd.DataFrame(list(map(getResult, questions)), columns=["Q", "Prediction", "A", "Score"]) test_data = [ "What is the population of Egypt?", "What is the poulation of egypt", "How long is a leopard's tail?", "Do you know the length of leopard's tail?", "When polar bears can be invisible?", "Can I see arctic animals?", "some city in Finland" ] data
In its simplest form, QA systems can only answer questions if the questions and answers are matched perfectly.
import re def getNaiveAnswer(q): # regex helps to pass some punctuation signs row = data.loc[data['Question'].str.contains(re.sub(r"[^\w'\s)]+", "", q),case=False)] if len(row) > 0: return row["Answer"].values[0], 1, row["Question"].values[0] return "Sorry, I didn't get you.", 0, "" getResults(test_data, getNaiveAnswer)
As you can see from the above, a small grammatical mistake can quickly derail the whole process. It’s the same result if you use string pre-processing of source and query texts like punctuation symbols removal,lowercasing, etc.
So how can we improve our results?
To improve results, let’s switch things up a little and use approximate string matching. In this scenario, our system will be enabled to accept grammatical mistakes and minor differences in the text.
There are many ways to deploy approximate string matching protocols, but for our example, we will use one of the implementations of string metrics called Levenshtein distance. In this scenario, the distance between two words is the minimum number of single-character edits (insertions, deletions, or substitutions) that are needed to change one word into the other.
Let’s deploy the Levenshtein Python module on the system. It contains a set of approximate string matching functions that we can experiment with.
from Levenshtein import ratio def getApproximateAnswer.8: return answer, max_score, prediction return "Sorry, I didn't get you.", max_score, prediction getResults(test_data, getApproximateAnswer)
As you can see from the above, even minor grammatical mistakes can generate the correct answer (and a score below 1.0 is highly acceptable).
To make our QA system even better, go ahead and adjust max_score coefficient of our function to be more accommodating.
from Levenshtein import ratio def getApproximateAnswer2.3: # threshold is lowered return answer, max_score, prediction return "Sorry, I didn't get you.", max_score, prediction getResults(test_data, getApproximateAnswer2)
The results above evidence that even when different words are used, the system can respond with the correct answer. But if you took a closer look, the 5th result looks like a false positive.
This means that we have to take it to the next level and leverage advanced libraries that have been made available by the likes of Facebook and Google to overcome these challenges.
The example above is a simple demonstration of how this works. The code is quite simple and impractical to use with large volumes and iterations on a massive dataset.
The well-known BERT library, developed by Google, is better suited for enterprise tasks. AI-powered QA systems that you’ve already engaged with use far more advanced databases and engage in continuous machine learning.
What’s your experience building enterprise QA systems? What challenges did you face? How did you overcome them? Share your thoughts and experiences in the Comments section below.
The source code for this article can be found HERE.
Bio: Andrew Zola (@DrewZola) is Content Manager at Artmotion: A bank for your data. He has many passions, but the main one is writing about technology. Furthermore, learning about new things and connecting with diverse audiences is something that has always amazed and excited Andrew.
Related:
- This Microsoft Neural Network can Answer Questions About Scenic Images with Minimum Training
- Salesforce Open Sources a Framework for Open Domain Question Answering Using Wikipedia
- Why you should NOT use MS MARCO to evaluate semantic search | https://www.kdnuggets.com/2020/04/simple-question-answering-systems-text-similarity-python.html | CC-MAIN-2022-21 | refinedweb | 1,279 | 54.63 |
# Replace YOUR FULL NAME with your full name then press SHIFT-ENTER name = 'YOUR FULL NAME'
# Type name.<TAB> to see what you can do with this string name
# Replace "lower" with "title" and press CTRL-ENTER name.lower()
# Capitalize the string using a single command
# Split the string into a list of words using a single command
# Get the number of characters in the string len(name)
xs = ['fox', 'rabbit', 'raccoon'] # Get the last item in the list xs[-1]
# Get the first item in the list
# Get the number of items in the list
from os.path import expanduser from pandas import read_csv name_table_path = expanduser('~/Experiments/Datasets/names-by-year.csv') name_table = read_csv(name_table_path) # Show the first two rows the dataset name_table[:2]
# Show the first five rows of the dataset
# Show the unique years in the table name_table['year'].unique()
# Count the number of unique years in the table len(name_table['year'].unique())
# Count the number of unique names in the table
# Select rows where the name starts with Tim and the year is less than 1915 t = name_table[ name_table['name'].str.startswith('Tim') & ( name_table['year'] < 1915)] t
# Sum counts t['count'].sum()
# Count how many people were born with your name between and including the years 1960 and 1969
# Split the large table into smaller tables grouped by year for year, table in t.groupby('year'): print(table) print()
# Sum counts for each year t.groupby('year')['count'].sum()
# Count how many people shared the first three letters of your name after the year 2000
# Plot the number of names per year %matplotlib inline name_table.groupby('year').sum().plot();
# Plot the number of babies born with the name Jake by year name_table[name_table['name'] == 'Jake'].groupby('year')['count'].sum().plot();
# Plot the number of babies born with your name by year
axes = name_table[name_table.name == 'Jake'].groupby('year').sum().plot() axes.get_figure().savefig('/tmp/plot.png')
ls /tmp
In the menu above, choose File > New Notebook > Python 3. Copy and paste each of the following code blocks into the new notebook. Press SHIFT-ENTER on each code block to make sure that the code runs properly.
name_table_path = '~/Experiments/Datasets/names-by-year.csv' name = 'jake' target_folder = '/tmp'
from pandas import read_csv name_table = read_csv(name_table_path) name = name.capitalize().split()[0])
Add a comment with the word CrossCompute to the first code block. Your first code block should look like the following:
# CrossCompute name_table_path = '~/Experiments/Datasets/names-by-year.csv' name = 'jake' target_folder = '/tmp' | https://crosscompute.com/n/EtZf9gpDfPevbcPfhzSEwTeLZReVsGlg/-/tutorials/1-beginner/plot-name-frequency-by-year-walkthrough | CC-MAIN-2020-05 | refinedweb | 416 | 70.84 |
OrderedSet
OrderedSet is a native Swift ordered set. It has the behavior and features of
Array and
Set in one abstract type.
var names: OrderedSet<String> = ["Brad", "Jake", "Susan"] names += ["Janice", "Brad"] // ["Jake", "Susan", "Janice", "Brad"] names.subtractInPlace(["Jake", "Janice"]) // ["Susan", "Brad"] names.insert("Robert", atIndex: 1) // ["Susan", "Robert", "Brad"] names.contains("Susan") // true names.isSupersetOf(["Susan", "Jake"]) // false
Installation
Swift Package Manager
You can build
OrderedSet using the Swift Package Manager. Just include
OrderedSet as a package in your dependencies:
.Package(url: "", majorVersion: 1)
Be sure to import the module at the top of your .swift files:
import OrderedSet
CocoaPods
OrderedSet is available through CocoaPods. To install, simply include the following lines in your podfile:
use_frameworks! pod 'SwiftOrderedSet'
Be sure to import the module at the top of your .swift files:
import SwiftOrderedSet
Carthage
OrderedSet is available through Carthage. Just add the following to your cartfile:
github "bradhilton/OrderedSet"
Be sure to import the module at the top of your .swift files:
import OrderedSet
Author
Brad Hilton, [email protected]
License
OrderedSet is available under the MIT license. See the LICENSE file for more info.
Latest podspec
{ "name": "SwiftOrderedSet", "version": "5.0.0", "summary": "Native Swift Ordered Set", "description": "A native Swift implementation of an ordered set. Supports the same behavior and functionality as native Swift arrays and sets, ensuring that each and every element in an ordered list only appears once.", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "Brad Hilton": "[email protected]" }, "source": { "git": "", "tag": "5.0.0" }, "swift_versions": "5.0", "platforms": { "ios": "8.0", "osx": "10.9" }, "source_files": [ "Sources", "Sources/**/*.{swift,h,m}" ], "requires_arc": true }
Mon, 01 Apr 2019 11:51:04 +0000 | https://tryexcept.com/articles/cocoapod/swiftorderedset | CC-MAIN-2019-43 | refinedweb | 275 | 61.22 |
Programming a Guide
Programming a Guide
Programming a Guide
Prepering the Segments
A guide runs different Segments.
A Segment requires some parameters:
- ID
- The ID of the Segment: 0, 1, 2, ...
- Text
- The text to send to the player.
- exitBySkip
- If the guide exits, when the segment should skipped.
- spaceable
- If the answer of the player can contain spaces.
- backSeg
- The ID of the Segment to go to, when the player uses "back".
- skipSeg
- The ID of the Segment to go to, when the player uses "skip" or "next".
Segments can look like this:);
- Using "back" at the first segment will exit the guide. "Skip" will continue with the second segment.
- Typing "cre a tive" at the second segment will be corrected to "creative", because spaces aren't allowed here.
- Using "back" at the second segment will go back to the first one. "Skip" is not allowed here.
- Using "skip" at the third segment will exit the guide.
- Make sure the skipSeg is 2, and nothing other!
The array of segments could look like this:
Segment[] segs = {seg1, seg2, seg3};
Easy, isn't it?
Prepareing the Receiver
Every time the player type in a message, the Guide will scan it.
There are some "commands" / words, which will have an effect on the Guide:
Any other message from the player will be send to the receiver!
package mypackage; import org.bukkit.command.CommandSender; import de.gcmclan.team.guides.Guide; import de.gcmclan.team.guides.GuideReceiver; import de.gcmclan.team.guides.Segment; public class MyGuide implements GuideReceiver { public MyGuide(){} @Override public int receive(Guide guide, Segment seg, String msg) { // Do something with the msg. return nextSeg; // The segment, which should now be called. /* * The return value can also be: * -1 To exit the Guide without any announcement. * -2 The finish the Guide. This will send the player a message: "You finished the Guide!" (Guide will be replace by the guideSyn) * -3 Exits the Guide, telling the player, that the Guide was exited by the plugin. */ } @Override public void finish(CommandSender s) { //Save data, if you wish to. } @Override public boolean usesId(String id) { //Does this Guide use a ID of an element? (ID of segments are not meant.) //If yes, is it the given String id equal to the ID in use? //If no, leave it blank. } }
Inside the receive(...) method, you should handle the message.
For our example it could look like this:
@Override public int receive(Guide guide, Segment seg, String msg) { switch(seg.getID()) { case 0: customPlayer.setName(msg); return 1; case 1: if(msg.equalsIgnoreCase("survial")) { MsgSender.sInfo(guide.getExecutor(), guide.getCmdSender(), "You'll play in survival mode!"); customPlayer.setGameMode(0); return 2; } else if(msg.equalsIgnoreCase("creative")) { MsgSender.sInfo(guide.getExecutor(), guide.getCmdSender(), "You'll play in creative mode!"); customPlayer.setGameMode(1); return 2; } else { MsgSender.sBug(guide.getExecutor(), guide.getCmdSender(), "You cannot play " + msg + "!"); return 1; } case 2: TrinaryAnswer answer = MethodPool.getAnswer(msg); if(answer == TrinaryAnswer.POSITIVE) { MsgSender.sInfo(guide.getExecutor(), guide.getCmdSender(), "Fine. :-)"); finish(guide.getCmdSender()); return -2; } else if(answer == TrinaryAnswer.NEGATIVE) { MsgSender.sInfo(guide.getExecutor(), guide.getCmdSender(), "Oh dear. :-("); finish(guide.getCmdSender()); return -2; } else { MsgSender.sBug(guide.getExecutor(), guide.getCmdSender(), "Your answer should be yes or no!"); MsgSender.sWarn(guide.getExecutor(), guide.getCmdSender(), "You typed: " + msg); return 2; } } return nextSeg; //The segment, which should now be called. }
Parameters of a Guide
A guide requires some parameters:
- GuideManager
- You can use the internal GuideManger
- You can also create your own guide manger
- GuideReceiver
- see below
- Executor
- The excecuting plugin -> Your plugin!
- GuideSyn
- A word for "Guide", like "Setup", "Editor" or "Welcome Guide".
- Player / Console
- The player or the console using the guide.
- Segments
- An array of segments.
Putting all together
Once, you prepared the Segments and the Receiver, you just have to fit all together:
public void runGuide(Plugin myPlugin, CommandSender sender) { GuidesPlugin gp = (GuidesPlugin) this.getServer().getPluginManager().getPlugin("GuidesPlugin");); Segment[] segs = {seg1, seg2, seg3}; GudieReceiver gr = new MyGuide(); Guide myGuide = new Guide(gp, myReceiver, myPlugin, "Welcome-Guide", sender, segs); gp.addGuide(myGuide, true); // True here, to deactivate possible running Guides of the player / sender. }
Now your're done!
Posts Quoted: | https://dev.bukkit.org/projects/guides/pages/programming-a-guide | CC-MAIN-2022-21 | refinedweb | 688 | 60.72 |
Proper Method to clear SerialElementbox86rowh Jun 16, 2010 1:57 PM
Hello,
If I have a player that is meant to be played multiple times with multiple videos, and I am playing them in a serialElement, what is th ebest way to stop playback and reset the serialElement so that it is fresh and ready to start again.
RIght now when I stop the video, swap the video out by rebuilding the serialElement, and start playing, i get both videos at the same time!
Has anyone else seen this?
Thanks
1. Re: Proper Method to clear SerialElementbringrags Jun 16, 2010 4:08 PM (in response to box86rowh)1 person found this helpful
I'd probably need to see the code to provide a diagnosis, but if I were trying to solve this problem I wouldn't use a SerialElement (though perhaps you have reason to do so). Instead, I would wait until the MediaPlayer dispatched the "complete" event, and then I would assign a new VideoElement to the MediaPlayer (first making sure that MediaPlayer.autoRewind is false so that you don't waste timing rewinding something that doesn't need to be rewound).
If you have good reason to use a SerialElement, the same approach would probably apply: when you get the "complete" event, construct a new SerialElement to assign to MediaPlayer.media.
2. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 7:28 AM (in response to bringrags)
I am using the serial element as it seemed the perfect way to easily get a pre and/or post roll ad or some other message into my video product.
Here is the code I am using to initialize the player (runs once at the beginning)
parentApplication.addEventListener(AdEvent.AD_CHANGE, changeAd);
player = new MediaPlayer();
player.volume = vol.value / 10;
player.addEventListener(TimeEvent.CURRENT_TIME_CHANGE, updateTime);
player.addEventListener(TimeEvent.COMPLETE, onDone);
player.addEventListener(MediaErrorEvent.MEDIA_ERROR, onError);
player.addEventListener(BufferEvent.BUFFERING_CHANGE, buffering);
player.addEventListener(DisplayObjectEvent.MEDIA_SIZE_CHANGE, sizeChange);
container = new MediaContainer();
container.width = width;
container.height = height;
par.addChild(container);
Here is the code that runs when play is requested:
public function play():void{
unloadMedia();
// prepare our playlist
comp = new SerialElement();
//setup preloader
if(preRoll != null){
comp.addChild(preRoll);
}
comp.addChild(getVideo(videoXml[videoIndex]));
if(postRoll != null){
comp.addChild(postRoll);
}
setMediaElement(comp);
}
private function setMediaElement(value:MediaElement):void
{
if (player.media != null)
{
container.removeMediaElement(player.media);
}
if (value != null)
{
container.addMediaElement(value);
}
player.media = value;
}
private function unloadMedia():void
{
setMediaElement(null);
comp = null;
}
And here is the code that runs when someone hits stop, or plays to the end of the sequence
public function stop():void{
player.stop();
}
My getVideo function returns a dynamic streaming videoElement. In the play function, I check if there are any pre and/or post rolls as I am building my serialElement (comp). If they are set, then they are added to the comp. They are very simple objects extending MediaElement using the display object trait. Just a single line of text in the middle of the stage.
If you need more code let me know, but I think this should do it as it covers the logic steps in my video product.
Thanks for any help!
Jason
3. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 7:30 AM (in response to box86rowh)
One more note, If I do not pass in my pre-roller and the composition is just the video and post roll, it works fine without any overlapping video playback. As soon as I add a mediaElement before the video, it acts up on the second play thru and on (first play is fine)
4. Re: Proper Method to clear SerialElementbringrags Jun 17, 2010 10:44 AM (in response to box86rowh)
Does your initial preroll have any traits other than the DisplayObjectTrait? Or are you wrapping it in a DurationElement to give it a fixed duration? I assume so, as the SerialElement wouldn't be able to play otherwise.
In any case, the code looks fine. One possibile cause might be that the SerialElement will be rewound (if MediaPlayer.autoRewind is true) and replayed (if MediaPlayer.loop is true). Not sure whether this is the case in your player (seems unlikely). You might want to set autoRewind to false, so that it won't be rewound when playback completes.
5. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 10:57 AM (in response to bringrags)
here is the code for my pre/post roll element
package classes
{
import flash.display.Graphics;
import flash.display.Sprite;
import flash.events.MouseEvent;
import flash.text.TextField;
import flash.text.TextFormat;
import flash.text.TextFormatAlign;
import org.osmf.media.MediaElement;
import org.osmf.metadata.Metadata;
import org.osmf.traits.DisplayObjectTrait;
import org.osmf.traits.MediaTraitType;
import com.greensock.TimelineMax;
import com.greensock.TweenMax;
import org.osmf.media.MediaElement;
public class CompNameRoller extends MediaElement
{
public var displayObject:Sprite;
public function CompNameRoller(compName:String, w:Number, h:Number)
{
super();
removeTrait(MediaTraitType.SEEK);
var format:TextFormat = new TextFormat();
format.font = "Arial";
format.color = 0xFFFFFF;
format.size = 18;
format.align = TextFormatAlign.CENTER;
var textField:TextField = new TextField();
textField.defaultTextFormat = format;
textField.text = compName;
textField.y = 10;
textField.width = w;
textField.height = h;
textField.selectable = false;
textField.alpha = .7;
displayObject = new Sprite();
displayObject.addChild(textField);
//var tl:TimelineMax = new TimelineMax();
//tl.append(new TweenMax(textField,1,{alpha: .7}));
//tl.insert(new TweenMax(textField,1,{alpha: 0}), 4);
//tl.play();
var displayObjectTrait:DisplayObjectTrait = new DisplayObjectTrait(displayObject, w, h);
addTrait(MediaTraitType.DISPLAY_OBJECT, displayObjectTrait);
}
}
}
I am indeed wrapping this in a 2 second durationElement, I set autoRewind to false and loop to false on the mediaPlayer object and I am still getting the same result.
6. Re: Proper Method to clear SerialElementbringrags Jun 17, 2010 11:28 AM (in response to box86rowh)
Hmm, looks fine. Nothing jumps out at me.
Some other thoughts... Does getVideo always return a new VideoElement? Or does it return the old VideoElement with a new resource? I can't imagine that that would be the problem. One thing to try would be to clear the resource on the VideoElement prior to switching to the new SerialElement. (This would obviously not be the ideal solution, but it might give us an indication of whether the original VideoElement is being played. Last, you could try adding event listeners (e.g. for playStateChange) to the SerialElement and/or its children, to get an idea of whether the old SerialElement or its children are still active even after you've cleared the SerialElement.
If none of the above works, feel free to send me the full source (briggs at you know where).
7. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 11:42 AM (in response to bringrags)
I will try the event listener idea now, but here is my getVideo function:
public function getVideo(vid:XML):MediaElement{
var bitrates:XMLList = vid.videoEncodes.videoEncode;
var resource:DynamicStreamingResource = new DynamicStreamingResource(vid.@server + "_definst_");
var vector:Vector.<DynamicStreamingItem> = new Vector.<DynamicStreamingItem> (bitrates.length());
for(var i:Number = 0; i < bitrates.length(); i++){
vector[i] = (new DynamicStreamingItem("MP4:" + String(bitrates[i].@fileName).substring(0, String(bitrates[i].@fileName).length - 4), Number(bitrates[i].@bitRate)));
}
resource.streamItems = vector;
var videoElement:VideoElement = new VideoElement(resource);
videoElement.smoothing = true;
return videoElement;
}
8. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 11:51 AM (in response to box86rowh)
Brian,
I modified the unloadMedia function like this as per your suggestion on clearing the resource and now it works great!
private function unloadMedia():void
{
if(comp != null){
for(var i:Number = 0; i < comp.numChildren; i++){
var me:MediaElement = comp.getChildAt(i);
if(me is VideoElement){
me.resource = null;
}
}
setMediaElement(null);
comp = null;
}
}
Is this the best way to do this? Thanks for your help in this issue.
Jason
9. Re: Proper Method to clear SerialElementbringrags Jun 17, 2010 12:01 PM (in response to box86rowh).
10. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 12:35 PM (in response to bringrags)
Good point, I will do that and let you know,
One more question for now, on my preroller that uses the displayObjectTrait, in that object, how can I trigger an animation when that element is displayed and kill it when it is off the stage?
Date: Thu, 17 Jun 2010 13:01:52 -0600
From: forums@adobe.com
To: jasonatsmtc@hotmail.com
Subject: Proper Method to clear SerialElement.
>
11. Re: Proper Method to clear SerialElementbringrags Jun 17, 2010 1:14 PM (in response to box86rowh)1 person found this helpful
You could try keying off of Event.ADDED_TO_STAGE and Event.REMOVED_FROM_STAGE.
12. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 1:23 PM (in response to bringrags)
I tried adding the lsuteners in the constructor on my preroller mediaelement :
addEventListener(Event.ADDED_TO_STAGE, startIt);
addEventListener(Event.REMOVED_FROM_STAGE, stopIt);
But the events never get triggered?
13. Re: Proper Method to clear SerialElementbox86rowh Jun 17, 2010 6:27 PM (in response to box86rowh)
OK, fixed this, I added a listener at the player level :
player.addEventListener(DisplayObjectEvent.DISPLAY_OBJECT_CHANGE, onChange);
and in that listener, I am checking the value of the currentChild in the serialElement, if it is a pre or post roll, i trigger my animation. works great.
Thanks again for the help.
Jason | https://forums.adobe.com/thread/661659 | CC-MAIN-2018-09 | refinedweb | 1,535 | 50.84 |
How to rewrite this list in Haskell?
I really miss haskell. But Python is not bad. It's easier to read than Ocaml. Not as fast as OCaml. But many more libraries. Haskell is for people who can think. I never quite made it over the hump with Haskell for some reason.
But anyway, I was wondering what a Haskell solution to this would look like. I'm sure it's just a one-liner making use of
concatMap() and a binary decision function that returns a list as a function of it's inputs:
def retokenize(l):
"""Given a list consisting of terms and the negation symbol, retokenize() creates a list which
(1) changes negation symbols to AND NOT and
(2) inserts AND between two terms without an interceding
conjunction symbol. Examples:
>>> sqlgen.retokenize( sqlgen.tokenize("cancer drug").asList() )
['cancer', 'AND', 'drug']
>>> sqlgen.retokenize( sqlgen.tokenize("cancer - drug").asList() )
['cancer', 'AND NOT', 'drug']
>>> sqlgen.retokenize( sqlgen.tokenize("cancer drug therapy").asList() )
['cancer', 'AND', 'drug', 'AND', 'therapy']
>>> sqlgen.retokenize( sqlgen.tokenize("cancer - drug - therapy").asList() )
['cancer', 'AND NOT', 'drug', 'AND NOT', 'therapy']
>>>
"""
o = []
for i, tok in enumerate(l):
if tok == neg:
o.append('AND NOT')
else:
if i == 0:
o.append(tok)
elif l[i-1] == neg:
o.append(tok)
else:
o.append('AND')
o.append(tok)
return o
I know my eyes are going to water when I see that elegant, purefuly functional solution :)
- metaperl's blog
- Login to post comments
I doubt this is the elegant solution you are looking for :)
-- stagger [a,b,c,d,e] = [(a,b), (b,c), (c,d), (d,e)]
-- so: map snd $ stagger xs == tail xs
-- map fst $ stagger xs == init xs
stagger :: [a] -> [(a,a)]
stagger precs@(x:xs) = zip precs xs
verbalize :: (String, String) -> [String]
verbalize ("-","-") = error "Double negative"
verbalize ("-",s) = ["AND NOT", s]
verbalize (_, "-") = [] -- FIXME: ignores final -
verbalize (_, s) = ["AND", s]
retokenize :: [String] -> [String]
retokenize [] = error "Empty list"
retokenize ("-":_) = error "Begins with -"
retokenize xs@(h:_) = [h] ++ concatMap verbalize (stagger xs)
I wrote this one out at work so I'm not 100% sure if I remember it correctly. I'll post later if I realise my mistake. But it went something like this:
I think there's a function a bit like breakRun in the MissingH package. So if you assume that, then yes, it's a one-liner! :) | http://sequence.complete.org/node/210 | CC-MAIN-2015-40 | refinedweb | 395 | 56.45 |
Recently.
There’s good reason why the django docs say not to do this… the path method is for when you want an absolute local path to the file; I can’t see how returning None in that method could possibly be good.
Woot! Thanks for figuring this out. So to be sure, you added the path method right after the url method, yeah?
I added what you suggested and still get an error when running collectstatic… traceback is here. Mind looking at it?
After I add this method, Django raise another error: AttributeError
‘NoneType’ object has no attribute ‘rfind’
Hey, just wanted to let everyone know how I got this working (without going against Django principles). Using .url or .name instead didn’t work, but S3Storage has a solution built in.
Instead of:
photo = Image.open(self.image.path)
Use:
from django.core.files.storage import default_storage as storage
photo = Image.open(storage.open(self.image.name))
Worked like a charm!
@Murph Murphy. I am having this exact problem, you post says you fixed it in a Dajngo way. Where exactly did you put those lines of code. I would love to get this working. I am rehosting an old site onto dotCloud and need a remote storage solution for uploaded photos.
Here is a snippet of code from my upload view:
uploaded_file = request.FILES['filedata']
photo = UserPhoto() # <– Photologue ImageModel
photo.user=profile.user
# Breaks here with the NotImplemented Error
photo.image.save(uploaded_file.name, uploaded_file)
photo.save()
Any advice?
@Wil Black: Check out this StackOverflow answer, I think this is what lead me to the solution I had above
This fork of django-photologue works with s3 perfectly: | http://blog.hardlycode.com/solving-django-storage-notimplementederror-2011-01/ | CC-MAIN-2015-11 | refinedweb | 280 | 59.3 |
anchor - UI constraints for Pythonista
Python wrapper for Apple iOS UI view layout constraints, available as
anchor.pyon GitHub. Run the file to see a sample constraint-driven layout.
Constraints?
Constraints are used to determine how views are laid out in your UI. They are an alternative to the
x,
y,
framemethod used in Pythonista by default.
Constraints are defined as equations, which are dynamically evaluated as the dimensions or views of your UI change. For example, the following constraint places the Cancel button always beside the Done button:
cancel_button.at.trailing == done_button.at.leading_padding
(Here, 'trailing' and 'leading' are same as 'right' and 'left', but automatically reversed if your device is set for a right-to-left language.)
Constraints can use the following attributes:
left, right, top, bottom, width, height
leading, trailing
center_x, center_y
last_baseline, first_baseline
left_margin, right_margin, top_margin, bottom_margin, leading_margin, trailing_margin
- Use these when you want to leave a standard margin between the view and the edge of its superview (inside margin).
left_padding, right_padding, top_padding, bottom_padding, leading_padding, trailing_padding
- Use these when you want to leave a standard margin between the view and the view next to it (outside margin).
Why would I need them?
It depends on your style and preferences regarding building UIs.
You can create pretty much all the same layouts and achieve the same level of dynamic behavior just using Pythonista's regular
frame,
flexattribute and the
layoutmethod.
The reason to consider constraints is that they, and the convenience methods in this wrapper, provide perhaps a more human way of expressing the desired layout. You can use one-liners for "keep this view below that other view, no matter what happens", or "this view takes over the top half of the screen, with margins", without fiddling with pixel calculations or creating several ui.Views just for the layout.
Anatomy of a constraint
Constraints have this syntax:
target_view.at.attribute == source_view.at.attribute * multiplier + constant
Notes:
targetview is now constrained and unaffected by setting
x,
y,
frameor
center- but you can read these values if you need to know the absolute shape and position of a view.
sourceview is unaffected and remains in the 'frame mode', until used on the left side of constraint.
- Relationship can be
==,
<=or
>=(but nothing else).
- You can also
/a multiplier or
-a constant, and have several multipliers and constants, but they will only be combined per type (i.e.
* 6 + 1 / 3 - 5is the same as
* 2 - 4).
- Multiplier can be zero or the source left out of the equation, but only if the target attribute is a size attribute, e.g.
target.at.height == 100
- Target and source attributes cannot mix:
- size and position attributes
- vertical and horizontal position attributes
- absolute and relative position attributes (e.g.
leadingand
left)
These are all Apple restrictions, and the wrapper checks for them to avoid an ObjC exception and a Pythonista crash. Please let me know if you find other crashing combos.
Enabling constraints
Pythonista UI views do not natively support constraints, of course, so we need to enable them.
The explicit option is to call
enableon the UI view, maybe at view creation. For example:
import anchor, ui label = anchor.enable(ui.Label(alignment=ui.ALIGN_CENTER)) label.at.width == 100
An alternative is to use already-enabled versions of every Pythonista UI view class, defined in anchor.py, so you can save a little typing by importing it like this:
from ui import * from anchor import * label = Label(alignment=ALIGN_CENTER) label.at.width == 100
Ambiguous constraints
When you constrain a view, you have to unambiguously constrain both its position and size. If you miss something, the view usually is not visible at all. To debug constraints, you can either check an individual view for problems with:
view.at.is_ambiguous
Or check your whole view hierarchy by:
anchor.check_ambiguity(root_view)
This will print out the whole hierarchy, and return any ambiguous views as a list.
To be continued
brilliant.
- shinyformica
Absolutely brilliant.
whaaaaaaaa
@JonB, @shinyformica, @cvp, thank you for the encouragement. Even though this was independently initiated, has different code and API, @zrzka has ”prior art” in the form of his proof of concept code from a while back.
Convenient view alignment
Enabled views have an
alignattribute that supports aligning matching attributes of views. For example, aligning the heights of two views:
search_field.align.height(search_button)
Using
alignis especially convenient when you need to align several views at once:
view_a.align.center_x(view_b, view_c)
In addition to all the regular constraint attributes like
heightand
center_xin the examples above,
alignsupports aligning the composite attributes
sizeand
center.
Convenient view placement within superview
Creating individual constraints can quickly become a bit of a bore. Thus the wrapper includes a number of methods for "docking" views.
For example, the following places constraints to the top and both sides, leaving height still undefined:
view.dock.top()
Following docking methods are available:
all, center, horizontal, vertical, horizontal_between, vertical_between, top, bottom, leading, trailing, top_leading, top_trailing, bottom_leading, bottom_trailing
The most specialized of these are the
_betweenmethods, which dock the view to the sides in one direction, and between the two given views in another. Here's an example:
result_area.dock.horizontal_between( search_button, done_button)
By default,
dockmethods leave a margin between the edges of the superview and the view. This can be adjusted with the
fitparameter:
Dock.MARGIN(the default) - standard margin
Dock.TIGHT- no margin
Dock.SAFE- align to the safe area insets, if applicable
You can also change the default by setting the
Dock.default_fitparameter, e.g.:
Dock.default_fit = Dock TIGHT
Many
dockmethods support
shareand
constantparameters.
shareparameter can be used to define how much of the superview's area the view should take:
view.dock.top(share=.5)
This is only exact if you use
TIGHTfit, as there is no way to dynamically account for the space taken by margins.
constantparameter can be used to adjust the margins manually, although I feel that this is probably bad layout design.
- Rajeshraj1990
Thanks for this information. best Anatomy apps for android and ios It useful
👍👌
—Peter | https://forum.omz-software.com/topic/5442/anchor-ui-constraints-for-pythonista/4 | CC-MAIN-2021-39 | refinedweb | 1,005 | 55.24 |
Handbook Talk:AMD64/Installation/Stage.
Removing stage3 after invoking tar xpf?
This is bloatware police, you ware caught leaving thrash behind, please fix your unemergefull behavior. •`_´•
Remove stage-3 after `tar xpf`is invoked and remove it from the finalizing.
--Kreyren (talk) 16:39, 15 September 2018 (UTC)
Read the handbook.... All the way through: Handbook:AMD64/Installation/Finalizing#Removing_tarballs. :P --Maffblaster (talk) 16:01, 10 October 2018 (UTC)
NTP
Maybe a pointer to use ntpdate (a la `ntpdate -s time.nist.gov`) should show up instead of just date? Not everybody has an accurate clock handy :) Hlzr (talk) 05:05, 7 August 2016 (UTC)
- Valid point, although this presumes the system as a connection to the internet, which I guess we're presuming in the first place since readers are instructed to download the stage 3 file. I'll consider adding an alternative here. Thanks for the input! --Maffblaster (talk) 19:39, 3 October 2016 (UTC)
Extended attributes while untarring
The --xattr option when untarring the stage3 tarball is ignored when using the minimal install iso for amd64 built on 20150709.--Bamapookie (talk) 21:57, 26 July 2015 (UTC)
- Grab a newer version of the ISO containing a newer version of tar. Should be fine at this point. --Maffblaster (talk) 19:36, 3 October 2016 (UTC)
Choosing a stage tarball
No-multilib (pure 64-bit)
Selecting a no-multilib tarball to be base of the new system will provides a complete 64-bit operating system environment.This effectively renders the ability able to switch to multilib profiles improbable (although it is not impossible).
A better (from grammar and context perspective) sentence would be :
Selecting a no-multilib tarball as base of the new system will provide only a complete 64-bit operating system environment. This effecti-vely renders the ability to switch to multilib profiles improbable ...
(Georgios Doumas)
- Georgios, I'll take a look and try to make some grammatical improvements. You can sign your messages on here by clicking the "Signature and timestamp" button in the format box above. :) --Maffblaster (talk) 20:34, 10 March 2016 (UTC)
- What about having a short mention in this section about the overview on the download page?--Charles17 (talk) 06:58, 11 March 2016 (UTC)
- Subpages open a bug on Bugzilla if you'd like. Main www is outside wiki scope. :) Kind regards, --Maffblaster (talk) 19:36, 3 October 2016 (UTC)
Portage and stage3 security recommendations
As outlined at bugs #597804 and #597800 portage does not operate securely by default. Changes that seem to be pending include:
- stage3 images will include cryptographic keys for the automated establishment of trust
- stage3 images will include a gentoo key management utility
While these are promising improvements, they have been pending for 4 years already and may not be completed soon, and are not enough to fully secure portage's operation, which currently requires manual processes.
Therefore, currently it would seem useful to add a pointer right here in the installation instructions to the secure portage configuration information already documented at Working with Gentoo / Portage Features / Fetching Files / Validated Portage tree snapshots. It might be better to move it here, rather than keep it there, since it's now critical.
Note that the secure sync only works for emerge-webrsync and no security is possible with traditional rsync. Walter (talk) 23:02, 29 October 2016 (UTC)
- Note that securing portage is pointless if the stage3 image downloaded has been compromised. Therefore I would suggest as a related change taking the current text regarding validation of stage3 and promoting it to its own subheading: Validating the stage tarball. In addition, the current text does not explain the problems with man in the middle attacks (in recent years well documented as utilized by state actors) that cannot be resolved with the current recommended process (ie. download stage3 and digests at same time over same network link from an official gentoo mirror - none of which are encrypted under any protocol = both digest and stage3 are compromised at the same time, therefore cannot be trusted). The current text is misleading. Suggested order of content:
- Big fat warning box saying that while the step is optional, if you skip this step there is absolutely no guarantee that you will ever have a secure system and it is highly recommended to bother.
- Rationale: Importance currently understated. New users perhaps unfamiliar with significance.
- Method of obtaining the Gentoo keys on a non-Gentoo host system being used as an install platform should be described. This uses HTTPS to obtain the key IDs, followed by the HKP GPG keyserver protocol (an unencrypted protocol based upon HTTP) to obtain the keys. Probably the keys themselves should be provided via HTTPS.
- Info box on additional high-assurance step of double-validating — re-fetching the same keys from another device/network connection or proxy server, preferably from a different mirror, eg. via smartphone with mobile data, Tor, a secure proxy, or an ssh tunnel (bold underlined super-obvious highlight that critically this must be run from outside the chroot - otherwise you are running potentially compromised code and giving it your remote server credentials!) to a network-geographically disparate server.
- Rationale: This protects against failures in SSL (eg. state-level attackers able to forge certificates to enable SSL MITM), locally compromised SSL certificate chains, MITM attacks on the current HKP (= HTTP = unencrypted) based GPG key acquisition process, and compromised mirrors.
- GPG validation of the stage3.
- Note: Text currently present.
- Rationale: Requirement for subsequent steps.
- Digest validation.
- Note: Currently before the above, and pointless from a security standpoint without first doing the above.
- Rationale: Validates downloaded binary.
- Immediate transition to a new top-level heading (between Installing a stage tarball and Configuring compile option) called Securing portage, with existing content from over here, and a big fat warning box that it is required to maintain system integrity (as per original point).
- -- Walter (talk) 23:18, 29 October 2016 (UTC)
- I saw your comment on bug #597800 pointing here. And I think you should also add your points as a new first chapter in the Security_Handbook even before the Pre installation concerns section.
--Charles17 (talk) 09:56, 28 December 2016 (UTC)
- Your suggestions and concerns did not go unnoticed. I did see them but have no gotten around to them at the moment. As for the present, it would probably be a good idea to capture the practical steps to integrate these changes in wiki markup in the Security Handbook as Charles17 suggested. I will review them and make determinations on each of them over the course of the next few days. Kind regards, --Maffblaster (talk) 21:53, 29 December 2016 (UTC)
Ping-back to 'Verifying the install ISO' when 'installing stage3'
I thought it might be handy to have a link in the Para. that starts "Just like with the ISO file..." back to the Installation/Media stage [1] for people like myself, who don't always download a fresh ISO image every install (we likely have one lying around!) but will always download an up-to-date clean stage3, and may not remember the GPG commands to download the Gentoo RelEng keys, and verify. There is a infobox for the verify command, alternatively we could repeat the 'downloading gentoo key' box into this para. if that was deemed appropriate instead.
-- veremit (talk) 21:43, 20 February 2017 (UTC)
HTTP to HTTPS
Please replace with https. Fturco (talk) 15:22, 16 April 2017 (UTC)
- This was completed some time ago. Perhaps I didn't see or forgot to close this discussion. Thanks, Francesco Turco (Fturco) . --Maffblaster (talk) 18:16, 29 December 2017 (UTC)
CFLAGS and CXXFLAGS
In the NOTE of section CFLAGS and CXXFLAGS, please add a link to Safe CFLAGS.
Note The GCC optimization guide has more information on how the various compilation options can affect a system.
Thanks. Luttztfz (talk) 12:43, 14 May 2017 (UTC)
- I still think it is good to also link to the GCC optimization guide, but I will add the additional reference to the Safe CFLAGs article and perhaps mention that Safe CFLAGs may be a more practical place for beginners to start. Thank you! --Maffblaster (talk) 17:06, 15 May 2017 (UTC)
Tarballs compressed via xz now
Starting this week, at least amd64 stage3's are now compressed via xz, not bz2. I'd adjust the tarball wildcard to stage3-*.tar.* or similar
Iamben (talk) 15:53, 29 December 2017 (UTC)
- Thanks, Ben! I have applyed a change that should work. Let me know if I missed something. --Maffblaster (talk) 20:55, 29 December 2017 (UTC)
- The command uses a wrong parameter because of the brace expansion (see "tar not work porperly"). --Feng (talk) 13:36, 31 December 2017 (UTC)
- I can second Feng's comment. Brace expansion and pathname expansion are two different things entirely. One should not use the former where the latter is intended. The present usage of brace expansion in the handbook constitutes a dangerous anti-pattern, as well as setting a bad example for users unfamiliar with shell mechanics. Consider the following code:
# mkdir empty && cd empty && touch foo.bz2 foo.bz2.DIGESTS # printf '<%s>' foo.{bz2,xz} # this is not a glob and cares not for existing pathnames <foo.bz2><foo.xz> # printf '<%s>' foo.* # this is a glob <foo.bz2><foo.bz2.DIGESTS> # shopt -s extglob # printf '<%s>' foo.@(bz2|xz) # this is a glob (of the extended variety) <foo.bz2> # printf '<%s>' foo.+([^.]) # and so is this <foo.bz2>
- In short, have the user enable the extglob shell option in order to exploit a relatively safe generic approach e.g.
tar xpf stage3-*.tar.@(bz2|xz). At present, the handbook contains no valid example of a command where the characteristics of brace expansion are appropriate. Also, the commands are not consistently generic. For example, the tar command exploits * as a globbing character, whereas the user is expected to fill in
<release>manually in other commands. EDIT: I just realised that Feng made a similar suggestion in Maffblaster's page. Indeed, his suggestion of using
stage3-*.tar.?(bz2|xz)is superior, as it virtually guarantees that one pathname, at most, is matched in the common case. --kerframil (talk) 04:23, 21 July 2018 (UTC)
The commands in the "Verifying and validating" chapter could be further improved IMHO. First, if extended globbing is to be used then a command `shopt -s extglob' should be added.
Second, the use of '<' and '>' causes problems if the command is just copypasted to a console window without editing. Why not replace <release> with a single asterisk, as already used for the `tar xvpf' command?
Finally, the `gpg --verify' command did not work for me. I got the complaint "gpg: not a detached signature" and the command exited with status 2 (the sha512 and whirlpool digests were OK though).
--Rafo (talk) 21:50, 1 January 2019 (UTC)
Properly preserve all xattrs like filecaps
For filecaps to be preserved on unpack (like for USE=filecaps on iputils so ping isn't setuid), --xattrs isnt quite enough, that won't preserve all xattr namespaces. --xattrs-include="*.*" should be enough, this will catch the user.* stuff for filecaps, security.* stuff for pax markings, and more.
Releng is currently disabling filecaps on affected pkgs for stage3 but would like to turn this caps on eventually. We need to get the handbook adjusted before that can (safely) happen.
Iamben (talk) 15:53, 29 December 2017 (UTC)
- Nice catch! Thank you for this information. I'll make the change now. --Maffblaster (talk) 20:57, 29 December 2017 (UTC)
Easy verify checksums via rhash
I propose add another way to check integrity of downloaded files. There is tool app-crypt/rhash which can automatically detect hash method from comments in DIGESTS files, so verifying downloaded files with SHA512 and WHIRLPOOL is simple as:
rhash -c stage3-*.tar.xz.DIGESTS
I found this as the most simple and universal method.
--Thaaxaco (talk) 20:07, 23 June 2018 (UTC)
Change command for stage3 to actual compression method
Currently, the command shown to extract the stage3 use the tar.bz2, but the stage3 is actually a tar.xz, so the instructions is outdated. — The preceding unsigned comment was added by Noamcore (talk • contribs) 19:08, 4 October 2018
It is 7DEC2019 and the commands in the handbook for AMD64 still show bz2 commands instead of -xvJpf for .tar.xz files.
RFC: Add big red beep about systemd stages
During years people keep asking "why do we have blocks installng gentoo with systemd???"
If you dig up you will find they are using "standard" openrc stages installing systemd-based systems. While eudev vs udev and openrc vs systemd blocks can be solved manually it looks missinformative for new users and they get confused. So I would like we add a note about systemd stages existence as well as this allows us to reduce amount of further questions. — The preceding unsigned comment was added by Zlogene (talk • contribs)
Missed format update for stage3 unpacking
All commands dealing with the stage 3 now correctly end in ?(bz2|xz) , except the actual extraction. As the up-to-date amd64 images at least are now all .xz this causes issues for people. Ideally add ?(bz2|xz) instead of bz2, or replace bz2 with xz.
Chymæra (talk) 02:10, 18 December 2018 (UTC)
The commands are still for bz2 extraction and not for xz.
Incomprehensible sentence: 'Depending on the installation medium, the only tool ...'
The following sentence doesn't make sense: 'Depending on the installation medium, the only tool necessary to download a stage tarball is a web browser.'
What does this sentence mean? Maybe something like: 'In order to download a stage tarball, you will need a web browser.'?
--Mike155 (talk) 20:46, 2 November 2019 (UTC)
Suggestions for Improvement
Quoted material is from "Installing the Gentoo installation files".
"Official Gentoo installation media includes the
ntpd command ..."
Ungrammatical. A plural subject demands a plural verb.
Official Gentoo installation media include the
ntpd command ...
"Official media includes a configuration file ..."
The same error. Media are plural.
Official media include a configuration file ...
"The
date command can also be used to perform a manual set on the system clock."
Very clumsy style. I suggest
The
date command can also be used to set the system clock.
"Depending on the installation medium, the only tool necessary to download a stage tarball is a web browser."
This is a non sequitur. I suggest either
Irrespective of the installation medium, the only tool necessary to download a stage tarball is a web browser.
or (and this is even better)
The only tool necessary to download a stage tarball is a web browser.
"To optimize Gentoo, it is possible to set a couple of variables which impacts the behavior of Portage ..."
Once again, a subject does not agree with its verb ("couple" is the implicit subject of the subordinate clause beginning "which ...").
... set a couple of variables which impact the behavior of Portage ...
"...too much optimization can make programs behave bad ..."
Adjectives are never verb modifiers, in the English language.
... too much optimization can make programs behave bad[ly] ...
Or, better yet,
... too much optimization can make programs misbehave ...
"The MAKEOPTS variable defines how many parallel compilations should occur when installing a package. A good choice is the number of CPUs (or CPU cores) in the system plus one, but this guideline isn't always perfect."
Linus Torvalds recommends setting this variable equal to twice the number of processors in the system. That way there is always one more compile job waiting to run when the previous one in that thread wraps up. It sort of makes sense, since Linux starts a separate thread running for each CPU core it finds fairly early during bootup.
user $
ps ux | grep cpuhp
root 13 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/0] root 14 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/1] root 19 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/2] root 24 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/3] root 29 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/4] root 34 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/5] root 39 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/6] root 44 0.0 0.0 0 0 ? S 08:59 0:00 [cpuhp/7]
"Update the
/mnt/gentoo/etc/portage/make.conf file to match personal preference and save (nano users would hit
Ctrl+
x)."
Users -- especially novice users -- probably ought to say
Ctrl+
o before they exit the program. Yes, I know that nano will prompt me to save the changes. But a guy can make mistakes that way. It's always safer to explicitly save your changes, then say "exit". I've learned that the hard way. --Davidbryant (talk) 16:48, 24 July 2020 (UTC)
Default download directory for stage3 tarball
The default location for downloading stage3 is /var/tmp/catalyst/builds/default it is written on page Catalyst I propose to mention this location on page and give link to the catalyst page and tool. Einstok Fair (talk) 03:17, 13 September 2020 (UTC)
- Most users are not going to use Catalyst particularly on their first install. Instead they will often download it from the gentoo.org website. --Grknight (talk) 13:13, 14 September 2020 (UTC)
Proposition to split stage3 into rstage3 and bstage3
For some systems only runtime dependencies are necessary. And for other systems full toolchains are necessary Einstok Fair (talk) 03:19, 13 September 2020 (UTC)
- This has nothing to do with improving the Handbook so not the correct place to discuss such sweeping changes. --Grknight (talk) 13:16, 14 September 2020 (UTC)
links browser
With the new admincd, using the browser "links" to view
suggests: "Verify the current date and time by running the date command:
root #date"
Rjc (talk) 00:16, 19 October 2020 (UTC) | https://wiki.gentoo.org/wiki/Handbook_Talk:AMD64/Installation/Stage | CC-MAIN-2021-04 | refinedweb | 3,019 | 64.61 |
Setuptools Examples¶
Simple Console Application¶
The most basic and simplest application to package is a simple console app with no dependencies:
print("Hello world")
Assuming this is saved as
main.py, we can use the following
requirements.txt file:
panda3d
The corresponding
setup.py file could look like so:
import setuptools setup( name="Hello World", options = { 'build_apps': { 'console_apps': {'hello_world': 'main.py'}, } } )
Then, we can build the binaries using
python setup.py build_apps.
A
build directory will be created and contain a directory for each platform
that binaries were built for. Since no platforms were specified, the defaults
were used (manylinux1_x86_64, macosx_10_6_x86_64, win_amd64).
Note, win32 is missing from the defaults. If a win32 build is desired, then
platforms must be defined in
setup.py and
win_amd64 added to the list:
import setuptools setup( name="Hello World", options = { 'build_apps': { 'console_apps': {'hello_world': 'main.py'}, 'platforms': [ 'manylinux1_x86_64', 'macosx_10_6_x86_64', 'win_amd64', 'win32', ], } } )
Asteroids Sample¶
Below is an example of a setup.py that can be used to build the Asteroids sample program.
from setuptools import setup setup( name="asteroids", options = { 'build_apps': { 'include_patterns': [ '**/*.png', '**/*.jpg', '**/*.egg', ], 'gui_apps': { 'asteroids': 'main.py', }, 'plugins': [ 'pandagl', 'p3openal_audio', ], } } )
With the setup.py in place, it can be run with:
python setup.py bdist_apps
The name field and options dictionary in the above setup.py can also be replaced by the following setup.cfg file:
[metadata] name = asteroids [build_apps] include_patterns = **/*.png **/*.jpg **/*.egg gui_apps = asteroids = main.py plugins = pandagl p3openal_audio | https://docs.panda3d.org/1.10/python/distribution/setuptools-examples | CC-MAIN-2020-05 | refinedweb | 237 | 52.46 |
Hi I have a code that plots 2D data from a .dat file (I'll call it filename.dat which is just a .dat file with 2 columns of numbers). It works fine, I just have some questions as to how to improve it.
How can I edit my code to make the axes label larger and add a title? This code is not so easy to edit the way I have it written now. I have tried adding the fontsize,title into the plotfile(...) command, but this did not work. Thanks for the help! My code is below.
import numpy as np
import matplotlib.pyplot as plt
#unpack file data
dat_file = np.loadtxt("filename.dat",unpack=True)
plt.plotfile('filename.dat', delimiter=' ',cols=(0,1), names=('x','y'),marker='0')
plt.show()
I assume you want to add them to the plot.
You can add a title with:
plt.title("MyTitle")
and you add text to the graph with
# the following coordinates need to be determined with experimentation # put them at a location and move them around if they are in # the way of the graph x = 5 y = 10 plt.text(x,y, "Your comment")
This can help you with the font sizes:
How to change the font size on a matplotlib plot | https://codedump.io/share/jY1Y26Ut0YC4/1/editing-plot---python | CC-MAIN-2017-39 | refinedweb | 215 | 77.64 |
problem link:
solution link:
where i am wrong i am getting the required answer, but while submiting it shows run time error.
problem link:
solution link:
where i am wrong i am getting the required answer, but while submiting it shows run time error.
Please put the correct submission link
Thats given naa. It is the link of my code of that particular problem.
There is quite a bit to fix, first off, the number of testcases can be 10000, whereas your array only stores 50. You can’t go forward and subtract 1 for each occurence as for example
1 2 1 1 3
You’ll subtract 2 when you see the first ‘1’ and one on the second ‘1’. So you subtract three instead of 2.
The submission link is not given. Click the number next to the verdict, and copy that link.
okk Array size can be problematic.
But in 1 2 1 1 3 it will subtract 2 not 3 because when first 1 encountered with 2nd and 3 rd 1 the values there become 101 and 102. As you can see i have used an extra variable any
which will set 101 and 102 at 2nd and 3rd one respectively.
I suggest plz go through it just once. you can run it by putting your test cases. it will give the required answer.
@everule1
now i have changes the array size now plz can you see where i am lagging.
link:
Take a guess
for(int i;i<num[z];i++){ cin>>arr[z][i]; }
#include<iostream> #include<bits/stdc++.h> using namespace std; int main(){ int t; cin>>t; while(t--){ } }
Solve for each test case inside the while loop. You overcomplicated the question. | https://discuss.codechef.com/t/runtime-error-cfrtest/55804 | CC-MAIN-2022-33 | refinedweb | 290 | 82.24 |
import io.vertx.core.AbstractVerticle; public class Server extends AbstractVerticle { public void start() { vertx.createHttpServer().requestHandler(req -> { req.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x!"); }).listen(8080); } }
import io.vertx.core.AbstractVerticle class Server : AbstractVerticle() { override fun start() { vertx.createHttpServer().requestHandler { req -> req.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x!") }.listen(8080) } }
vertx.createHttpServer().requestHandler({ req -> req.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x!") }).listen(8080)
Resource-efficient
Handle more requests with less resources compared to traditional stacks and frameworks based on blocking I/O. Vert.x is a great fit for all kinds of execution environments, including constrained environments like virtual machines and containers.
Don’t waste resources, increase deployment density and save money!
Concurrent and asynchronous
People told you asynchronous programming is too hard for you? We strive to make programming with Vert.x an approachable experience, without sacrifying correctness and performance.
You pick the model that works best for the task at hand: callbacks, promises, futures, reactive extensions, and (Kotlin) coroutines.
Flexible
Vert.x is a toolkit, not a framework, so it is naturally very composable and embeddable. We have no strong opinion on what your application structure should be like.
Select the modules and clients you need and compose them as you craft your application. Vert.x will always adapt and scale depending on your needs.
Vert.x is fun
Forget complexity and costly abstractions. With Vert.x, what you write is actually what you get to execute! Get back to simple designs, forget some of the established “best practices”, and enjoy writing code that is comprehensible and that won’t let you down in the future.
We also have a friendly community, so you can learn from people who have used Vert.x in very diverse settings.
Ecosystem
Web APIs, databases, messaging, event streams, cloud, registries, security… you name it. Vert.x has you covered with a comprehensive end-to-end reactive clients stack for modern applications.
And if you can’t find what you are looking for, there is a very strong chance that someone else has done it in the wider Vert.x open-source ecosystem. Vert.x is a safe investment for your technology stack.
Read about Vert.x
Vert.x in Action teaches you how to build responsive, resilient, and scalable JVM applications with Vert.x using well-established reactive design patterns. | https://vertx.io/ | CC-MAIN-2021-31 | refinedweb | 403 | 52.05 |
Hi
I write code for microcontroller I want to write my own scheduler for microcontroller. But I am not getting any idea even after searching a lot
scheduling is process by which operating...
Hi
I write code for microcontroller I want to write my own scheduler for microcontroller. But I am not getting any idea even after searching a lot
scheduling is process by which operating...
Thank you laserlight and salem
okay So I made a list just to store student fees.
#include<stdio.h>#include<stdlib.h>
struct node{
int Fees;
struct node *next;
I want to write a C program that stores the records of student studying in school. I would like to take less memory to store record
format for record
Name :
Class :
Fees :
Year :
...
That I know malloc allocate memory at run time but again there is sentence " allocate memory at run time " That's not clear for me
When we write a program, we build and compile it, then the program runs on the PC, which is called the run time.
we say allocate memory at RUN time, This confused me a lot, What's meaning of...
constant qualifier indicate that value of variable will not change while volatile variable indicate that value of variable may be change
Can we modify the const qualifier with volatile?
compile time means that we have written the program and we have to do compile it to check the error. Run time means that the program has been compiled and now it will run on the device
Whenever...
I am looking for example that prove dynamic memory will be useful instead of static memory. Dynamic memory is allocated at run time where as static is allocated at compile time.
I found example...
#include <stdio.h>
struct point
{
int x;
char y;
float z;
}var;
That's what I am trying to do it ?
so can you give idea , pesudo code be batter
so can you give me idea How it can be achieve without sorting
in simple way if i ask to someone who doesn't have programming knowledge how his brain work to find repeated number, what does he...
my second option is sorting because I don't know much about it
I have practiced with basics It would to early to jump on sorting method, ofcourse I will learn but not now. so that's why i want to...
just for basic understanding, do you know how that algorithm can be implement in program ?
Note : not asking complete program just asking process or pescudo code
I have no idea how to write program to find repeated number in the given sequence
Numbers[ ] = [ 6, 4, 2, 1, 3, 1 ]
Program output : Number 1 repeats 2 times
My attempt to make algorithm ...
Embedded is the combination hardware and software,
How do you practicing c/c++ ?
Do you have any development board ?
Which micro you were asking , there are many 8051, PIC, ARM ?
There are so...
As much as I have seen, we keep the micro in the header file. So I created two files main.c other.h
What could be best example for #ifdef, #if, #defined, #else and #elseif directives ?
I do not understand what should be in main function that's why left main empty. I don't found complete code that's why tried with my own code
I have seen in many link there is more theoretical description about preprocessor
C Preprocessor and Macros
I want to experiment by writing some code
#include <stdio.h>#include "other.h"
What you said in post 2 is given in page integer types" table
suppose we want to store number -32768 to 32767
#include<stdio.h> int main(void)
{
short a = 32767;
printf("max...
That information given on this page C syntax - Wikipedia
Size qualifiers short, long
Sign qualifiers signed, unsigned
Data types - int, char, float, double
<Sign qualifiers >...
I am having trouble to understanding this combination to store variable. I do not understand which one should be use for specific reason
let's suppose if I want to store integer number then I...
Okay so I will leave it here
I was looking some sample to create basic scheduler in c programming.
I am trying to run code given in the example GitHub - EmbeddedApprentice/TaskTurner: A first, very short beginning for the TaskTurner.
When I unzip folder I get only three file taskrunner.h...
laserlight
I did changes
#include<stdio.h>
//#include "../include/errors.h"
#include<windows.h>
#include "../include/tasks.h"
#include "../include/taskrunner.h"
I am trying to run sample code on my PC
I have following three files taskrunner.h task.h runtask.c
#ifndef TASKRUNNER_H#define TASKRUNNER_H
int taskrunner(Task * TaskList, unsigned int... | https://cboard.cprogramming.com/search.php?s=4034049aa1b526c9381d3a7af26c751e&searchid=7058638 | CC-MAIN-2021-31 | refinedweb | 790 | 69.92 |
In this tutorial, we will go over several ways that you can use to subset a dataframe. If you are importing data into Python then you must be aware of Data Frames. A DataFrame is a two-dimensional data structure, i.e., data is aligned in a tabular fashion in rows and columns.
Subsetting a data frame is the process of selecting a set of desired rows and columns from the data frame.
You can select:
- all rows and limited columns
- all columns and limited rows
- limited rows and limited columns.
Subsetting a data frame is important as it allows you to access only a certain part of the data frame. This comes in handy when you want to reduce the number of parameters in your data frame.
Let’s start with importing a dataset to work on.
Importing the Data to Build the Dataframe
In this tutorial we are using the California Housing dataset.
Let’s start with importing the data into a data frame using pandas.
import pandas as pd housing = pd.read_csv("/sample_data/california_housing.csv") housing.head()
Our csv file is now stored in housing variable as a Pandas data frame.
Select a Subset of a Dataframe using the Indexing Operator
Indexing Operator is just a fancy name for square brackets. You can select columns, rows, and a combination of rows and columns using just the square brackets. Let’s see this in action.
1. Selecting Only Columns
To select a column using indexing operator use the following line of code.
housing['population']
This line of code selects the column with label as ‘population’ and displays all row values corresponding to that.
You can also select multiple columns using indexing operator.
housing[['population', 'households' ]]
To subset a dataframe and store it, use the following line of code :
housing_subset = housing[['population', 'households' ]] housing_subset.head()
This creates a separate data frame as a subset of the original one.
2. Selecting Rows
You can use the indexing operator to select specific rows based on certain conditions.
For example to select rows having population greater than 500 you can use the following line of code.
population_500 = housing[housing['population']>500] population_500
You can also further subset a data frame. For example, let’s try and filter rows from our housing_subset data frame that we created above.
population_500 = housing_subset[housing['population']>500] population_500
Note that the two outputs above have the same number of rows (which they should).
Subset a Dataframe using Python .loc()
.loc indexer is an effective way to select rows and columns from the data frame. It can also be used to select rows and columns simultaneously.
An important thing to remember is that .loc() works on the labels of rows and columns. After this, we will look at .iloc() that is based on an index of rows and columns.
1. Selecting Rows with loc()
To select a single row using .loc() use the following line of code.
housing.loc[1]
To select multiple rows use :
housing.loc[[1,5,7]]
You can also slice the rows between a starting index and ending index.
housing.loc[1:7]
2. Selecting rows and columns
To select specific rows and specific columns out of the data frame, use the following line of code :
housing.loc[1:7,['population', 'households']]
This line of code selects rows from 1 to 7 and columns corresponding to the labels ‘population’ and ‘housing’.
Subset a Dataframe using Python iloc()
iloc() function is short for integer location. It works entirely on integer indexing for both rows and columns.
To select a subset of rows and columns using iloc() use the following line of code:
housing.iloc[[2,3,6], [3, 5]]
This line of code selects row number 2, 3 and 6 along with column number 3 and 5.
Using iloc saves you from writing the complete labels of rows and columns.
You can also use iloc() to select rows or columns individually just like loc() after replacing the labels with integers.
Conclusion
This tutorial was about subsetting a data frame in python using square brackets, loc and iloc. We learnt how to import a dataset into a data frame and then how to filter rows and columns from the data frame. | https://www.askpython.com/python/examples/subset-a-dataframe | CC-MAIN-2021-31 | refinedweb | 705 | 57.06 |
24 February 2010 10:01 [Source: ICIS news]
SINGAPORE (ICIS news)--Chinese domestic styrene butadiene rubber (SBR) prices rose by about 6% this week, lifted by the rebound in values of crude and natural rubber futures, traders said on Wednesday.
Non-oil grade 1502 SBR jumped yuan (CNY) 1,000/tonne ($146.4/tonne) this week to CNY17,800-18,000/tonne ex-warehouse (EXWH) as crude topped $80/bbl early this week.
Natural rubber prices, meanwhile, surged CNY2,000/tonne from the week ending 12 February to CNY25,000/tonne currently, traders said.
The Chinese market was closed the whole of last week for the Lunar New Year celebrations.
The price spike in the domestic Chinese market has spurred SBR producers to hike offers for non-oil grade 1502 imports to $2,200/tonne CIF (cost, freight and insurance) ?xml:namespace>
For more on styrene butadiene rubber | http://www.icis.com/Articles/2010/02/24/9337309/china-domestic-sbr-prices-surge-6-post-lunar-new-year.html | CC-MAIN-2014-52 | refinedweb | 148 | 61.56 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
import invoices directly into database
I'm migrating data from openerp 6 to openerp 7, I've chosen to do it directly into database because it's very fast. But I'm having a problem with invoices, they are all imported with the correct data, but I can't cancel them or pay them (actually I can pay them but the state doesn't change).
I found this link, the author of the topic had the same a problem as me, but I don't quite understand the answer. And why the problem doesn't occure with sales orders, delivery orders...?
I've tried using xml-rpc, but the sate is not imported, all my invoices are in draft state.
Any help is welcomed.
We have faced the same thing. Basically if you just import data into the database the ORM will not have attached any workflow to it. In other words, it will be a "dead" object.
The workflow is attached by the ORM when you create an object through the regular OpenERP way. The create does much more, but the most important thing is the workflow.
If you use the XML-RPC connection to create new invoices, they will be just that; new invoices (so in draft state). A possibility would be to create all the invoices with XML-RPC and then using the newly created ids trigger the workflow. So basically you will be using the XML-RPC onnection to invoke the create method and then afterwards the method for the button to confirm the invoices.
The biggest downside to this import is the fact that some business logic might alter your invoices when buttons are pressed. You must always fill in the correct data (like in the case of the invoice; the invoice date) otherwise OpenERP uses the defaults for that, which is almost always incorrect in the case of historical import.
In short:
- Use XML-RPC to create invoices
- Use XML-RPC in a for loop over all the newly create invoices to trigger the method that confirms said invoices, just like you pressed the buttons by hand.
@Ludo,
Thanks for the info. Do you have any suggestions on how to import [historical] invoices into Odoo 8.0 through the standard import/export functionality, or does this type of operation require bypassing the default import/export tools?
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/import-invoices-directly-into-database-54898 | CC-MAIN-2017-17 | refinedweb | 447 | 62.07 |
Flash Player 9 and later, Adobe AIR 1.0 and
later
Understanding Event class properties
Understanding Event class methods
Subclasses of the Event class
Event objects
serve two main purposes in the new event-handling system. First, event
objects represent actual events by storing information about specific events
in a set of properties. Second, event objects contain a set of methods
that allow you to manipulate event objects and affect the behavior
of the event-handling system.
To facilitate
access to these properties and methods, the Flash Player API defines an
Event class that serves as the base class for all event objects.
The Event class defines a fundamental set of properties and methods
that are common to all event objects.
This section begins with a discussion of the Event class properties,
continues with a description of the Event class methods, and concludes
with an explanation of why subclasses of the Event class exist.
The Event class defines a number of read-only properties and
constants that provide important information about an event object.The
following are especially important:
Event object types are represented
by constants and stored in the Event.type property.
Whether an event’s default
behavior can be prevented is represented by a Boolean value and
stored in the Event.cancelable property.
Event flow information is contained in the remaining properties.
Every event object has an associated event
type. Event types are stored in the Event.type property
as string values. It is useful to know the type of an event object
so that your code can distinguish objects of different types from
one another. For example, the following code specifies that the clickHandler() listener
function should respond to any mouse click event objects that are passed
to myDisplayObject:
myDisplayObject.addEventListener(MouseEvent.CLICK, clickHandler);
Some two dozen event types are
associated with the Event class itself and are represented by Event
class constants, some of which are shown in the following excerpt
from the Event class definition:
package flash.events
{
public class Event
{
// class constants
public static const ACTIVATE:String = "activate";
public static const ADDED:String= "added";
// remaining constants omitted for brevity
}
}:
rather
than:
myDisplayObject.addEventListener("click", clickHandler);
Your code can check whether the default
behavior for any given event object can be prevented by accessing
the cancelable property. The cancelable property
holds a Boolean value that indicates whether or not a default behavior can
be prevented. You can prevent, or cancel, the default behavior associated with
a small number of events using the preventDefault() method.
For more information, see Canceling default event behavior under Understanding Event class methods.
The remaining Event class properties contain
important information about an event object and its relationship
to the event flow, as described in the following list:
The bubbles property contains information
about the parts of the event flow in which the event object participates.
The eventPhase property indicates the current
phase in the event flow.
The target property stores a reference to
the event target.
The currentTarget property stores a reference
to the display list object that is currently processing the event
object.
An event is said to bubble
if its event object participates in the bubbling phase of the event
flow, which means that the event object is passed from the target
node back through its ancestors until it reaches the Stage. The Event.bubbles property
stores a Boolean value that indicates whether the event object participates
in the bubbling phase. Because all events that bubble also participate
in the capture and target phases, any event that bubbles participates
in all three of the event flow phases. If the value is true,
the event object participates in all three phases. If the value
is false, the event object does not participate
in the bubbling phase.
You can determine the
event phase for any event object by investigating its eventPhase property.
The eventPhase property contains an unsigned integer
value that represents one of the three phases of the event flow.
The Flash Player API defines a separate EventPhase class that contains
three constants that correspond to the three unsigned integer values,
as shown in the following code excerpt:
package flash.events
{
public final class EventPhase
{
public static const CAPTURING_PHASE:uint = 1;
public static const AT_TARGET:uint = 2;
public static const BUBBLING_PHASE:uint= 3;
}
}
These constants correspond to the three valid values
of the eventPhase property. You can use these constants
to make your code more readable. For example, if you want to ensure
that a function named myFunc() is called only if the
event target is in the target stage, you can use the following code
to test for this condition:
if (event.eventPhase == EventPhase.AT_TARGET)
{
myFunc();
}.
There
are three categories of Event class methods:
Utility methods, which can create copies of an event
object or convert it to a string
Event flow methods, which remove event objects from the event
flow
Default behavior methods, which prevent default behavior
or check whether it has been prevented
There are two utility methods in the Event
class. The clone() method allows you to create
copies of an event object. The toString() method
allows you to generate a string representation of the properties
of an event object along with their values. Both of these methods
are used internally by the event model system, but are exposed to
developers for general use.
For advanced developers creating
subclasses of the Event class, you must override and implement versions
of both utility methods to ensure that the event subclass will work
properly.
You
can call either the Event.stopPropagation() method
or the Event.stopImmediatePropagation() method
to prevent an event object from continuing on its way through the
event flow. The two methods are nearly identical and differ only
in whether the current node’s other event listeners are allowed
to execute:
The Event.stopPropagation() method
prevents the event object from moving on to the next node, but only
after any other event listeners on the current node are allowed
to execute.
The Event.stopImmediatePropagation() method
also prevents the event object from moving on to the next node,
but does not allow any other event listeners on the current node
to execute.
Calling either of these methods has
no effect on whether the default behavior associated with an event
occurs. Use the default behavior methods of the Event class to prevent
default behavior.
The two methods that pertain to canceling
default behavior are the preventDefault() method
and the isDefaultPrevented() method. Call the preventDefault() method
to cancel the default behavior associated with an event. To check
whether preventDefault() has already been called
on an event object, call the isDefaultPrevented() method, which
returns a value of true if the method has already
been called and false otherwise.
The preventDefault() method
will work only if the event’s default behavior can be cancelled.
You can check whether this is the case by referring to the API documentation
for that event type, or by using ActionScript to examine the cancelable property
of the event object.
Canceling the default behavior has no
effect on the progress of an event object through the event flow.
Use the event flow methods of the Event class to remove an event
object from the event flow.
For many
events, the common set of properties defined in the Event class
is sufficient. Other events, however, have unique characteristics
that cannot be captured by the properties available in the Event
class. For these events, ActionScript 3.0 defines several subclasses
of the Event class.
Each subclass
provides additional properties and event types that are unique to that
category of events. For example, events related to mouse input have
several unique characteristics that cannot be captured by the properties
defined in the Event class. The MouseEvent class extends the Event
class by adding ten properties that contain information such as
the location of the mouse event and whether specific keys were pressed
during the mouse event.
An Event subclass also contains constants that represent the
event types that are associated with the subclass. For example,
the MouseEvent class defines constants for several mouse event types,
include the click, doubleClick, mouseDown,
and mouseUp event types.
As described in the section on Event class utility methods under Event objects, when creating an Event subclass you must override
the clone() and toString() methods
to provide functionality specific to the subclass.
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | https://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7e55.html | CC-MAIN-2017-47 | refinedweb | 1,402 | 50.97 |
Details
Description
Dear sir/madam,
We are using Log4Net 1.2.10. We encounter the problem that Log4net doesn't continue logging after an event that triggers an appdomain recycle/restart.
In the global.asax we start the logging with:
private static readonly ILog log = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
Logging works flawless when the application is started for the first time. After sometime it might occur that the appdomain gets recycled due to inactivity of the web application. We use the following code in Application_end():
log.Info("*** Application end ***");
log4net.LogManager.Shutdown();
After this function the application gets restarted and the Application_start() method executes and writes new lines to the log. The problem is that the log4net doesn't write the new lines after the restart. Could you explain why log4net might stop working after an appdomain restart of an asp.net2.0 web application? If I want log4net to work properly again I need to restart IIS manually.
Looking forward to your reply.
Best regards,
Richard Nijkamp
Issue Links
- is duplicated by
LOG4NET-200 Log4Net Dll stops logging after 1 or 2 days
- Resolved
Activity
- All
- Work Log
- History
- Activity
- Transitions
We have also similar situation. We're using rollingfileappender.
What has helped on some cases is to call on global.asax Xmlconfigurator.ConfigureAndWatch -method on Init(), but this trick does not work 100% of the time. It just stops logging.
Even restarting IIS does not help. Sometimes logging resumes after renaming the logfile, sometimes not. Same thing happens when adding the processid into it.
Appender in setup like this:
<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="Webservice.service.txt" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<appendToFile value="true" />
<rollingStyle value="Size" />
<maxSizeRollBackups value="10" />
<maximumFileSize value="1000KB" />
<staticLogFileName value="false" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date %-5level %logger (%file:%line) - %message%newline" />
</layout>
</appender>
Nothing really helpfull in the log4net debug after
log4net: Hierarchy: Shutdown called on Hierarchy [log4net-default-repository]
Xmlconfigurator entires are never shown and logging stopped.
On some systems this does not happen at all, on some places it happens all the time. Took a while before I got it in our test environment, but now it happens in there all the time as well...
I have a similar issue in that logging functions fine for an IIS 6.0 ASP.NET 2.0 web service, however all logging ends after a recycle event. Manually recycling the Application Pool causes the failure every time.
Like Richard, I initiate logging in the global application start event, and log events on application start and end (as well as other places).
I am using the Rhino.Commons.Logging.RollingSqlliteAppender which inherits from Rhino.Commons.Logging.RollingEmbeddedDatabaseAppender which in turn inherits from the Log4Net AdoNetAppender.
You can see the code here:
Thanks
Can you check whether the problem still exists in 1.2.11?
Sorry, I should have indicated as much. Yes, the problem still exists in 1.2.11.
I recompiled the Rhino.Commons.Logging code against the new assembly as well.
Are you using attributes for configuration or are you invoking Configure programmatically?
Probably the latter (I just re-read your "initiate logging" part again).
If you do so in code: the configure methods in 1.2.11 return a collection of diagnostic messages now (they used to be void methods). Anything useful in there?
We have found a workaround: in your webservice class constructor call
log4net.Config.XmlConfigurator.ConfigureAndWatch(new System.IO.FileInfo(AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "log4net.config"));
That will reinitialize logging and make it work.
I have a logging.config file, which was at one time part of the web.config, but I have recently split it out in a separate file while trying to debug this issue. I have a tag for the XmlConfigurator in my assembly info file, with a referential path to "Logging.config".
While trying to debug this issue, I also have tried calling log4net.Config.XmlConfigurator.ConfigureAndWatch and log4net.Config.XmlConfigurator.Configure within the global application start event. That is what you would have read above.
When calling these, I have referenced just "Logging.config" and System.IO.Path.Combine(HttpRuntime.AppDomainAppPath, "Logging.config")) just in case it was losing its path to the application root.
I'd be happy to try applying the configuration programmatically in a different manner if you could point me to some documentation or a quick example to get me started.
Global app events won't work, put the ConfigureAndWatch into your webservice class contructor.
Note that once it has already failed, you need to reset iis or better yet restart whole computer. Otherwise it will fail again.
like this:
public Service(){ log4net.Config.XmlConfigurator.ConfigureAndWatch(new System.IO.FileInfo(AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "log4net.config")); }
@Matthiew: ConfigureAndWatch in your application start event was what I was thinking about, and using the full path won't hurt.
@Ilpo: this is strange as it indicates the re-configuration inside the application start event didn't work.
I assume turning on internal debugging of log4net still doesn't provide anything useful.
There is still something unfinished when these global app events are triggered on IIS side, class constructor works 100% and we have been now using that successfully for years.
Debugging just showed that it stops processing these configurations and thinks that is has already done it. It also sometimes caused whole service to hang.
Just to add my bit. I have (only) one IIS hosted service and I blindly put the call to configure log4Net in the class constructor and it has been working fine across app pool restarts and changes to web.config which also restart the app pool.
Neither Configure or ConfigureAndWatch are working for me in the class constructor. However, I am unable to reset IIS at the moment as Ilpo has suggested; I will try this later.
The comment "Note that once it has already failed, you need to reset iis or better yet restart whole computer. Otherwise it will fail again." is concerning.
I'm also curious about the impact of the "AndWatch" functionality if you are triggering it in the class constructor upon every instantiation.
I have a singleton class. If you don't, have a singleton, then look at a static flag indicating you have configured log4Net.
Also, note the AndWatch is simply so that if you edit the config file, log4Net will reconfigure. If you do not want that behavour, leave off the AndWatch.
I can easily see how the Application_End event may end up not cleaning up everything if stopping the appenders takes too long.
But so far I've always trusted the Application_Start events. Ilpo and Roy, any idea why this wouldn't work?
No idea why we got it working in class constructor, but it is working and has been working. We have app pools recycled quite often due to issus on another 3rd party component.
When you are testing this, have a clean IIS process. If it has already failed, restart whole computer and try only then. It leaves something behind. Have not debugged what that is, but there is something which is not cleaned by app pool restart.
Ilpo and Roy, how and where are you instantiating your loggers? Like Richard did above, and only in global? Multiple places? What is the scope of your logger?
I wonder how IIS is managing the lifetime of the object, and if this is affecting how it behaves during a recycle?
On all webservice classes that need logging like this:
public class Service : System.Web.Services.WebService
{
private static readonly log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
public Service(){ log4net.Config.XmlConfigurator.ConfigureAndWatch(new System.IO.FileInfo(AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "log4net.config")); }
Other classes logging work just fine after this.
Stefan, I have no idea. I never tried it the other way.
Matthew, I do not even have the global appstart and append methods in my code. I have a IIS hosted web service (.svc). In the constructor for the class that implments the service, I make reference to a singleton class that holds a lot of redundant and static data. The singleton class constructor calls the log4Net configure as its first operation. I create a static logger for each class that I have logging in. I have wrapper code that can determine the object instansation of the class so that each instansation of the class is displayed differently in the log.
An interesting, but not extermely suprising, note is that even though I create the logger for the service implementation class before configuration log4Net, it still works and logs entries correctly for the first creation of the service class.
BTW. I do NOT call the log4Net "shutdown" method. I have used log4Net in about 20 self-hosted services and several desktop applications and now one IIS hosted service, and I have never called that method. I have not looked at the code to see what it does and why I would call it, but I cannot say that I have seen a down side to not calling it.
I rebooted, the logging started fine. I manually recycled the application pool, and it no longer logs.
I see this issue as well. Exactly the same, IIS 6, asp.net 4.0, log4Net 1.2.11. I'm using the standard RollingFileAppender.
We've just come across this issue when hosting in Azure (IIS7.5) - logging stops on all our instances until we "kick" the application by changing the web.config.
A bit concerned that this bug > 12 months old and makes the module useless on a daily basis.
I am no expert on IIS web applications, but it would look like a misconfigured log4net configuration and/or a missing initialization. If logging starts by changing the web.config, log4net is "watching" for changes in the web.config and respawns the appenders when that happens. This means that either:
- log4net gets never started
- log4net gets stopped and not restarted
Thus - in theory - adding an initialization in the right spot alike the one in comment could fix the issue.
Despite of these IIS settings processModel.idleTimeout = 0 and recycling.periodicRestart.time = 0, we have the same issue (Azure, IIS 7.5, ADONetAppender, separate log4net.config to web.config). Our temporary solution is pinging the page every N minutes.
Hi Everyone,
Thanks the tip from Dominik, he's right, it's completely about CONFIGURATION.
We turned on the log4net internal logging (log4net.Internal.Debug=true...etc.) for more details. The root of our problem is about forgetting this code log4net.Config.XmlConfigurator.Configure() in Global.asax.cs (Application_Start). REMOVE IT!
We have separate log4net.config from web.config and in the AssemblyInfo.cs we've already have this code [assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]. It must be enough.
When the recycle occured (Application_Start is activated) the log4net tried to looking for its configuration in web.config instead of log4net.config (because of the mentioned code log4net.Config.XmlConfigurator.Configure()). It's certainly failed, therefore stop logging.
Glad to hear you were able to sort out this nasty problem. Can this bug therefore be closed?
Above configuration is not the whole truth. What we originally had was exactly like it is for desktop application, using this [assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]. It does not really matter what the file name is, logging just ends at some point when IIS is recycling application pools.
There is some timing issue how this configuration/shutdown is handled. Application start seems to happen before previous shutdown is called thus even new instance gets logging shutdown. Using configuration call from class constructor is late enough so that it really does the initialization for new instance and does not get this shutdown call from previous application pool. We have used this method now for years in hundreds of installations on different environments and works 100% of the time.
Sometimes this happens, sometimes not. On some environments this happens immediately, on some it takes a while to reappear, but it will eventually be there.
So the general consensus is that the issue is:
1] IIS (7.5?) specific
2] a timing problem caused by misplaced shutdown / start events
3] solvable with a "manual" reconfiguration of log4net in the right spot
4] not solvable in log4net itself since log4net has to handle events when they are raised
If these conditions are met I would close this issue with resolution "invalid" and add a new FAQ entry at describing the issue and a rather detailed description how it can be fixed. To be able to do that, would one of you guys please write down an answer to the question:
"Why does my IIS hosted web application stop logging after some time?"
By the way: has someone tested this scenario for IIS 8.0?
We tested it in these environments (and it's working):
DEV/INT: Windows 2008 R2, IIS 7.5, SQL 2008 R2
UAT/LIVE: Azure Cloud Services (Windows 2012, IIS 8), SQL Azure - it should be working the normal IIS as well.
All I can suggest is turning on the log4net debugging features and fix all ERROR (in our case it was incorrect configuration). If there's no ERROR shows in the log4net debug log then it should be working.
Dominik:
1] This definitely happens in IIS 6.0 as well as 7.5, I can't speak to 8.0.
2] This is likely true, it probably has something to do with the fact that IIS runs a (shadow?) copy of the deployed code, not the code itself. Thus it might be starting a new instance (copy) while a previous instance (different copy) is terminating.
3] I've not had any luck with Ilpo's approach, but perhaps with another attempt, I might... (I just don't have the time at the moment).
4] I don't know that I agree that this is not solvable in log4net itself. I might agree that it is cheaper or easier to work around the issue (and that might be the best thing to do), but I suspect there is a way to cope with this if the right domain expert were to understand the problem correctly. It seems like this is a valid bug, just with a workaround instead of a fix.
The general agreement is that this issue is better solved with workarounds. Therefore I just commited a new FAQ entry to the website as revision 839974:
If you run into this issue please follow the steps there. If those do not solve your problem or you think there is something missing in the FAQ entry or you know a log4net-only solution that works feel free to reopen the bug and post a patch.
What appenders are you using? Maybe you have an exclusive lock on something that is still being kept after the restart. Have you tried using the AdoNetAppender and/or appending the processid to your filename so every restart gets its own file? | https://issues.apache.org/jira/browse/LOG4NET-178?focusedCommentId=13504094&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-22 | refinedweb | 2,520 | 58.69 |
Investors in Sony Corp (Symbol: SNE) saw new options begin trading today, for the September 18th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the SNE options chain for the new September 18th contracts and identified one put and one call contract of particular interest.
The put contract at the $70.00 strike price has a current bid of $1.65. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $70.00, but will also collect the premium, putting the cost basis of the shares at $68.35 (before broker commissions). To an investor already interested in purchasing shares of SNE, that could represent an attractive alternative to paying $76.55.36% return on the cash commitment, or 13.44% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Sony Corp, and highlighting in green where the $70.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $77.50 strike price has a current bid of $3.60. If an investor was to purchase shares of SNE stock at the current price level of $76.55/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $77.50. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 5.94% if the stock gets called away at the September 18's trailing twelve month trading history, with the $77.50 strike highlighted in red:
Considering the fact that the $77.70% boost of extra return to the investor, or 26.82% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 38%, while the implied volatility in the call contract example is 35%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 252 trading day closing values as well as today's price of $76.55) to be 32%.. | https://www.nasdaq.com/articles/interesting-sne-put-and-call-options-for-september-18th-2020-07-16 | CC-MAIN-2021-43 | refinedweb | 361 | 65.01 |
!
Hi- this program is supposed to generate 100 random numbers between 100 and 1000, print out a message every 10th number printed to screen, and sum the total of the numbers. It does generate "random" numbers, but the message is not showing up and the numbers are not totalled at the end. Someone please help!
using System; using System.Collections.Generic; using System.Linq; using System.Text; /*This program finds 100 3 digit numbers between 100 and 1000, prints the * "Cha, Cha, Cha!" after every 10th number, and outputs the sum of the numbers */ namespace MikeVertreeseRandom { class RandomNumbers //using the random class for number generation { static void Main(string[] args) { Random r = new Random(); int number = r.Next(100, 1000); //r.Next() finds the next random # bet 100 and 1000 int numberTotal = number; //declaring the variable "numberTotal" for the sum numberTotal += number; //the sum increases by new number with each pass int i = 1; //i is the index counter for (i = 1; i <= 100; i++) //the program will run through 100 iterations Console.WriteLine(r.Next(100, 1000)); //program prints the next random # numberTotal += number; //need to keep a running sum of the numbers found if ((i % 10) == 0) //every 10th iteration, do something { Console.WriteLine("Cha, Cha, Cha!"); //prints this message ea 10th number } Console.WriteLine("The sum is: ", +numberTotal); Console.ReadLine(); } } } | https://www.daniweb.com/programming/software-development/threads/428127/random-number-generation-in-c | CC-MAIN-2018-34 | refinedweb | 223 | 56.55 |
Actions and Tasks¶
Actions: A Start and a Finish¶
A higher-level construct than messages is the concept of an action. An action can be started, and then finishes either successfully or with some sort of an exception. Success in this case simply means no exception was thrown; the result of an action may be a successful response saying “this did not work”. Log messages are emitted for action start and finish.
Actions are also nested; one action can be the parent of another.
An action’s parent is deduced from the Python call stack and context managers like
Action.context().
Log messages will also note the action they are part of if they can deduce it from the call stack.
The result of all this is that you can trace the operation of your code as it logs various actions, and see a narrative of what happened and what caused it to happen.
Logging Actions¶
Here’s a basic example of logging an action:
from eliot import start_action with start_action(action_type=u"store_data"): x = get_data() store_data(x)
This will log an action start message and if the block finishes successfully an action success message. If an exception is thrown by the block then an action failure message will be logged along with the exception type and reason as additional fields. Each action thus results in two messages being logged: at the start and finish of the action. No traceback will be logged so if you want a traceback you will need to do so explicitly. Notice that the action has a name, with a subsystem prefix. Again, this should be a logical name.
Note that all code called within this block is within the context of this action.
While running the block of code within the
with statement new actions created with
start_action will get the top-level
start_action as their parent.
Logging Functions¶
If you want to log the inputs and results of a function, you can use the
log_call decorator:
from eliot import log_call @log_call def calculate(x, y): return x * y
This will log an action of type
calculate with arguments
x and
y, as well as logging the result.
You can also customize the output:
from eliot import log_call @log_call(action_type="CALC", include_args=["x"], include_result=False) def calculate(x, y): return x * y
This changes the action type to
CALC, logs only the
x argument, and doesn’t log the result.
Tasks: Top-level Actions¶
A top-level action with no parent is called a task, the root cause of all its child actions.
E.g. a web server receiving a new HTTP request would create a task for that new request.
Log messages emitted from Eliot are therefore logically structured as a forest: trees of actions with tasks at the root.
If you want to ignore the context and create a top-level task you can use the
eliot.start_task API.
From Actions to Messages¶
While the logical structure of log messages is a forest of actions, the actual output is effectively a list of dictionaries (e.g. a series of JSON messages written to a file). To bridge the gap between the two structures each output message contains special fields expressing the logical relationship between it and other messages:
task_uuid: The unique identifier of the task (top-level action) the message is part of.
task_level: The specific location of this message within the task’s tree of actions. For example,
[3, 2, 4]indicates the message is the 4th child of the 2nd child of the 3rd child of the task.
Consider the following code sample:
from eliot import start_action, start_task, Message with start_task(action_type="parent"): Message.log(message_type="info", x=1) with start_action(action_type="child"): Message.log(message_type="info", x=2) raise RuntimeError("ono")
All these messages will share the same UUID in their
task_uuid field, since they are all part of the same high-level task.
If you sort the resulting messages by their
task_level you will get the tree of messages:
task_level=[1] action_type="parent" action_status="started" task_level=[2] message_type="info" x=1 task_level=[3, 1] action_type="child" action_status="started" task_level=[3, 2] message_type="info" x=2 task_level=[3, 3] action_type="child" action_status="succeeded" task_level=[4] action_type="parent" action_status="failed" exception="exceptions.RuntimeError" reason="ono"
Action Fields¶
You can add fields to both the start message and the success message of an action.
from eliot import start_action with start_action(action_type=u"yourapp:subsystem:frob", # Fields added to start message only: key=123, foo=u"bar") as action: x = _beep(123) result = frobinate(x) # Fields added to success message only: action.add_success_fields(result=result)
If you want to include some extra information in case of failures beyond the exception you can always log a regular message with that information. Since the message will be recorded inside the context of the action its information will be clearly tied to the result of the action by the person (or code!) reading the logs later on.
Using Generators¶
Generators (functions with
yield) and context managers (
with X:) don’t mix well in Python.
So if you’re going to use
with start_action() in a generator, just make sure it doesn’t wrap a
yield and you’ll be fine.
Here’s what you SHOULD NOT DO:
def generator(): with start_action(action_type="x"): # BAD! DO NOT yield inside a start_action() block: yield make_result()
Here’s what can do instead:
def generator(): with start_action(action_type="x"): result = make_result() # This is GOOD, no yield inside the start_action() block: yield result
Non-Finishing Contexts¶
Sometimes you want to have the action be the context for other messages but not finish automatically when the block finishes.
You can do so with
Action.context().
You can explicitly finish an action by calling
eliot.Action.finish.
If called with an exception it indicates the action finished unsuccessfully.
If called with no arguments it indicates that the action finished successfully.
from eliot import start_action action = start_action(action_type=u"yourapp:subsystem:frob") try: with action.context(): x = _beep() with action.context(): frobinate(x) # Action still isn't finished, need to so explicitly. except FrobError as e: action.finish(e) else: action.finish()
The
context() method returns the
Action:
from eliot import start_action with start_action(action_type=u"your_type").context() as action: # do some stuff... action.finish()
You shouldn’t log within an action’s context after it has been finished:
from eliot import start_action, Message with start_action(action_type=u"message_late").context() as action: Message.log(message_type=u"ok") # finish the action: action.finish() # Don't do this! This message is being added to a finished action! Message.log(message_type=u"late")
As an alternative to
with, you can also explicitly run a function within the action context:
from eliot import start_action action = start_action(action_type=u"yourapp:subsystem:frob") # Call do_something(x=1) in context of action, return its result: result = action.run(do_something, x=1)
Getting the Current Action¶
Sometimes it can be useful to get the current action.
For example, you might want to record the current task UUID for future reference, in a bug report for example.
You might also want to pass around the
Action explicitly, rather than relying on the implicit context.
You can get the current
Action by calling
eliot.current_action().
For example:
from eliot import current_action def get_current_uuid(): return current_action().task_uuid | https://eliot.readthedocs.io/en/latest/generating/actions.html | CC-MAIN-2022-40 | refinedweb | 1,223 | 53.61 |
Common functions in Python are grouped under modules. If the module is not found in python, the error “ModuleNotFoundError: No module named ‘Decimal'” error will be thrown. In this post, what this exception is and how to fix it if it happens.
The modules in the python have one or more functions. Commonly used functions are created and grouped into modules. Some of the modules are part of the python that will come with the installation of the python. They’re all core modules.
Custom modules are created by the user. Commonly used functions are added to these custom modules. These modules are used for all projects used by the user. These modules are company-specific or domain-specific. It helps to re-use the code.
Traceback (most recent call last): File "D:\pythonexample.py", line 2, in <module> import Decimal ModuleNotFoundError: No module named 'Decimal' >>>
Root Cause
Modules are plugged into the code using the import statement. If there is any issue in the import statement, this ModuleNotFoundError error will occur. The python program attempts to link the module specified in the import statement. If the program is unable to link the module for any reason, this error will be shown in the console window.
The reasoncould be that the name of the module is incorrect. Otherwise, the module is not included in the library of python modules. There might be a linking problem with the modules. There are several reasons for this error.
How to reproduce this issue
The easy way to replicate this problem is to import a module that does not exist in the python. In the python program, add a name for the module that is wrong. In the code below, the import statement includes the name of the module as Decimal. Decimal does not exist in the python as a module.
import math import Decimal Decimal(math.factorial(171))
Solution 1
In the python program, check the import statement. If the name of the module is wrong, change the name of the module. The correct name of the module will link to the python module. This will correct the error.
import math from decimal import * Decimal(math.factorial(171))
Solution 2
If the name of the module in the import statement is right, test the paths linking the python installation and module. Make sure the module is available for installation on the python. In the case of custom module, make sure that all steps are followed.
1 Comment | https://www.yawintutor.com/modulenotfounderror-no-module-named-decimal/ | CC-MAIN-2021-39 | refinedweb | 414 | 67.96 |
Introduction
"Repetition renders the ridiculous reasonable." -- Norman Wildberger
This document is a tutorial for the advanced constructs of the Nim programming language. Note that this document is somewhat obsolete as the manual contains many more examples of the advanced language features.
Pragmas
Pragmas are Nim's method to give the compiler additional information/ commands without introducing a massive number of new keywords. Pragmas are enclosed in the special {. and .} curly dot brackets. This tutorial does not cover pragmas. See the manual or user guide for a description of the available pragmas.
Object Oriented Programming
While Nim's support for object oriented programming (OOP) is minimalistic, powerful OOP techniques can be used. OOP is seen as one way to design a program, not the only way. Often a procedural approach leads to simpler and more efficient code. In particular, preferring composition over inheritance is often the better design.
Inheritance.
type Person = ref object of RootObj name*: string # the * means that `name` is accessible from other modules age: int # no * means that the field is hidden from other modules Student = ref object of Person # Student inherits from Person id: int # with an id field var student: Student person: Person assert(student of Student) # is true # object construction: student = Student(name: "Anton", age: 5, id: 2) echo student[]
Inheritance is done with the object of syntax. Multiple inheritance is currently not supported. If an object type has no suitable ancestor, RootObj can be used as its ancestor, but this is only a convention. Objects that have no ancestor are implicitly final. You can use the inheritable pragma to introduce new object roots apart from system.RootObj. (This is used in the GTK wrapper for instance.)
Ref objects should be used whenever inheritance is used. It isn't strictly necessary, but with non-ref objects, assignments such as let person: Person = Student(id: 123) will truncate subclass fields.
Note: Composition (has-a relation) is often preferable to inheritance (is-a relation) for simple code reuse. Since objects are value types in Nim, composition is as efficient as inheritance.
Mutually recursive types
Objects, tuples and references can model quite complex data structures which depend on each other; they are mutually recursive. In Nim these types can only be declared within a single type section. (Anything else would require arbitrary symbol lookahead which slows down compilation.)
Example:
type Node = ref object # a reference to an object with the following field: le, ri: Node # left and right subtrees sym: ref Sym # leaves contain a reference to a Sym Sym = object # a symbol name: string # the symbol's name line: int # the line the symbol was declared in code: Node # the symbol's abstract syntax tree
Type conversions
Nim distinguishes between type casts and type conversions. Casts are done with the cast operator and force the compiler to interpret a bit pattern to be of another type.
Type conversions are a much more polite way to convert a type into another: They preserve the abstract value, not necessarily the bit-pattern. If a type conversion is not possible, the compiler complains or an exception is raised.
The syntax for type conversions is destination_type(expression_to_convert) (like an ordinary call):
proc getID(x: Person): int = Student(x).id
The InvalidObjectConversionDefect exception is raised if x is not a Student. var n = Node(kind: nkFloat, floatVal: 1.0) # the following statement raises an `FieldDefect` exception, because # n.kind's value does not fit: n.strVal = ""
As can been seen from the example, an advantage to an object hierarchy is that no conversion between different object types is needed. Yet, access to invalid object fields raises an exception.
Method call syntax
There is a syntactic sugar for calling routines: The syntax obj.methodName(args) can be used instead of methodName(obj, args). If there are no remaining arguments, the parentheses can be omitted: obj.len (instead of len(obj)).
This method call syntax is not restricted to objects, it can be used for any type:
import std/strutils echo "abc".len # is the same as echo len("abc") echo "abc".toUpperAscii() echo({'a', 'b', 'c'}.card) stdout.writeLine("Hallo") # the same as writeLine(stdout, "Hallo")
(Another way to look at the method call syntax is that it provides the missing postfix notation.)
So "pure object oriented" code is easy to write:
import std/[strutils, sequtils] stdout.writeLine("Give a list of numbers (separated by spaces): ") stdout.write(stdin.readLine.splitWhitespace.map(parseInt).max.`$`) stdout.writeLine(" is the maximum!")
Properties
As the above example shows, Nim has no need for get-properties: Ordinary get-procedures that are called with the method call syntax achieve the same. But setting a value is different; for this a special setter syntax is needed:
type Socket* = ref object of RootObj h: int # cannot be accessed from the outside of the module due to missing star proc `host=`*(s: var Socket, value: int) {.inline.} = ## setter of host address s.h = value proc host*(s: Socket): int {.inline.} = ## getter of host address s.h var s: Socket new s s.host = 34 # same as `host=`(s, 34)
(The example also shows inline procedures.)
The [] array access operator can be overloaded to provide array properties:
type Vector* = object x, y, z: float proc `[]=`* (v: var Vector, i: int, value: float) = # setter case i of 0: v.x = value of 1: v.y = value of 2: v.z = value else: assert(false) proc `[]`* (v: Vector, i: int): float = # getter case i of 0: result = v.x of 1: result = v.y of 2: result = v.z else: assert(false)
The example is silly, since a vector is better modelled by a tuple which already provides v[] access.
Dynamic dispatch
Procedures always use static dispatch. For dynamic dispatch replace the proc keyword by method:
type Expression = ref object of RootObj ## abstract base class for an expression Literal = ref object of Expression x: int PlusExpr = ref object of Expression a, b: Expression # watch out: 'eval' relies on dynamic binding method eval(e: Expression): int {.base.} = # override this base method quit "to override!" method eval(e: Literal): int = e.x method eval(e: PlusExpr): int = eval(e.a) + eval(e.b) proc newLit(x: int): Literal = Literal(x: x) proc newPlus(a, b: Expression): PlusExpr = PlusExpr(a: a, b: b) echo eval(newPlus(newPlus(newLit(1), newLit(2)), newLit(4)))
Note that in the example the constructors newLit and newPlus are procs because it makes more sense for them to use static binding, but eval is a method because it requires dynamic binding.
Note: Starting from Nim 0.20, to use multi-methods one must explicitly pass --multimethods:on when compiling.
As the example demonstrates, invocation of a multi-method cannot be ambiguous: Collide 2 is preferred over collide 1 because the resolution works from left to right. Thus.
Exceptions
In Nim exceptions are objects. By convention, exception types are suffixed with 'Error'. The system module defines an exception hierarchy that you might want to stick to. Exceptions derive from system.Exception, which provides the common interface.
Exceptions have to be allocated on the heap because their lifetime is unknown. The compiler will prevent you from raising an exception created on the stack. All raised exceptions should at least specify the reason for being raised in the msg field.
A convention is that exceptions should be raised in exceptional cases, they should not be used as an alternative method of control flow.
Raise statement
Raising an exception is done with the raise statement:
var e: ref OSError new(e) e.msg = "the request to the OS failed" raise e
If the raise keyword is not followed by an expression, the last exception is re-raised. For the purpose of avoiding repeating this common code pattern, the template newException in the system module can be used:
raise newException(OSError, "the request to the OS failed")
Try statement
The try statement handles exceptions:
from std/strutils import parseInt # read the first two lines of a text file that should contain numbers # and tries to add them var f: File if open(f, "numbers.txt"): try: let a = readLine(f) let b = readLine(f) echo "sum: ", parseInt(a) + parseInt(b) except OverflowDefect: echo "overflow!" except ValueError: echo "could not convert string to integer" except IOError: echo "IO error!" except: echo "Unknown exception!" # reraise the unknown exception: raise finally: close(f)
The statements after the try are executed unless an exception is raised. Then the appropriate except part is executed.
The empty except part is executed if there is an exception that is not explicitly listed. It is similar to an else part in if statements.
If there is a finally part, it is always executed after the exception handlers.
The exception is consumed in an except part. If an exception is not handled, it is propagated through the call stack. This means that often the rest of the procedure - that is not within a finally clause - is not executed (if an exception occurs).
If you need to access the actual exception object or message inside an except branch you can use the getCurrentException() and getCurrentExceptionMsg() procs from the system module. Example:
try: doSomethingHere() except: let e = getCurrentException() msg = getCurrentExceptionMsg() echo "Got exception ", repr(e), " with message ", msg
Annotating procs with raised exceptions
Through the use of the optional {.raises.} pragma you can specify that a proc is meant to raise a specific set of exceptions, or none at all. If the {.raises.} pragma is used, the compiler will verify that this is true. For instance, if you specify that a proc raises IOError, and at some point it (or one of the procs it calls) starts raising a new exception the compiler will prevent that proc from compiling. Usage example:
proc complexProc() {.raises: [IOError, ArithmeticDefect].} = ... proc simpleProc() {.raises: [].} = ...
Once you have code like this in place, if the list of raised exception changes the compiler will stop with an error specifying the line of the proc which stopped validating the pragma and the raised exception not being caught, along with the file and line where the uncaught exception is being raised, which may help you locate the offending code which has changed.
If you want to add the {.raises.} pragma to existing code, the compiler can also help you. You can add the {.effects.} pragma statement to your proc and the compiler will output all inferred effects up to that point (exception tracking is part of Nim's effect system). Another more roundabout way to find out the list of exceptions raised by a proc is to use the Nim doc command which generates documentation for a whole module and decorates all procs with the list of raised exceptions. You can read more about Nim's effect system and related pragmas in the manual.
Generics
Generics are Nim's means to parametrize procs, iterators or types with type parameters. Generic parameters are written within square brackets, for example Foo[T]. They are most useful for efficient type safe containers:
type BinaryTree*[T] = ref object # BinaryTree is a generic type with # generic param `T` le, ri: BinaryTree[T] # left and right subtrees; may be nil data: T # the data stored in a node proc newNode*[T](data: T): BinaryTree[T] = # constructor for a node new(result) result.data = data proc add*[T](root: var BinaryTree[T], n: BinaryTree[T]) = # insert a node into the tree if root == nil: root = n else: var it = root while it != nil: # compare the data items; uses the generic `cmp` proc # that works for any type that has a `==` and `<` operator var c = cmp(it.data, n.data) if c < 0: if it.le == nil: it.le = n return it = it.le else: if it.ri == nil: it.ri = n return it = it.ri proc add*[T](root: var BinaryTree[T], data: T) = # convenience proc: add(root, newNode(data)) iterator preorder*[T](root: BinaryTree[T]): T = # Preorder traversal of a binary tree. # This uses an explicit stack (which is more efficient than # a recursive iterator factory). var stack: seq[BinaryTree[T]] = @[root] while stack.len > 0: var n = stack.pop() while n != nil: yield n.data add(stack, n.ri) # push right subtree onto the stack n = n.le # and follow the left pointer var root: BinaryTree[string] # instantiate a BinaryTree with `string` add(root, newNode("hello")) # instantiates `newNode` and `add` add(root, "world") # instantiates the second `add` proc for str in preorder(root): stdout.writeLine(str)
The example shows a generic binary tree. Depending on context, the brackets are used either to introduce type parameters or to instantiate a generic proc, iterator or type. As the example shows, generics work with overloading: the best match of add is used. The built-in add procedure for sequences is not hidden and is used in the preorder iterator.
There is a special [:T] syntax when using generics with the method call syntax:
proc foo[T](i: T) = discard var i: int # i.foo[int]() # Error: expression 'foo(i)' has no type (or is ambiguous) i.foo[:int]() # Success
Templates
Templates are a simple substitution mechanism that operates on Nim's abstract syntax trees. Templates are processed in the semantic pass of the compiler. They integrate well with the rest of the language and share none of C's preprocessor macros flaws.
To invoke a template, call it like a procedure.
Example:
template `!=` (a, b: untyped): untyped = # this definition exists in the System module not (a == b) assert(5 != 6) # the compiler rewrites that to: assert(not (5 == 6))
The !=, >, >=, in, notin, isnot operators are in fact templates: this has the benefit that if you overload the == operator, the != operator is available automatically and does the right thing. (Except for IEEE floating point numbers - NaN breaks basic boolean logic.)
a > b is transformed into b < a. a in b is transformed into contains(b, a). notin and isnot have the obvious meanings.
Templates are especially useful for lazy evaluation purposes. Consider a simple proc for logging:
const debug = true proc log(msg: string) {.inline.} = if debug: stdout.writeLine(msg) var x = 4 log("x has the value: " & $x)
This code has a shortcoming: if debug is set to false someday, the quite expensive $ and & operations are still performed! (The argument evaluation for procedures is eager).
Turning the log proc into a template solves this problem:
const debug = true template log(msg: string) = if debug: stdout.writeLine(msg) var x = 4 log("x has the value: " & $x)
The parameters' types can be ordinary types or the meta types untyped, typed, or type. type suggests that only a type symbol may be given as an argument, and untyped means symbol lookups and type resolution is not performed before the expression is passed to the template.
If the template has no explicit return type, void is used for consistency with procs and methods.
To pass a block of statements to a template, use untyped for the last parameter:
template withFile(f: untyped, filename: string, mode: FileMode, body: untyped) = let fn = filename var f: File if open(f, fn, mode): try: body finally: close(f) else: quit("cannot open: " & fn) withFile(txt, "ttempl3.txt", fmWrite): txt.writeLine("line 1") txt.writeLine("line 2")
In the example the two writeLine statements are bound to the body parameter. The withFile template contains boilerplate code and helps to avoid a common bug: to forget to close the file. Note how the let fn = filename statement ensures that filename is evaluated only once.
Example: Lifting Procs
import std/math template liftScalarProc(fname) = ## Lift a proc taking one scalar parameter and returning a ## scalar value (eg `proc sssss[T](x: T): float`), ## to provide templated procs that can handle a single ## parameter of seq[T] or nested seq[seq[]] or the same type ## ## .. code-block:: Nim ## liftScalarProc(abs) ## # now abs(@[@[1,-2], @[-2,-3]]) == @[@[1,2], @[2,3]] proc fname[T](x: openarray[T]): auto = var temp: T type outType = typeof(fname(temp)) result = newSeq[outType](x.len) for i in 0..<x.len: result[i] = fname(x[i]) liftScalarProc(sqrt) # make sqrt() work for sequences echo sqrt(@[4.0, 16.0, 25.0, 36.0]) # => @[2.0, 4.0, 5.0, 6.0]
Compilation to JavaScript
Nim code can be compiled to JavaScript. However in order to write JavaScript-compatible code you should remember the following:
- addr and ptr have slightly different semantic meaning in JavaScript. It is recommended to avoid those if you're not sure how they are translated to JavaScript.
- cast[T](x) in JavaScript is translated to (x), except for casting between signed/unsigned ints, in which case it behaves as static cast in C language.
- cstring in JavaScript means JavaScript string. It is a good practice to use cstring only when it is semantically appropriate. E.g. don't use cstring as a binary data buffer.
Part 3
The next part is entirely about metaprogramming via macros: Part III | https://nim-lang.github.io/Nim/tut2.html | CC-MAIN-2021-39 | refinedweb | 2,821 | 56.15 |
Passing Functions to Components
How do I pass an event handler (like onClick) to a component?
Pass event handlers and other functions as props to child components:
<button onClick={this.handleClick}>
If you need to have access to the parent component in the handler, you also need to bind the function to the component instance (see below).
How do I bind a function to a component instance?
There are several ways to make sure functions have access to component attributes like
this.props and
this.state, depending on which syntax and build steps you are using.
Bind in Constructor (ES2015)
class Foo extends Component { constructor(props) { super(props); this.handleClick = this.handleClick.bind(this); } handleClick() { console.log('Click happened'); } render() { return <button onClick={this.handleClick}>Click Me</button>; } }
Class Properties (Stage 3 Proposal)
class Foo extends Component { // Note: this syntax is experimental and not standardized yet. handleClick = () => { console.log('Click happened'); } render() { return <button onClick={this.handleClick}>Click Me</button>; } }
Bind in Render
class Foo extends Component { handleClick() { console.log('Click happened'); } render() { return <button onClick={this.handleClick.bind(this)}>Click Me</button>; } }
Note:
Using
Function.prototype.bindin render creates a new function each time the component renders, which may have performance implications (see below).
Arrow Function in Render
class Foo extends Component { handleClick() { console.log('Click happened'); } render() { return <button onClick={() => this.handleClick()}>Click Me</button>; } }
Note:
Using an arrow function in render creates a new function each time the component renders, which may break optimizations based on strict identity comparison.
Is it OK to use arrow functions in render methods?
Generally speaking, yes, it is OK, and it is often the easiest way to pass parameters to callback functions.
If you do have performance issues, by all means, optimize!
Why is binding necessary at all?
In JavaScript, these two code snippets are not equivalent:
obj.method();
var method = obj.method; method();
Binding methods helps ensure that the second snippet works the same way as the first one.
With React, typically you only need to bind the methods you pass to other components. For example,
<button onClick={this.handleClick}> passes
this.handleClick so you want to bind it. However, it is unnecessary to bind the
render method or the lifecycle methods: we don’t pass them to other components.
This post by Yehuda Katz explains what binding is, and how functions work in JavaScript, in detail.
Why is my function being called every time the component renders?
Make sure you aren’t calling the function when you pass it to the component:
render() { // Wrong: handleClick is called instead of passed as a reference! return <button onClick={this.handleClick()}>Click Me</button> }
Instead, pass the function itself (without parens):
render() { // Correct: handleClick is passed as a reference! return <button onClick={this.handleClick}>Click Me</button> }
How do I pass a parameter to an event handler or callback?
You can use an arrow function to wrap around an event handler and pass parameters:
<button onClick={() => this.handleClick(id)} />
This is equivalent to calling
.bind:
<button onClick={this.handleClick.bind(this, id)} />
Example: Passing params using arrow functions
const A = 65 // ASCII character code class Alphabet extends React.Component { constructor(props) { super(props); this.state = { justClicked: null, letters: Array.from({length: 26}, (_, i) => String.fromCharCode(A + i)) }; } handleClick(letter) { this.setState({ justClicked: letter }); } render() { return ( <div> Just clicked: {this.state.justClicked} <ul> {this.state.letters.map(letter => <li key={letter} onClick={() => this.handleClick(letter)}> {letter} </li> )} </ul> </div> ) } }
Example: Passing params using data-attributes
Alternately, you can use DOM APIs to store data needed for event handlers. Consider this approach if you need to optimize a large number of elements or have a render tree that relies on React.PureComponent equality checks.
const A = 65 // ASCII character code class Alphabet extends React.Component { constructor(props) { super(props); this.handleClick = this.handleClick.bind(this); this.state = { justClicked: null, letters: Array.from({length: 26}, (_, i) => String.fromCharCode(A + i)) }; } handleClick(e) { this.setState({ justClicked: e.target.dataset.letter }); } render() { return ( <div> Just clicked: {this.state.justClicked} <ul> {this.state.letters.map(letter => <li key={letter} data-letter={letter} onClick={this.handleClick}> {letter} </li> )} </ul> </div> ) } }. This can be done by using:
- throttling: sample changes based on a time based frequency (eg
_.throttle)
- debouncing: publish changes after a period of inactivity (eg
_.debounce)
requestAnimationFramethrottling: sample changes based on
requestAnimationFrame(eg
raf-schd)
See this visualization for a comparison of
throttle and
debounce functions.
Note:
_.debounce,
_.throttleand
raf-schdprovide a
cancelmethod to cancel delayed callbacks. You should either call this method from
componentWillUnmountor check to ensure that the component is still mounted within the delayed function.
Throttle
Throttling prevents a function from being called more than once in a given window of time. The example below throttles a “click” handler to prevent calling it more than once per second.
import throttle from 'lodash.throttle'; class LoadMoreButton extends React.Component { constructor(props) { super(props); this.handleClick = this.handleClick.bind(this); this.handleClickThrottled = throttle(this.handleClick, 1000); } componentWillUnmount() { this.handleClickThrottled.cancel(); } render() { return <button onClick={this.handleClickThrottled}>Load More</button>; } handleClick() { this.props.loadMore(); } }
Debounce
Debouncing ensures that a function will not be executed until after a certain amount of time has passed since it was last called. This can be useful when you have to perform some expensive calculation in response to an event that might dispatch rapidly (eg scroll or keyboard events). The example below debounces text input with a 250ms delay.
import debounce from 'lodash.debounce'; class Searchbox extends React.Component { constructor(props) { super(props); this.handleChange = this.handleChange.bind(this); this.emitChangeDebounced = debounce(this.emitChange, 250); } componentWillUnmount() { this.emitChangeDebounced.cancel(); } render() { return ( <input type="text" onChange={this.handleChange} placeholder="Search..." defaultValue={this.props.value} /> ); } handleChange(e) { this.emitChangeDebounced(e.target.value); } emitChange(value) { this.props.onChange(value); } }
requestAnimationFrame throttling
requestAnimationFrame is a way of queuing a function to be executed in the browser at the optimal time for rendering performance. A function that is queued with
requestAnimationFrame will fire in the next frame. The browser will work hard to ensure that there are 60 frames per second (60 fps). However, if the browser is unable to it will naturally limit the amount of frames in a second. For example, a device might only be able to handle 30 fps and so you will only get 30 frames in that second. Using
requestAnimationFrame for throttling is a useful technique in that it prevents you from doing more than 60 updates in a second. If you are doing 100 updates in a second this creates additional work for the browser that the user will not see anyway.
Note:
Using this technique will only capture the last published value in a frame. You can see an example of how this optimization works on
MDN
import rafSchedule from 'raf-schd'; class ScrollListener extends React.Component { constructor(props) { super(props); this.handleScroll = this.handleScroll.bind(this); // Create a new function to schedule updates. this.scheduleUpdate = rafSchedule( point => this.props.onScroll(point) ); } handleScroll(e) { // When we receive a scroll event, schedule an update. // If we receive many updates within a frame, we'll only publish the latest value. this.scheduleUpdate({ x: e.clientX, y: e.clientY }); } componentWillUnmount() { // Cancel any pending updates since we're unmounting. this.scheduleUpdate.cancel(); } render() { return ( <div style={{ overflow: 'scroll' }} onScroll={this.handleScroll} > <img src="/my-huge-image.jpg" /> </div> ); } }
Testing your rate limiting
When testing your rate limiting code works correctly it is helpful to have the ability to fast forward time. If you are using
jest then you can use
mock timers to fast forward time. If you are using
requestAnimationFrame throttling then you may find
raf-stub to be a useful tool to control the ticking of animation frames. | https://reactjs.org/docs/faq-functions.html | CC-MAIN-2021-39 | refinedweb | 1,292 | 52.56 |
Hi,
I want to access a link starting with clients_test on a particular webpage. I have the following code
client_re = re.compile(r"clients_test")
def get_client_url():
url = get_artifacts_url()
for link in get_links(url):
if client_re.search(link["href"]):
return "" + link["href"]
Sometimes the page has many links beginnng with clients_test for e.g: clients_test4, clients_test5, clients_test7. The above code just gives me the first link. so in this case i get clients_test4. I want it to return me all the links beginning with clients_test on the python shell so that the user can then specify the from which one he wants to download the exe file.
Kindly help me as i am from QA. | https://www.daniweb.com/programming/software-development/threads/282531/interactive-code | CC-MAIN-2018-13 | refinedweb | 114 | 84.98 |
I have followed the article about Write and read to cmd line from GUI on the post located here.
I have downloaded the files RedirectStandardOutput.zip posted by Diamonddrake. I found these to be exactly what i wanted to include in my own form and attempted to copy the code into my existing form. My problem is this:
Instead of this: this.console1 = new RedirectStandardOutput.Console();
I have: this.console1 = new System.Windows.Forms.TextBox();
My form is called MainForm1 and so I have renamed my namespace in Console1.cs from RedirectStandardOutput to MainForm1 to reflect this.
My form fails to find the reference.
In Form1.Designer.cs I have: private System.Windows.Forms.TextBox console1;
But in the downloaded files there is: private Console console1;
as well as other code relating to console.
I hope someone understands what I have done wrong and offer some tips. | https://www.daniweb.com/programming/software-development/threads/322101/write-and-read-to-cmd-line-from-gui-redirectstandardoutput | CC-MAIN-2016-50 | refinedweb | 147 | 61.73 |
ARM TCM (Tightly-Coupled Memory) handling in Linux¶
Written by Linus Walleij <linus.walleij@stericsson.com>
Some ARM SoCs have a so-called TCM (Tightly-Coupled Memory). This is usually just a few (4-64) KiB of RAM inside the ARM processor.
Due to being embedded inside the CPU, the TCM has a Harvard-architecture, so there is an ITCM (instruction TCM) and a DTCM (data TCM). The DTCM can not contain any instructions, but the ITCM can actually contain data. The size of DTCM or ITCM is minimum 4KiB so the typical minimum configuration is 4KiB ITCM and 4KiB DTCM.
ARM CPUs have special registers to read out status, physical location and size of TCM memories. arch/arm/include/asm/cputype.h defines a CPUID_TCM register that you can read out from the system control coprocessor. Documentation from ARM can be found at, search for "TCM Status Register" to see documents for all CPUs. Reading this register you can determine if ITCM (bits 1-0) and/or DTCM (bit 17-16) is present in the machine.
There is further a TCM region register (search for "TCM Region Registers" at the ARM site) that can report and modify the location size of TCM memories at runtime. This is used to read out and modify TCM location and size. Notice that this is not a MMU table: you actually move the physical location of the TCM around. At the place you put it, it will mask any underlying RAM from the CPU so it is usually wise not to overlap any physical RAM with the TCM.
The TCM memory can then be remapped to another address again using the MMU, but notice that the TCM if often used in situations where the MMU is turned off. To avoid confusion the current Linux implementation will map the TCM 1 to 1 from physical to virtual memory in the location specified by the kernel. Currently Linux will map ITCM to 0xfffe0000 and on, and DTCM to 0xfffe8000 and on, supporting a maximum of 32KiB of ITCM and 32KiB of DTCM.
Newer versions of the region registers also support dividing these TCMs in two separate banks, so for example an 8KiB ITCM is divided into two 4KiB banks with its own control registers. The idea is to be able to lock and hide one of the banks for use by the secure world (TrustZone).
TCM is used for a few things:
- FIQ and other interrupt handlers that need deterministic timing and cannot wait for cache misses.
- Idle loops where all external RAM is set to self-refresh retention mode, so only on-chip RAM is accessible by the CPU and then we hang inside ITCM waiting for an interrupt.
- Other operations which implies shutting off or reconfiguring the external RAM controller.
There is an interface for using TCM on the ARM architecture in <asm/tcm.h>. Using this interface it is possible to:
- Define the physical address and size of ITCM and DTCM.
- Tag functions to be compiled into ITCM.
- Tag data and constants to be allocated to DTCM and ITCM.
- Have the remaining TCM RAM added to a special allocation pool with
gen_pool_create()and
gen_pool_add()and provice tcm_alloc() and tcm_free() for this memory. Such a heap is great for things like saving device state when shutting off device power domains.
A machine that has TCM memory shall select HAVE_TCM from arch/arm/Kconfig for itself. Code that needs to use TCM shall #include <asm/tcm.h>
Functions to go into itcm can be tagged like this: int __tcmfunc foo(int bar);
Since these are marked to become long_calls and you may want to have functions called locally inside the TCM without wasting space, there is also the __tcmlocalfunc prefix that will make the call relative.
Variables to go into dtcm can be tagged like this:
int __tcmdata foo;
Constants can be tagged like this:
int __tcmconst foo;
To put assembler into TCM just use:
.section ".tcm.text" or .section ".tcm.data"
respectively.
Example code:
#include <asm/tcm.h> /* Uninitialized data */ static u32 __tcmdata tcmvar; /* Initialized data */ static u32 __tcmdata tcmassigned = 0x2BADBABEU; /* Constant */ static const u32 __tcmconst tcmconst = 0xCAFEBABEU; static void __tcmlocalfunc tcm_to_tcm(void) { int i; for (i = 0; i < 100; i++) tcmvar ++; } static void __tcmfunc hello_tcm(void) { /* Some abstract code that runs in ITCM */ int i; for (i = 0; i < 100; i++) { tcmvar ++; } tcm_to_tcm(); } static void __init test_tcm(void) { u32 *tcmem; int i; hello_tcm(); printk("Hello TCM executed from ITCM RAM\n"); printk("TCM variable from testrun: %u @ %p\n", tcmvar, &tcmvar); tcmvar = 0xDEADBEEFU; printk("TCM variable: 0x%x @ %p\n", tcmvar, &tcmvar); printk("TCM assigned variable: 0x%x @ %p\n", tcmassigned, &tcmassigned); printk("TCM constant: 0x%x @ %p\n", tcmconst, &tcmconst); /* Allocate some TCM memory from the pool */ tcmem = tcm_alloc(20); if (tcmem) { printk("TCM Allocated 20 bytes of TCM @ %p\n", tcmem); tcmem[0] = 0xDEADBEEFU; tcmem[1] = 0x2BADBABEU; tcmem[2] = 0xCAFEBABEU; tcmem[3] = 0xDEADBEEFU; tcmem[4] = 0x2BADBABEU; for (i = 0; i < 5; i++) printk("TCM tcmem[%d] = %08x\n", i, tcmem[i]); tcm_free(tcmem, 20); } } | https://dri.freedesktop.org/docs/drm/arm/tcm.html | CC-MAIN-2020-24 | refinedweb | 844 | 56.49 |
Calling Methods Inside Your App From Appium
Appium is traditionally considered a "black box" testing tool, meaning it has no access to your application's internal methods or state. We use Appium correctly by thinking like a user would (interacting with the surface of the app), not thinking like an app developer would (calling internal code directly).
Black box testing has its limitations, primarily in requiring the automation to go through user steps many times, even when it would be more convenient to skip to a certain known state. This is one reason that some modern testing technologies, like Espresso, allow for a white box testing approach, where internal app methods are accessible from the automation context.
Thanks to Appium's Espresso driver, Appium can now take advantage of this approach. (If you recall, Espresso is also what enabled us to make our elements flash on screen). To make this more general white box strategy work, you need two things:
- Knowledge of a particular public method located on your Android application, activity, or UI element. This is the method that your test script will ultimately trigger in the course of your automation. You can either code this method up yourself or ping your Android app developer to add one that meets your specifications.
- The new
mobile: backdoormethod available on the Appium Espresso driver. This is the actual method you will call in your test code, which will tell the Espresso driver what to run inside your app. (It's called "backdoor" because Appium is getting inside of your app through the "back door" of Espresso, not the "front door" of the UI that a user would use. This approach was first publicly suggested by Rajdeep Varma in his AppiumConf 2018 talk, and Rajdeep was the one who contributed the code to make it a reality in Appium today. Thanks!)
To illustrate how this works, I've added a method to the application class of The App, in its
MainApplication.java:
public void raiseToast(String message) {
Toast.makeText(this, message, Toast.LENGTH_LONG).show();
}
This method simply takes an arbitrary string and uses it to make an Android Toast message appear on the screen. The result of calling this method looks like this:
With a normal Appium test, there's no way I would be able to trigger this toast to appear, unless the developer had hooked it up to a text field and a button. But with
mobile: backdoor, I can simply designate the name of the method I want to call in my app, the types and values of its parameters, and off we go! So here's how I would do this in Java:
ImmutableMap<String, Object> scriptArgs = ImmutableMap.of(
"target", "application",
"methods", Arrays.asList(ImmutableMap.of(
"name", "raiseToast",
"args", Arrays.asList(ImmutableMap.of(
"value", "Hello from the test script!",
"type", "String"
))
))
);
driver.executeScript("mobile: backdoor", scriptArgs);
It's a little verbose, so let's look at the parameter I'm passing to
mobile: backdoor as a JSON object instead:
{
"target": "application",
"methods": [{
"name": "raiseToast",
"args": [{
"value": "Hello from the test script!",
"type": "String"
}]
}]
}
I specify two main bits of information: the target of my backdoor (which type of thing am I calling the method on), and the methods I want to call on it. In this case I have implemented my method on the application class, so I specify
application as the target. Other possible values are
activity (for methods implemented on the current activity), or
element (for methods implemented on a specific UI element--in this case, an
elementId parameter is also required).
The most important information resides in
methods. In our case I'm just calling one method, though we could call multiple. For each method, we have to specify its name (
raiseToast--it must exactly match the name in my Android code), and the potentially multiple arguments we want to pass in to call the method with. For each of these arguments in turn we must specify both its type and value (the type is necessary because Java!).
Once we've got all this put together, we simply bundle it up and pass it as the parameter to
executeScript("mobile: backdoor"). Using the
ImmutableMap.of construction as I've shown above is the most concise way I've found so far to do this in Java.
That's all there is to it! This method in conjunction with the Espresso driver frees us from the shackles of the UI, and enables us to target specific methods inside our app. The possibilities here are endless, so please write to us and let us know what cool things you find to do with the feature. But remember, with great power comes great responsibility! It would probably be very easy to use this feature to crash your app during testing, too.
Here's a full example that shows the toast message being raised on our test app:
import com.google.common.collect.ImmutableMap;
import io.appium.java_client.AppiumDriver;
import java.io.IOException;
import java.net.URL;
import java.util.Arrays;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.remote.DesiredCapabilities;
public class Edition051_Android_Backdoor {
private String APP = "";
private AppiumDriver driver;
public void setUp() throws IOException {
DesiredCapabilities caps = new DesiredCapabilities();
caps.setCapability("platformName", "Android");
caps.setCapability("deviceName", "Android Emulator");
caps.setCapability("automationName", "Espresso");
caps.setCapability("app", APP);
driver = new AppiumDriver(new URL(""), caps);
}
public void tearDown() {
try {
driver.quit();
} catch (Exception ign) {}
}
public void testBackdoor() {
ImmutableMap<String, Object> scriptArgs = ImmutableMap.of(
"target", "application",
"methods", Arrays.asList(ImmutableMap.of(
"name", "raiseToast",
"args", Arrays.asList(ImmutableMap.of(
"value", "Hello from the test script!",
"type", "String"
))
))
);
driver.executeScript("mobile: backdoor", scriptArgs);
try { Thread.sleep(2000); } catch (Exception ign) {} // pause to allow visual verification
}
}
(As always, the full code sample is also up on GitHub) | https://appiumpro.com/editions/51 | CC-MAIN-2019-13 | refinedweb | 969 | 56.66 |
When.
Why write our own GUI tool?
Why did we choose to write our own tool? In the beginning, we decided our tool should include the following five requirements:
- Easy to use
- Little or no experience with layout managers required
- No experience with Swing required
- No need to output Swing code
- The UI could be separated from the business logic.
Why did Abacus Research require separation of the UI from the business logic? Because the company deals with ever-changing government regulations, such as taxes and payroll calculations, we need the ability to keep the UI intact while maintaining the rules and formulas in a separate wrapper module so, when the formulas change, the application JAR (UI) doesn't. Therefore, only the changing formulas must be tested, thus streamlining the release process.
To fulfill this requirement, the GUI builder was designed to compile the applications and events into an application JAR that hides all the GUI Swing code and requires a renderer class (
AbaRenderer) to execute the application at runtime. With this approach, no Java code is written or manufactured.
No Swing experience required
To ensure a true WYSIWYG development tool, we decided to implement an XY layout manager with anchor support similar to the paradigm found in Delphi and VB. With this in place, the Swing
JFrame becomes a canvas where the developer drops in Swing components to the
JFrame's XY layout. With the XY layout manager, our developers don't need to understand Swing's unfamiliar and complicated layout managers, thus allowing our application developers to concentrate on the application UI and its business logic.
In fact, the AbaGUIBuilder features most of the Swing visual components, from panels to tab pages, as well as menu and menu items, database-aware components using Java Database Connectivity, JFreeChart component support, and the ability to import third-party visual libraries. All these components can be dragged from the component palette to the empty frame to create your GUI application.
Sample project: Tracking developers' contact information
The best way to demonstrate AbaGUIBuilder's rapid application development is to create a sample multipage tab panel application with a menu bar. First, you must place a
JFrame object onto the empty form panel by selecting JFrame from the class palette and dragging it to the application canvas. Then, drag and drop all the visual components on to
JFrame. Keep in mind that a
JFrame must be placed first when you start a new project—the frame becomes your object canvas, as shown in Figure 2.
Next, add a tab pane and two tab pages. Choose JTabbedPane from the Containers section in the component palette and drag it to JFrame1. Once the
JTabbedPane is in place, activate
JTabbedPane's pop-up menu and choose Add JATabPage to add the two tab pages to the panel.
Next, set the titles for each tab page by selecting TabTitle from the property table. Figure 4 illustrates the finished pages.
At this point, you have empty tab pages, where you can drop any of the Swing components from the class palette. In our sample application, the Developers tab page contains two panels with a series of
JLabel and
JTextField objects. You should be able to create an application form similar to the one in Figure 5 within minutes. As a visual application developer, you can appreciate the ease with which you can create a sophisticated GUI application with the Abacus GUI builder.
It turns out that the IDE Rendering Mode is one of the most useful features in the GUI builder since it allows you to preview the application as you work. To activate IDE Rendering Mode, select the Run option from the toolbar or press the F9 key.
To save your work at this point, press Ctrl-S or choose the Save option, name the project devteam, and store it in the samples directory. After you've saved the project, look in the AbaGUIBuilder's samples directory and locate the file devteam.proj. The proj file is an XML file definition of all the classes, objects, objects' properties, and event code within your application. It is the flat representation of your visual project.
Adding event handlers
Adding event handlers to individual objects is a straightforward process. All you have to do is point to the object, pick the event you wish to trap from the event list, and write the appropriate Java code for the event. Once again, AbaGUIBuilder was designed to match the Delphi and VB paradigm. Our goal was to hide UI implementation details such as listeners from the application developer. For example, as shown in Figure 7, to activate a confirm dialog when the Exit button is pressed, select the
actionPerformed event for the Exit button and add the following Java code:
String message = "Are you sure you wish to exit";
int answer = JOptionPane.showConfirmDialog(JFrame1, message, "Development Team Contact", JOptionPane.YES_NO_OPTION);
if(answer == JOptionPane.YES_OPTION) { // User clicked YES. System.exit( 0 ); } else if (answer == JOptionPane.NO_OPTION) { // User clicked NO. }
Keep in mind that event handlers are not active during IDE Rendering Mode, but they are executed at runtime, so you must compile and render the application to check the results.
Visually adding a menu and its menu items is another powerful and time-saving feature. All you have to do is select JMenuBar from the component palette and place it anywhere on the frame. Since
JMenuBar always attaches itself to the top, XY coordinates are irrelevant. Next, right-click on top of the
JMenuBar, and a pop-up, as shown in Figure 8, will activate. Add
JMenu, the
JMenuItem(s), and the event handlers with this menu.
Then, add the same
actionPerformed code from the Exit button to the Exit menu item so a confirm dialog pops up when either the Exit button or Exit menu item is selected. And last, choose Save (With Compile) from the tool bar. Now you have produced your first AbaGUIBuilder application.
Where is the code?
We often hear the question: Where's the code? AbaGUIBuilder does not create Java code, but simply internally manufactures the Java code never to be seen or used. The GUI builder compiles the internal code into an application jar file that requires a separate wrapper program (renderer) to run. In fact, you can check the manufactured Java code the AbaGUIBuilder produces in the \bin\output directory. But remember, this code is not for external use; it's used only as a support tool, in case we have a problem in the manufacturing phase.
After compiling the project file (
.proj), the GUI builder outputs two files, your application jar file and a declaration file (
.decl). The decl file helps you later when you have to write the wrapper program. It contains all the object definitions and a general access function,
getReferences(), and loads all the visual elements from the UI form to private variables. Later, we will use the private variables in the decl file in the section "Separating the Business Logic from the UI."
While developing, we recommend that you run your application jar file with the
runproz script located in the sample directory. This script file sets the classpath, adds all the required JARs, and runs the application JAR with the sample Java wrapper. As illustrated in Figure 9, when running your devteam application, the command line looks like the following:
runproz \abaguibuilder-1.7\samples\devteam.jar
It is important to understand the mechanism, albeit simple, behind the
runproz script, so let's focus on the line
"%JAVA_HOME%\bin\java.exe" exec %1. The
exec is the sample
AbaRenderer wrapper provided as the default loading mechanism, and the
%1 parameter refers to the name of the AbaGUIBuilder application JAR. When the JAR path and name is passed to
exec.java, it loads and renders the specified application JAR. That's how your application JAR runs.
Writing your wrapper
An
AbaRenderer wrapper is a Java program that loads the application jar file using an
AbaRenderer object. The wrapper is a simple program requiring a few lines of code:
public class exec { public static AbaRenderer m_AbaRenderer ;
//(); } catch(Exception e) { e.printStackTrace(); } } }
When deploying an application, you can choose to deliver it using the
runproz and the sample wrapper, or you may manually write your own wrapper and classpath. We recommend writing your own wrapper when you want to add other features to your wrapper such as third-party library listeners, manual object initialization, such as adding data to a combo box, or, finally, to separate the UI from the business logic.
Separate the business logic from the UI
To separate the application UI from the business logic, you must write your own application wrapper. A simple example of separation is the initialization of the visual components in the application manually in the wrapper, outside the development tool and its project. If you recall, the GUI builder outputs a declaration file. This file has an access method that initializes a set of private variables that reference the visual objects on the form. The
getReferences() method allows developers to access and manipulate each visual object on the form.
The
exec2 program, shown below, initializes the combo box on the first tab, demonstrating a simple separation of application UI and business logic:
public class exec2 { public static AbaRenderer m_AbaRenderer ; // Declarations of variables for this user interface. ……. private JComboBox JStComboBox; // Assignments for this user interface
public void getReferences()
{ ….. //Loads the visual object JComboBox1 to private data JComboBox1 JStComboBox= (JComboBox)m_AbaRenderer.getObject("JComboBox1"); ….. }
//(); getReferences(); // Sample access to objects JStComboBox.addItem("FL"); JStComboBox.addItem("CA"); JStComboBox.addItem("WA"); JStComboBox.addItem("MD"); JStComboBox.addItem("PA"); } } catch(Exception e) { e.printStackTrace(); } } }
The example above demonstrates a simple and clear separation between UI and business logic. The distinct advantage is that you can easily change the initialization routines (the business logic) without changing the application UI. You will find this feature increasingly essential as your applications become larger and more sophisticated—it is a good practice to follow on all development projects.
Conclusion
The AbaGUIBuilder was designed from the start to satisfy the needs of our Delphi application developers, has saved many hours of development time, and eased our GUI development transition. It can do the same for many other Delphi and VB developers.
Learn more about this topic
- Download the source code that accompanies this article
- Download AbaGUIBuilder
- Screen shots and other files
- Abacus Research
- For more articles on GUI development, browse the User Interface Design section of JavaWorld's Topical Index | https://laptrinhx.com/494040641/ | CC-MAIN-2021-31 | refinedweb | 1,752 | 52.6 |
Hello,
I am self-teaching java to myself with a library book. The book I'm working through suggests that I create a package called BookPack, it defines the following class:
package BookPack; public class Book { private String title; private String author; private int pubDate; public Book(String t, String a, int d) { title = t; author = a; pubDate = d; } public void show() { System.out.println(title); System.out.println(author); System.out.println(pubDate); System.out.println(); } }
Then the book tells me to create a package called BookPackB, which defines the main() class. The code is listed below:
package BookPackB; // Use the Book Class from BookPack. public class HSUseBook { public static void main(String[] args) { BookPack.Book books[] = new BookPack.Book[5]; books[0] = new BookPack.Book("Java: A Beginner's Guide", "Schildt", 2005); books[1] = new BookPack.Book("Java: The Complete Reference", "Schildt", 2005); books[2] = new BookPack.Book("The Art of Java", "Schildt and Holmes", 2003); books[3] = new BookPack.Book("Red Storm Rising", "Clancy", 1986); books[4] = new BookPack.Book("On the Road", "Kerouac", 1955); for(int i=0; i < books.length; i++) books[i].show(); } }
I have compiled this code in a NetBeans IDE and I didn't believe the errors, so I ran it from the command prompt. I will tell you what I did at the command prompt because I think it has something to do with the location of the directories for each package/class. When creating "projects" in the NetBeans IDE, the software creates a package and directory named after the main() class, I have tried and tried to make these simple pieces of code work in the NetBeans IDE, but to no avail. I have gone the the command prompt to see if I can learn something there and I did learn some valuable things. By the way do professional programmers use a source code editor and command prompt or an IDE, or a bit of everything?
The compiled code should output the following I am going to list the first object books[0] output, to list the rest seems trivial:
I get none of the output I desire, not even the generation of a .class file after compiling. Let me tell you what I did.I get none of the output I desire, not even the generation of a .class file after compiling. Let me tell you what I did.Java: A Beginner's Guide Schildt 2005
From the command prompt, I used javac java\BookPackB\HSUseBook.java, HSUseBook. In case you would like to know, I have the package BookPack with book class at C:\java\BookPack\Book.java Should one package be in a higher directory than another?
After the "javac" command listed above, I get an error at this syntax in the code just below main(), BookPack.Book books[] = new BookPack.Book[5]; where BookPack is underlined. The error syntax for this statement is:
The same errors exist for the subsequent declarations for book[] objects in the main(), for example an error for the broken statement books[0] = new BookPack.Book("Java: A Beginner's Guide",....., ....), where BookPack is underlined, says that BookPack does not exist.
After running the "java" command in command prompt java java\BookPackB\HSUseBook.java I get the following errors:.Exception in thread "main" java.lang.NoClassDefFoundError: java\BookPackB/HSUseBook Caused by: java.lang.ClassNotFoundException: java\BookPackB/HSUseBook at java.net.URLClassLoader$1.run<URLClassLoader.java:202> at java.security.AccessController.doPrivileged<Native Method> at java.net.URLClassLoader.findClass<URLClassLoader.java:190> at java.lang.ClassLoader.loadClass<ClassLoader.java:306> at sun.miscLauncher$AppClassLoader.loadClass<ClassLoader.java:247> Could not find main class: java\BookPackB.HSUseBook. Program will exit.
Well I am sorry for the length, hopefully, this is just a simple fix. It should be it is in a book, but with my luck it will be the only faulty example.
Cheers | http://www.javaprogrammingforums.com/object-oriented-programming/10508-simple-package-problem-package-does-not-exist-error.html | CC-MAIN-2015-22 | refinedweb | 650 | 59.4 |
Arbitrage betting forum Seo Site With Forum, Vbulletin or another form system
we are looking betting website dabba market with already worked on betting site only prefer dont bid those who not work on betting site before we cant explain all things
.....
I have been in soccer betting for 10 years and keep refining predicting models that will yield the most outcome in soccer betting. My project involves excel spread sheet that is linked to soccer data website like [log ind for at se URL] or [log ind for at se URL] and should be able to pull a teams data from the website, calculate the teams average goal scored
.....
...BEFORE (and
looking for betfair api for betting website those who alredy work betting site prefer
.....
I need betting website like [log ind for at se URL] [log ind for at se URL] Plz reply if u have alrrady ecoerience and done such kind if site itherwise ignore Must have skill in php angular.js
I want you to make a website with original betting odds integrated api's for sports betting, casino, teen patti, etc
Need a landing page and forum, phpbb preferred
build a betting site for cricket teniss horse race teenpati with readymade script only readymade script we need project very early. dont bid dont work on betting before
looking betting website like 9wickets readymade script. dont bid those not worked on betting site we looking for betting website
looking betting website like 9wickets readymade script. dont bid those not worked on betting site we looking for betting website
we are looking for betfair API for realtime .. we looking for those alredy worked on betting site. dont bid dont work before on betfair
..
I need my website re-configured.I already have a design, I just need you to build my personal website.
I need a new website. I need you to design and build a website similar to turnoverbnb and properly. Excluding the market place where cleaners can create a profile and sea...of the calendar what properties to clean each day This will require an IOS app and Android. Please go on the recommended websites to view the features required before betting.
We are looking to creating a crypto arbitraging platform that has a MLM element. We would like for it to be in the form of an mobile app.
..
Hi I’m doing amazon arbitrage automation. I need help with product research to run my stores I have all the tools on the software to run the stores. I need virtual assistants to list up to 10,000+ Hot products per month
more personally after credentialing
..
Hello, I will give you the project details during our chat. But it is basically developing a simple database where I can predict 4 results for each match between two football clubs and then these four predictions will be placed each in various tickets (where each ticket will have about 15 matches). So the four predicted results for a single match between two football clubs will be spread out bet...
I am looking to have a Livescore and Odds comparison website built to compliment by other website, which is a sports betting preview site. A comparable website would be [log ind for at se URL] (just the section of Football, Live Scores, Insights and Free Bets). I have the API feed already from [log ind for at se URL] which includes a wide range of
Looking for person who can write sports betting articles in English. Words per article: 800-1000 Details about the product: SEO articles about sports betting content Tone: Formal/Professional Outline & Structure: Information about odds, predictions, tips, games..
Hi, I need someone to build me a script in ....
Need to build a sports betting software The software should be similar to bet365 Or [log ind for at se URL] with live field preview. Only interested with ready made software and demo to be provided immediately. Source code must be available with software. Key features: Html5 version Android / IOS Version NOTICE PLEASE CONTACT ONLY IF YOU HAVE DEMO AVAILABLE.
import this web page/table into excel same format [log ind for at se URL]
...Keyword Research • Meta Tags Optimization • Competitor Analysis • Article Submissions • Blog Commenting • Directory Submission. • Link Building • Press Release Submission. • Forum Posting. • Search Engine Submissions • Press Release Submissions • Social Bookmarking Profiles Creation & Bookmarking • Classified Ads Submission.
we looking for skyexchange betting website readymade
I need to download a daily odds page into EXCEL that has the following info. For NFL, NCAAF, NBA, NCAABB, MLB and NHL. This web page has what I want. Everyday this pa...NCAAF, NBA, NCAABB, MLB and NHL. This web page has what I want. Everyday this page changes and it has th data I want in a table form. [log ind for at se URL]
we are looking for readymade betting website php script skyexchange bet365 lotus book those who have readymade script only prefer
I need a browser extension developer for a very urgent provably fair project. The online betting verification project is for crypto gambling websites and it will be developed as a browser extension, preferably chrome. ONLY bid if you have done one or two projects with provably fair or browser extension. You can review the requirements and screens
can who make me a latency arbitrage EA
Pages required 1. Matched Betting explained and is it legal? Explained 2. The referral process 3. A form allowing people to put in name, number, email, address and to upload a JPEG picture of their ID, selfie and proof of address. 4. The form details will then need to be accessible by myself. | https://www.dk.freelancer.com/work/arbitrage-betting-forum/ | CC-MAIN-2019-35 | refinedweb | 945 | 70.84 |
Character code constants.
These libraries define symbolic names for some character codes.
This is not an official Goggle package, and is not supported by Google.". E
xamples:
$plus,
$exclamation
The
ascii.dart library defines a symbolic name for each ASCII character.
For some chraceters, it has more than one name. For example the common
$tab
and the official
$ht for the horisontal tab.
The
html_entity.dart library defines a constant for each HTML 4.01 character
entity, using the standard entity abbreviation, incluing).
Add this to your package's pubspec.yaml file:
dependencies: charcode: ^1.0.0+1
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:charcode/charcode. | https://pub.dartlang.org/packages/charcode/versions/1.0.0+1 | CC-MAIN-2018-51 | refinedweb | 138 | 62.24 |
C Programming/C Reference/time.h/time_t
The time_t datatype is a data type in the ISO C library defined for storing system time values. Such values are returned from the standard
time() library function. This type is a typedef defined in the standard <time.h> header. ISO C defines time_t as an arithmetic type, but does not specify any particular type, range, resolution, or encoding for it. Also unspecified are the meanings of arithmetic operations applied to time values.
Implementation[edit]
Unix and POSIX-compliant systems implement time_t as an integer or real-floating type [1] (typically a 32- or 64-bit integer) which represents the number of seconds since the start of the Unix epoch: midnight UTC of January 1, 1970 (not counting leap seconds). Some systems correctly handle negative time values, while others do not. Systems using a signed 32-bit
time_t type are susceptible to the Year 2038 problem.[2]
In addition to the time() function, ISO C also specifies other functions and types for converting time_t system time values into calendar times and vice versa.
Example[edit]
The following C code retrieves the current time, formats it as a string, and writes it to the standard output.
#include <stdio.h> #include <time.h> /* * The result should look something like * Fri 2008-08-22 15:21:59 WAST */ int main(void) {); puts(buf); return 0; }
Conversion to civil time[edit]
Using GNU date, a given
time_t value can be converted into its equivalent calendar date:
$ date -ud@1234567890 Fri Feb 13 23:31:30 UTC 2009
Similarly, using BSD date:
$ date -ur 1234567890 Fri Feb 13 23:31:30 UTC 2009
References[edit]
- ↑ The Open Group Base Specifications Issue 7 sys/types.h. Retrieved on 2009-02-13.
- ↑ The Year 2038 problem, Roger M. Wilcox. Retrieved on 2011-03-11. | http://en.wikibooks.org/wiki/C_Programming/C_Reference/time.h/time_t | CC-MAIN-2014-52 | refinedweb | 304 | 54.32 |
You must specify Endpoints auth following the directions provided on
this page. Note that you cannot set a user login requirement following the
instructions provided under
Security and Authentication
to configure the
web.xml file,
because this will result in a deployment failure.
For information on authentication from the perspective of keeping the backend secure, see the blog post Verifying Back-End Calls from Android Apps.
Requirements
The instructions on this page assume that you have a Google Developers Console project for your application. If you don't have one yet, here's how to create one:
- In the Developers Console, go to the Projects page.
- Select a project, or click Create Project to create a new Developers Console project.
- In the dialog, name your project. Make a note of your generated project ID.
- Click Create to create a new project.
Adding authorization to an API backend
If you wish to restrict all or part of your API to only authorized apps, you must:
- Specify the client IDs (
clientIds) of apps authorized to make requests to your API backend.
- Add a User parameter to all exposed methods to be protected by authorization.
- Generate the client library again for any Android clients
- Redeploy your backend API.
- If you have an Android client, update the regenerated jar file to your Android project.
- If you have an iOS client, regenerate the Objective-C library.
Specifying authorized clients in the API backend
You must specify which clients are allowed to access the API backend by means of a whitelist of client IDs. A client ID is generated by the Google Developers Console from a client secret, such as the SHA1 fingerprint of a key used to secure an Android app, or from the Bundle ID/Apple Store ID pair for an iOS app, as described in Creating OAuth 2.0 Client IDs. At runtime, a client app is granted the authorization token it needs to send requests to the API backend if its client secret matches one contained in a client ID within the API backend's client ID whitelist.
To specify which clients can be authenticated by your API backend:
- Get a complete list of the client IDs of the clients that you want to grant access to. This list is a list of OAuth 2.0 client IDs obtained for your project following the instructions provided below under Creating OAuth 2.0 Client IDs.
- In the
clientIdsattribute of the @Api or @ApiMethod annotations for your API, supply the list of client IDs that you want to authenticate:
- For an iOS app, supply its iOS client ID in the
clientIdswhitelist.
- For a javascript app, supply its web client ID in the
clientIdswhitelist.
- For an Android app, you must supply both its Android client ID and a web client ID in
clientIdswhitelist. (You must add that same web client ID to the
audienceslist as shown next.)
- If you have an Android client, you must also supply the
audiencesattribute for the @Api annotation, set to the web client ID mentioned in the preceding step. The same web client ID must be in both
clientIdsand
audiences.
The following sample snippet shows how to supply client IDs for a javascript client, for an iOS client, and an Android client:
@Api( name = "tictactoe", version = "v1", scopes = {Constants.EMAIL_SCOPE}, clientIds = {Constants.WEB_CLIENT_ID, Constants.ANDROID_CLIENT_ID, Constants.IOS_CLIENT_ID}, audiences = {Constants.ANDROID_AUDIENCE} ) public class Constants { public static final String WEB_CLIENT_ID = "1-web-apps.apps.googleusercontent.com"; public static final String ANDROID_CLIENT_ID = "2-android-apps.googleusercontent.com"; public static final String IOS_CLIENT_ID = "3-ios-apps.googleusercontent.com"; public static final String ANDROID_AUDIENCE = WEB_CLIENT_ID; public static final String EMAIL_SCOPE = ""; }
In the snippet, note the use of a class to map constants to the actual client ID values so you need only make changes in one place whenever you change the client IDs. Notice also that
audiences is set to the web client ID value.
Adding a user parameter to methods for auth
When you declare a parameter of type
User in your API method, the API backend framework automatically authenticates the user and enforces the authorized
clientIds whitelist, ultimately by supplying the valid
User or not. If the request from the app has a valid auth token or is in the list of authorized
clientIDs, the framework supplies a valid
User to the parameter. If the incoming request does not have a valid auth token or if the client is not on the
clientIDs whitelist, the framework sets the
User parameter to null. Your own code must handle the null case and the non-null case, as shown below.
To add a
User param to your method:
Import the App Engine User API in your API class:
import com.google.appengine.api.users.User;
Add a parameter of type
Userto each class method you wish to require authorization for, as shown in the following snippet:
/** * Provides the ability to insert a new Score entity. */ @ApiMethod(name = "scores.insert") public Score insert(Score score, User user) throws OAuthRequestException, IOException { ... }
If an incoming client request has no authorization token or an invalid one,
user is
null. In your code, you need to check whether
user is
null and do ONE of the following, depending on the condition:
- If the user is non-null, perform the authorized action.
- If the user is null, throw an
OAuthRequestException.
- Alternatively, if the user is null, perform some action for an unauthorized client access if some sort of unauthorized access is desired.
Providing authorization from clients
If your API backend requires authorization, you need to also support this authorization in your Android, iOS, or JavaScript client. Instructions for this vary slightly by client type, so more information is provided in these client-specific pages:
- Making Authenticated Calls from an Android Client
- Using Endpoints in an iOS, iOS app, or javascript app. For details, see Specifying Authorized Clients in the API Backend.
Creating an OAuth 2.0 web client ID
To create a web client ID:
- Make sure you are logged in to the Google account you used to create the Google Developers Console project.
- Open the Credentials page for your project, and select Web application as the application type.
Fill out the form that is displayed:
- Specify a name for the web client.
If you are testing the backend locally, specify the textbox labeled Authorized JavaScript origins. If you are deploying your backend API to production App Engine, specify the App Engine URL of your backend API in the textbox labeled Authorized JavaScript origins; for example,, replacing
your_project_idwith your actual App Engine project ID.
Click Create.
Note the client ID that is generated. This is the client ID you need to use in your backend and in your client application.
Creating an OAuth 2.0 Android client ID
In order to create the OAuth 2.0 Android client ID, you'll need to have a certificate key fingerprint. If you use Eclipse with the Android Developer Tools (ADT) plugin, Eclipse with the Android Developer Tools plugin, Eclipse.
- Open the Credentials page for your project and select Android as the application type.
- In the textbox labeled Signing-certificate fingerprint, enter the fingerprint you obtained above.
- In the textbox labeled Package name enter the Android application package name, as specified in your
AndroidManifest.xmlfile.
- Click Create.
Note the client ID that is generated as you'll need to use it later in your client code. You can always revisit your project later in the console to locate this client ID.
Creating an OAuth 2.0 iOS client ID
- Open the Credentials page for your project and select iOS as the application type.
- Specify a name for the client.
- In the textbox labeled Bundle ID, specify your application’s bundle identifier as listed in your application's
.plistfile (e.g.
com.example.myapp).
- In the textbox labeled App Store ID, optionally enter the App Store ID if the app was published in the Apple iTunes® App Store.
- Click Create.
Note the client ID that is generated as you'll need to use it later in your client code. You can always revisit your project later in the console to locate this client ID. | https://cloud.google.com/appengine/docs/java/endpoints/auth | CC-MAIN-2015-48 | refinedweb | 1,359 | 62.27 |
Hello :) this project the ultrasonic sensor read and write the distance between the sensor and the object in front of it in the LCD display, It’s really simple.
My goal is to help you understand how this sensor works and then you can use this example in your own projects.
Step 1: Parts Required
- 1x HC-SR04 Ultrasonic Sensor
- 1x LCD Display (I made this project with JHD162A)
- Some wires
Step 2: Connect the Components
Step 3: Upload the Code
Upload the sketch to your Arduino and watch the measurement.
For more projects about the HC-SR04 ultrasonic ranging sensor visit my GitHub repository
Good luck.
4 People Made This Project!
Vijayraswanth made it!
sandeshm7 made it!
JessieS19 made it!
MC_GamingPro made it!
41 Discussions
3 years ago on Introduction
How Much Distance it can measure????
Reply 10 months ago
Upto 5 meters...but it can generally measures upto 3meters with great precision!
Reply 23 days ago
Do you have the code?
Mi gmail is dany.arnez@gmail.com
Question 5 weeks ago
hi could you send me the code for this please ?
Question 5 weeks ago on Step 2
where is the code
Question 7 weeks ago on Step 3
instead of wire, can we use the pins?
1 year ago
I implemented this code ass you show above with 16x2 lcd and ultrasonic sensor with arduino but in that code distance dosent show 0-cm if i put my object very next to sensor so how it is possible to show distance 0 cm and also how to print distance in float value?
Reply 7 weeks ago
Did you have to adjust code in order to use the 16x2 display ? If so can you send it to me please. chrisg@alertalarminc.com
Question 2 months ago
My LCD display isn't showing anything how do I fix this?
2 months ago
Please send the code my mail is awaisali.29@iiee.edu.pk
3 months ago
Sir im a beginner please send me the code. Here's my email jackaver1337@gmail.com Thankyou!
3 months ago
I do not work the LCD backlight
Question 3 months ago
He need a resistor?
4 months ago
sir please send me code? my email is
ddviolet0516@gmail.com
9 months ago
o/p is coming randomly
12 months ago
Thanks for code
1 year ago on Step 3
sir please send me code? my email is,
dhanukadulanjana184@gmail.com
1 year ago
/*
HC-SR04 Ultrasonic Sensor with LCD dispaly
HC-SR04 Ultrasonic Sensor
VCC to Arduino 5V
GND to Arduino GND
Echo to Arduino pin 12
Trig to Arduino pin 13
LCD Display (I used JHD162A)
VSS to Arduino GND
VCC to Arduino 5V
VEE to Arduino GND
RS to Arduino pin 11
R/W to Arduino pin 10
E to Arduino pin 9
DB4 to Arduino pin 2
DB5 to Arduino pin 3
DB6 to Arduino pin 4
DB7 to Arduino pin 5
LED+ to Arduino 5V
LED- to Arduino GND
Modified by Ahmed Djebali (June 1, 2015).
*/
#include <LiquidCrystal.h> //Load Liquid Crystal Library
LiquidCrystal LCD(11,10,9,2,3,4,5); //Create Liquid Crystal Object called LCD
#define trigPin 13 //Sensor Echo pin connected to Arduino pin 13
#define echoPin 12 //Sensor Trip pin connected to Arduino pin 12
//Simple program just for testing the HC-SR04 Ultrasonic Sensor with LCD dispaly
//URL:
void setup()
{() {
long duration, distance;
digitalWrite(trigPin, LOW);
delayMicroseconds(2);
digitalWrite(trigPin, HIGH);
delayMicroseconds(10);
digitalWrite(trigPin, LOW);
duration = pulseIn(echoPin, HIGH);
distance = (duration/2) / 29.1;
LCD.setCursor(0,1); //Set cursor to first column of second row
LCD.print(" "); //Print blanks to clear the row
LCD.setCursor(0,1); //Set Cursor again to first column of second row
LCD.print(distance); //Print measured distance
LCD.print(" cm"); //Print your units.
delay(250); //pause to let things settle
}
Reply 1 year ago
sir..........does this code work for the above connections.......
1 year ago
Thanks really helpful nice | https://www.instructables.com/id/Arduino-LCD-Project-for-Measuring-Distance/ | CC-MAIN-2019-26 | refinedweb | 666 | 72.05 |
Azure Mobile Apps iOS SDK 3.2.0 release
13. oktober 2016
We are excited to bring you the latest release of our Mobile Apps iOS client SDK 3.2.0. We've added Refresh Token feature, updated with iOS 10 support, and made performance improvement.
13. oktober 2016
We are excited to bring you the latest release of our Mobile Apps iOS client SDK 3.2.0. We've added Refresh Token feature, updated with iOS 10 support, and made performance improvement.
12. oktober 2016
We just rolled out .NET Mobile Client SDK 3.0.1 and Mobile SQLiteStore 3.0.1! We are out of beta, fixed the Android SQLiteStore dependency issue, and unified our .NET client SDK versions!
19. september 2016
Azure Mobile Apps baked in refresh tokens to its authentication feature, and it is now so simple to keep your app users logged in.
25. august 2016
Now you can get detailed error feedback per push request from Platform Notification Systems when you push with Notification Hubs.
14. juni 2016
Work with apns-priority with Notification Hubs to send prioritized APNS pushes to iOS devices.
13. juni 2016
Azure Notification Hubs is announcing the Batch Direct Send feature, allowing notification sends directly to device tokens/channel uris.
12. maj 2016
Notification Hubs recently enabled namespace-level tiers so customers can allocate resources tailored to each namespace’s expected traffic and usage patterns.
14. april 2016
Notification Hubs' per message telemetry feature now supports scheduled send and the allowed device expiry (time to live) is extended to infinity.
10. november 2015
Here is an update on what our team has been working on for the past few months to deliver you a smoother developing experience and a set of handy features including: Per Message Telemetry, Multi-tenancy, Push Variables, Direct Send and more.
26. maj 2015
Update Notification Hubs/Service Bus Namespace Type to "Notification Hub" and "Messaging" from "Mixed." | https://azure.microsoft.com/da-dk/blog/author/yuaxu/ | CC-MAIN-2020-10 | refinedweb | 323 | 55.13 |
Making Hearts Fly With Unity VFX Graph
Learn how to use Unity’s VFX Graph to make stunning visual effects, like a flock of flying butterflies! The VFX Graph uses simple, node-based visual logic.
Version
- C# 7.3, Unity 2020.3, Unity
Have you ever wanted to make your apps look more spectacular without spending a lot of time and effort on graphics? Here, you’ll learn how to use Unity’s VFX Graph to make stunning visual effects. You’ll use simple, node-based visual logic to control complex simulations, like a flock of flying butterflies. You’ll then see how to adapt the techniques learned to a wide range of other effects you might want to create.
In this tutorial, you’ll learn how to:
- Create and navigate a VFX Graph
- Design background visual effects
- Spawn on-demand visual effects
- Control properties of visual effects as they run
- Author and use Point Caches
For this tutorial, you’ll need Unity 2020.3 (LTS).
Getting Started
Download the starter project by clicking the Download Materials button at the top or bottom of the tutorial. To begin this project, you’ll first need to prepare the needed template and folders.
Starting With the HDRP Template
You must make several important choices when setting up a new project, so take it slowly. First, create a new project using Unity 2020.3.
The next decision is to pick a template. Select High Definition RP. This template has many of the resources and project settings you’ll use in this tutorial. The project’s name and location are up to you.
After everything loads, you’ll find a shiny, gold ball in the SampleScene. There’s some great stuff in that sample, which you’ll get to shortly. But it’s best to organize the project a bit first.
Setting up Project Folders
To manage the files and assets that you’re about to make, create some room in the project Hierarchy. First, create Assets/RW to store everything you need for the project. Next, create subfolders for the following types of assets:
- PointCaches: Holds the Point Cache you’ll create.
- Scenes: Holds the scene you’ll create.
- Scripts: Holds the script you’ll create.
- Sprites: Contains the sample art.
- VFX: Holds the VFX Graphs you’ll create.
Now that you’ve set up your project, it’s time to add the sample artwork.
Importing Sprites
Open the starter project from the materials you downloaded earlier. It contains two sprites, butterheart and butterheartfull, that you need to create these visual effects.
Drag these sprites into the RW/Sprites folder. Once they’re imported, select each sprite and set Texture Type to Sprite (2D and UI). Next, set Read/Write Enable to True and click Apply.
Now save the project and you’re ready to start hunting for some butterflies. :]
Observing a Box of Butterflies
Unity now provides templates for many common scenarios and some excellent example visual effects. These assets are a great starting place. To find some butterflies, you don’t have to go far, because there are some real beauties right there in the SampleScene.
Finding the Butterflies
With the SampleScene open, find and select VFX ▸ ButterFlies in the Hierarchy. Switch to Scene view and press F to focus on your selection. Do you see them? They’re pretty small, so try zooming in and panning the camera to get a closer look.
Selecting the ButterFlies highlights them and provides access to the Play Controls dialog. Experiment with the controls. You can pause the butterflies in mid-flap and make them zip by increasing the playback Rate.
The Visual Effect itself also contains controls. In the inspector for the ButterFlies, under Properties, you can change the
Count,
Radius and even
Wing Animation Speed. Try it! Just remember to Stop and Play the Visual Effect after changing its properties to see them take effect.
Although these sample butterflies are awesome, you don’t want Unity just giving you cool stuff, right? You want to create that cool stuff yourself. So it’s time to find out how to make these butterflies. After you have a handle on that, you’ll try creating your own effects.
Examining Butterfly Art
The first thing you need to create this effect is art for the wings. Navigate to Assets/Art/Textures/VFX/ButterFlies_8x1 and select the texture to examine it in the inspector. All the butterflies you see in the scene come from this texture. And although all butterflies have two wings, you need only one of each style to create the effect.
Opening the ButterFlies VFX Graph
The second thing you need to create a visual effect is a VFX Graph Asset. In the inspector for the ButterFlies GameObject, look for the General section of the Visual Effect component. Click the Edit button to open the ButterFlies VFX Graph Asset. Maximize the window to get a better view. Some handy controls include MouseScroll to zoom and holding Alt (or Option) (or Middle Mouse Button drag) to pan the graph.
There’s a lot of complexity behind the simple beauty of a butterfly, so it’s worth a stroll through each section to understand how it all works. Before doing that, it’s helpful to understand the basic approach to this kind of animation.
Understanding a Flapping Animation
Like most complex things made using Unity, these butterflies were created by connecting and reusing smaller, simpler things. Complex visual animations are no different. For example, if you wanted to create a character clapping, you could start by making one hand that makes a clapping motion. Then, you could duplicate the arm and mirror it. And voila! A clapping character.
For this butterfly animation, you use the same technique. You start by animating a single wing that flaps. Then, you duplicate it and mirror it. What do you get? A flying butterfly!
That’s why you need only one wing for the art.
Animating a single wing is pretty straightforward. If you’ve animated a door, you’re already familiar with the approach. Pick an edge to be the axis and rotate the door around the axis. Set some limits so the door doesn’t swing too far in one direction or the other, then let it ping-pong between the limits, as needed.
So if you can swing a door in Unity, then you’re halfway to making a butterfly… well, fly!
Understanding the VFX Graph Using the Butterfly Example
Before you dive into the ButterFlies graph, copy it. Find the graph at Assets/Art/Particles/ButterFlies, then duplicate it (Control-D on Windows, Command-D on Mac), name it ButterfliesNotes and place it in the RW/VFX folder.
This new graph asset doesn’t live in any scene, but you can still edit it. Select the graph asset and select Open in the inspector, or just double-click it.
While you explore the graph, feel free to add some notes directly to the graph as you learn what each block and node does. Add notes by right-clicking and selecting Create Sticky Note.
You can also group nodes into logical sections: Click-drag to select a group, then right-click one of the nodes, select Group Selection and name the grouping. This is a good way to organize your VFX Graph and make it more maintainable.
Now, you’ll take a look at each of the settings you need to create the butterfly animation. After you have a good grasp of how the butterflies work, you can create your own visual effect.
Creating Particles With the Spawn Block
Start by examining the Spawn Block at the top of the graph. Zoom in to get a better look.
Each time you play, or trigger, the visual effect, it spawns a burst of particles. The number of particles it spawns depends on
Count, which you can find on the Blackboard. This variable is an exposed float, which is why you were able to change it before in the SampleScene by adjusting the ButterFlies‘ properties.
This is the basic setup for controlling visual effects in real-time. The VFX Graph needs to use variables, which are exposed via the Blackboard and manipulated via the GameObjects’ Visual Effect component.
Setting an Initial Motion With the Initialize Particle Block
Immediately after spawning, each particle initializes with the settings provided in the Initialize Particle Block. In this example, there’s a limit on the system Capacity of 31. Now, no matter how many times the system triggers or how high you set the Count, no more than 31 butterflies will appear on the screen at a time. The particles are also constrained with a bounding cube of size 4.
Each butterfly spawns in a sphere with a
Radius controlled by another exposed float variable in the Blackboard. So again, it’s important to note that it’s possible to create values in Properties that are outside the limitations of the graph. When tweaking values, this is the cause of many head-scratching results.
Next, each butterfly receives an initial direction and a random velocity based on that direction.
Finally, a Tex Index is set to correspond with one of the possible styles provided in the art texture, so each butterfly is a different style.
In most systems, you also set the lifetime of the particle in the Initialize Particle Block. Without that, as is the case here, the particle will live forever or until the system stops.
Now, you’ve had a look at the initial settings for the butterflies. Next, you’ll see how to change them over time.
Changing the Particles With the Update Particle Block
A visual affect that doesn’t change looks unnaturally static. To prevent this, the particle changes a little bit in every frame. You control those changes in the Update Particle Block. Take a look at this block to see how these butterflies to have a natural appearance:
- To keep the butterflies flying near the source, Force is applied, nudging the particle toward the inside of the system.
- Turbulence adds variety.
- The Velocity updates with logic that draws on its most recent velocity, so each butterfly can follow a unique flight path.
- The Scale is adjusted to accommodate the original art dimensions, giving the butterflies a broader wingspan.
- The Pivot.X creates a pivot point to use. Just like you’d assign which edge of the door should have the hinge.
Now the Butterfly is moving, it’s time to get the wings flapping.
Moving the Wings With the Output Particle Block
Although there are two Output Particle Blocks in this graph, one is just a mirror of the other. Think “left wing” and “right wing”, each supplied by either common or offset inputs.
Have a look a the settings in each Output Particle Block:
- The UV Mode is set to Flipbook to take advantage of the variety provided in the wing art texture. Flip Book Size is set to match that texture, which is supplied as the Base Color Map.
- Orientation is the same for both wings.
- Position controls the subtle vertical lift associated with the flapping motion. It’s also the same for both wings.
- Finally, Angle.Y and Angle.X are where things get mirrored between the left and right wings and to actually flap the wings.
Flapping Visual Logic With Wing Animation Speed
There’s one more variable in the Blackboard, called
Wing Animation Speed. Each butterfly starts with a default flapping speed between 20 and 30. The
Wing Animation Speed is then multiplied by the random starting value to get the actual speed.
The graph uses the age of the particle, which constantly increases, and controls the rate of increase with the modified
Wing Animation Speed. The ever-increasing number is then converted to its
Sine value, which ping-pongs between -1 and 1. These numbers control the bottom and top of the flap.
That sine value is used in two ways. First, it drives the small up-and-down bucking motion of the butterfly via Add Position. Second, it drives the large variation in the angle of the wing, the flap itself, via Set Angle.Y. This flap input is inversed for the second wing to create the mirrored effect.
Now that you understand a working example of a butterfly effect, it’s time to create something new.
Creating the Scene
Create a new scene using the Basic Indoors (HDRP) scene template. Call it MakingHeartsFly and save it in RW/Scenes.
In the Hierarchy, find the default Plane GameObject and delete it.
Adjusting the Camera Settings
Select the Main Camera and set its Position to (X:0, Y:1, Z:-6.5) and its Rotation to (X:-30, Y:0, Z:0). In the Background Type drop-down, select Color. Then, choose a soothing Background Color like (R:200, G:50, B:100, A:255, Intensity:0).
Sky and Fog Volume Settings
Now, you’re going to make a romantic setting for your flying hearts.
Select Sky and Fog Volume to change its settings. In the Visual Environment, set the Type to None and the Ambient Mode to Static. For the Exposure, set the Mode to Fixed and set the Fixed Exposure to -1. Make sure you enable both of those checkboxes.
Next, remove the Fog Override. Finally, add a Vignette Override and set its Mode to Procedural and its Intensity to 0.4. Be sure to enable each.
Scene Light Settings
For your next step, you’ll add some mood lighting.
Select the default Spot Light and rename it HeartLight. Next, set its Position to (X:-0.75, Y:3, Z:-5). Then, change the Light Type to Point and the Mode to Realtime.
Change the Shape Radius to 1. Finally, for Intensity, select Ev 100 in the drop-down and set the Intensity to 11. Yes, you can laugh if you want.
VFX GameObject
Now that the mood is set, it’s time to settle the butterflies!
In the VFX folder, find the ButterfliesNotes graph and duplicate it. Name the new graph asset BoxOfHearts.
In the Hierarchy, add a Visual Effects ▸ Visual Effect GameObject and call it BoxOfHearts. Next, set its Position to (X:0, Y:2, Z:-5). Then, set the Asset Template to the BoxOfHearts VFX Graph asset found in the VFX folder.
Try toggling the HeartLight on and off to see the impact of the light on the particles, and make sure it’s positioned near the top left of the screen. To see it better, try toggling the Volumetrics for the light as well.
Now that you have a new Visual Effect GameObject connected to a new VFX Graph Asset, it’s time for those butterflies to metamorphose into something new: a box of hearts!
Replacing the Wing Art
First, you’ll replace the butterflies with something more your style: butterhearts, pretty winged hearts.
In the Hierarchy, select the BoxOfHearts. Then, in the inspector, click Edit for the BoxOfHearts graph asset. To better see changes with each step, work from the bottom to the top of the graph.
Setting the Output Block
In the Output Block for both the left and right wings, change the Uv Mode to Default and Use Alpha Clipping to False. For Base Color Map, replace the default butterfly art by assigning the butterheart from the RW/Sprites folder.
Update Block
In the Update Block, remove the Turbulence and Force block and related node inputs. Next, add a Tile/Warp Positions Block and set the Size to (X:3, Y:3, Z:3).
Setting the Initialize Block
In the Initialize Block, you have several changes to make. Start by removing the Tex Index Block and related node inputs. Then, do the following:
- Boost the Capacity to 75
- Add a Set Color Block. Instead of choosing a single, boring color for all the particles, you’ll use a Gradient and some simple logic to add a lot of variety.
- Open the Blackboard, add a new Gradient variable and call it Wing Color. Then, set Exposed to True so you can edit the gradient in the scene.
- Drag the
Wing Colorvariable from the Blackboard to somewhere close to the Set Color Block, but don’t connect them yet. To add variety, you first need to apply some randomness.
- Add a Sample Gradient Node and connect the
Wing Colorvariable to the input and the output to the Set Color Block.
- Add a Random Number Node with (Min:0, Max:1) and Constant set to false. Then, connect that to the Time input. Now, each particle will select a random color from the gradient you provide in the GameObject.
Setting the Output Block Inspector
To make the colors pop, there’s one more setting to change. First, arrange your windows so you can see both the inspector panel and the graph panel. Next, select one of the Output Blocks and look at the inspector, which reveals a hidden trove of extra settings. Then, find the Color Mode drop-down and select Base Color and Emissive, which makes full use of the HDR colors. Repeat this step with the Output Block of the other wing.
Save the graph and close the window to return to the Game view.
Setting the Visual Effect Properties
Select the BoxOfHearts in the Hierarchy and set new Properties in the inspector. Check each property to enable it. Next, increase the Count to 75 and set the
Radius to 2.
Click the Gradient to open the Gradient Editor. Setting a gradient is an art, so take some time to experiment and tune the results to your liking. These example values are set to create a broad mix of darker and lighter shades, with a few highlights of ultra-bright color.
Set the Mode to Blend and create four pins for color on the bottom of the Gradient bar at Location 15, 70, 90, 95. Use these color values for each pin:
- 15%: (R:191, G:0, B:29, Intensity:1).
- 70%: (R:191, G:29, B:48, Intensity:3).
- 90%: (R:191, G:29, B:48, Intensity:3).
- 95%: (R:191, G:10, B:29, Intensity:10).
Toggle BoxOfHearts on and off to refresh the Visual Effect with your new property values.
Return to the Game view to bask in the glow of your new BoxOfHearts.
Toggle HeartLight on and off to see the impact the light has on the color of the nearby butterhearts.
Experiment with the Exposure settings in Sky and Fog Volume to see the impact across all the butterhearts.
When you’re done experimenting, return the values to their defaults, and save the scene and the project. Your background visual effect is complete. But wouldn’t it be cool to have the effect spawn when the user performs an action? Now, it’s time to create an on-demand visual effect.
Controlling the Launch
First, create another VFX Graph asset, starting with a copy of the BoxOfHearts graph and calling it HeartOfHearts. Save it to the RW/VFX folder.
In the Hierarchy, add a Visual Effects ▸ Visual Effect GameObject and call it HeartOfHearts. Set its Position to (X:0, Y:2, Z:-5). Then, set the Asset Template to the HeartOfHearts VFX Graph asset found in the VFX folder.
Unlike the BoxOfHearts graph, you want to control when and where the HeartOfHearts graph spawns butterhearts. To control the visual effect, you need two things. First, you need a script that receives and processes input. Second, you need control variables in the VFX Graph asset that receive and apply changes to the graph.
Creating a Control Script
Inside RW/Scripts, create a new C# script named HeartManager. Next, attach that script to the HeartOfHearts GameObject.
Open the HeartManager script in your code editor, then add this declaration to the script, above the class declaration:
using UnityEngine.VFX; //for VFX graph
This lets you access
VisualEffect features.
Script References
Next, just inside the class declaration, add these variables:
//1 private VisualEffect visualEffect; //2 private VFXEventAttribute eventAttribute; //3 private bool heartsAreFlying; //4 private int flyingBoolID;
Here’s what this code does. It:
- Creates a private reference to
VisualEffect, which you can use throughout the script.
- Declares the attribute,
VFXEventAttribute, to use in the script.
- Creates a private variable to change the state of the visual effect.
- Declares a private variable,
flyingBoolID, that allows the script to access the exposed variable in the graph’s Blackboard.
Initializing the Script
Next, you’ll use
Awake to set up the
VisualEffect component:
void Awake() { //1 visualEffect = GetComponent<VisualEffect>(); eventAttribute = visualEffect.CreateVFXEventAttribute(); //2 flyingBoolID = Shader.PropertyToID("Flying"); //3 heartsAreFlying = false; //4 visualEffect.SetBool(flyingBoolID, false); //5 visualEffect.Stop(); }
In this code, you:
- Cache the reference to the attached
VisualEffectand the required
VFXEventAttributeto access it.
- Configure
flyingBoolIDto the name you’ll assign, in a bit, to the matching variable in the Blackboard.
- Initialize the variable
heartsAreFlyingto false so the butterhearts are resting to start
- Initialize the matching variable
flyingBoolIDin the Blackboard to the same.
- Stop the visual effect from playing at the start of the scene.
Adding Action Functions
With initialization complete, create a function to trigger the visual effect:
private void SpawnHearts() { visualEffect.Play(); }
Next, create a function to toggle the intensity of the visual effect:
private void ToggleHeartFlight() { //1 heartsAreFlying = !heartsAreFlying; //2 visualEffect.SetBool(flyingBoolID, heartsAreFlying); }
In this code, you:
- Reverse the private Boolean used to track the state of the visual effect.
- Apply the updated state to the matching variable in the graph’s Blackboard.
These are all the functions you need. Now, you just need a way to call the functions.
Tracking Input
For this tutorial, an easy way to call the functions is to track key inputs with
Update:
void Update() { //1 if (Input.GetKeyDown(KeyCode.Space)) { SpawnHearts(); } //2 if (Input.GetKeyDown(KeyCode.F)) { ToggleHeartFlight(); } }
In this code, you listen for the user to:
- Press Space, then call
SpawnHearts.
- Press F, then call
ToggleHeartFlight.
You’ve now finished the script. Save it and return to the editor.
Now that you can track inputs when playing the scene, it’s time to use that input to control the VFX Graph.
Adding Controls to the VFX Graph
First, select HeartOfHearts in the Hierarchy and click Edit to open the HeartOfHearts graph in the inspector. Make the window full screen so you have plenty of room to work.
In the Initialize Particle Block, increase the Capacity to 10,000. So many butterhearts shouldn’t fly forever, so add a Set Lifetime Random Block with (A:3, B:10).
Then, for a graceful exit, add a Set Alpha over Life Block to the Output Particle Block for both wings and set the animation curve to your liking.
To control whether the butterheart is flying or resting, add a new Boolean variable to the Blackboard and call it
Flying. This variable modifies both the
Wing Animation Speed and the particle velocity, so drag it onto the graph twice, close to each node it will modify.
To modify
Wing Animation Speed,
Flying supplies a Branch Node to choose whether to use the full or modified version of the
Wing Animation Speed.
To modify the particle velocity in the Update Block,
Flying supplies a Branch Node to choose whether to use the full or reduced velocity calculation. Because velocity is used elsewhere in the graph, avoid setting the velocity to zero to avoid division errors.
[TODO: Eric – when I ran through the tutorial using the Unity template in latest 2020, the sample graph at the above point was slightly different. Is the screenshot above enough for the reader to see the changes required, now that they’ve been introduced to all the elements. Or do we need to explain further?]
Save the graph and return to the Game view.
Select HeartOfHearts in the Hierarchy and set up its Properties in the inspector. Check each property to enable it. Set the Count to 500. Copy the gradient you used for BoxOfHearts.
Save the scene, click Play and test your new controls.
Next, you’ll play with the shape your butterhearts use when they spawn.
Customizing Spawn Shapes
Spawning from a sphere works, but it would be more interesting to use a Point Cache to control where particles spawn. In this case, you’ll make them appear in the shape of a heart.
To create a Point Cache, navigate to Window ▸ Visual Effects ▸ Utilities ▸ Point Cache Bake Tool. Set the Bake Mode to Texture, then select butterheartfull from the RW/Sprites folder. Set the Threshold to 0.9. Then, save the new file to RW/PointCaches and name it butterheartfull.
Open the HeartOfHearts graph and add a Point Cache Node. Then, assign the butterheartfull asset from the PointCaches folder.
In the Initialize Particle Block, replace the Add Position (Sphere) Block with a Set Position from Map Block. Then, connect the Point Cache Node to the Attribute Map input and set the Value Scale to (X:2, Y:2, Z:1).
Because the graph no longer uses
Radius, remove it from the Blackboard.
In the Update Particle Block, also remove the Tile/Warp Positions Block.
Save the graph, then return to the Game view.
Make sure you’ve enabled the HeartLight and the BoxOfHearts.
Now, save the scene and the project.
Click Play and experiment with the input keys to trigger new flocks of butterhearts and toggle their active state. You’re now making hearts fly with Unity VFX Graph!
Where to Go From Here?
Great job completing this tutorial! You can download the completed project files by clicking the Download Materials button at the top or bottom of the tutorial.
Feel free to tweak the project to your needs and see what flies. :]
For example, try creating your own wing art. You can find many art tutorials online, including this one on Making Butterflies with Inkscape.
Or you can create your own Point Caches with your own sprites or even meshes, like a tree or a butterfly bush.
Explore advanced HDRP materials with Unity’s tutorial on HDRP Butterfly Shaders.
The Unity Visual Effect Graph will continue to evolve, so follow the latest updates in the Unity Visual Effect Graph official documentation.
Now that you’ve nailed Visual Effect Graph, why not take it a step further and learn about Unity’s Shader Graph?
I hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below! | https://www.raywenderlich.com/20964535-making-hearts-fly-with-unity-vfx-graph | CC-MAIN-2021-49 | refinedweb | 4,416 | 65.83 |
SETUID(2) BSD Programmer's Manual SETUID(2)
setuid, seteuid, setgid, setegid - set user and group ID
#include <sys/types.h> #include <unistd.h> int setuid(uid_t uid); int seteuid(uid_t euid); int setgid(gid_t gid); int setegid(gid_t egid);
The setuid() function sets the real and effective user IDs and the saved set-user-ID of the current process to the specified value. The setuid() function is permitted if the effective user ID is that of the superuser,user, or if the specified group ID is the same as the effective group ID. If not, but the specified group ID is the same as the real group ID, set- gid() will set the effective group ID to the real group ID..
Upon success, these functions return 0; otherwise -1 is returned. If the user is not the superuser, or the UID specified is not the real, effective, or saved ID, these functions return -1.
getgid(2), getgroups(2), getuid(2), issetugid(2), setgid(2), setgroups(2), setregid(2), setreuid(2) A setuid() function appeared in Version 2 AT&T UNIX.
The setuid() and setgid() functions are compliant with the IEEE Std 1003.1-1990 ("POSIX"). | https://www.mirbsd.org/htman/i386/man2/seteuid.htm | CC-MAIN-2015-32 | refinedweb | 196 | 52.09 |
Hello all - brand new to using pyrosetta so apologies if I'm missing something obvious.
I'm currently in the process of trying to translate the code from the point mutation scan tutorial to my system ()
The system I'm working with has a total of five chains, with the first group being chains ABC and the second group being chains DE. For some reason, during the unbinding process these groups aren't being maintained. The code at current is formatted as:
def unbind(pose, partners):
STEP_SIZE = 100
JUMP = 2
setup_foldtree(pose,partners,Vector1([1]))
trans_mover = rigid.RigidBodyTransMover(pose,JUMP)
trans_mover.step_size(STEP_SIZE)
trans_mover.apply(pose)
unbind(relaxPose, "ABC_DE")
Depending on how I order the groups, either only chain D will translate from the complex (leaving a 100 angstrom disulfide to chain E..) or AB and DE will translate, leaving chain C hanging out in space. I've also tried running the script after editing the pdb to have chains D and E be one single chain, and I'm still running into the same issue. I've also tinkered around with Vector1 and had the same result as well. I've noticed that setup_foldtree adds a jumping edge between chains B and E, so I'm assuming during the jump it treats ABDE as a singular group. Has anyone run into a similar issue, or have any suggestions on how to fix this particular one? | https://rosettacommons.org/node/11258 | CC-MAIN-2022-40 | refinedweb | 236 | 58.21 |
Knockout is a new JavaScript library that simplifies data-binding on the client. It can be used to keep a UI and a data-model in sync in a similar way to how Silverlight bindings work. It’s similar in principle to data-linking in jQuery but takes the concept even further. It was written by Steven Sanderson author of one of the best (in my opinion) books on ASP.NET MVC so it’s got a good pedigree behind it. It’s designed to complement the functionality of other libraries such as jQuery and when used together, some really powerful behaviours can be achieved with little effort.
To run the code, you’ll need Visual Studio 2008 or above with MVC installed. The relevant JavaScript files are included in the sample project but for reference, I’m using Knockout v1.1.1 and jQuery templates downloadable from here along with jQuery 1.4.4
This article follows on from a previous one I wrote about using list boxes in ASP.NET MVC. For reference, the UI is shown below:
To summarise the behaviour, the user selects one or more items in the available list box and clicks the >> button to move them to the requested list box. Having done so, the details of the chosen items are shown under the list box.
Currently the UI is driven entirely by the web server, when a button is pressed the form is submitted and the server takes the necessary actions to update the view data model then re-displays the updated UI. This works well enough but it would be a much nicer user experience for this behaviour to be client driven; there’s no real need to involve the server in redrawing the interface every time. In this article, we’ll enhance the current UI so that all the display work is done on the client with no involvement from the server beyond the initial set-up and final submit. We’ll do it using progressive enhancement so that the page will still work for non-JavaScript enabled browsers.
To see what we’re aiming for, let’s define some behaviours we want the page to exhibit, some of which are repeated from the current server driven behaviour.
On first inspection, these requirements might not seem too tricky, but consider all the elements that have to be kept in sync; done manually the code could get tangled very quickly. Also there is not enough information in the page to generate the product details list, so we'd need to resort to AJAX or manually manage JavaScript objects to fulfil #3. Knockout simplifies these kinds of tasks considerably.
Let’s start with behaviour #1. The first thing we need to do is to define a view model for the page, this object models the state and behaviour of the UI and is what Knockout uses to perform its bindings. In practice, it’s a bit like a code-behind in a web forms page but more dynamic. To represent the state of the list boxes, we need 2 pieces of information for each: the items in the list box and the items that the user has selected. We want Knockout to be able to track these values so we create them as observable arrays. An observable array is a Knockout object that wraps a standard array and raises notifications when the number of objects in it changes. Our view model then looks like this:
var viewModel = {
availableProducts: ko.observableArray
(<%=new JavaScriptSerializer().Serialize(Model.AvailableProducts) %> ),
availableSelected: ko.observableArray([]),
requestedProducts: ko.observableArray
(<%=new JavaScriptSerializer().Serialize(Model.RequestedProducts) %>),
requestedSelected: ko.observableArray([])
}
In this code, I'm using the JavaScriptSerializer class (available in the System.Web.Script.Serialization namespace) to inject JSON data into the array. Alternatively, we could have fetched the data using an AJAX request to an action method that returns a JsonResult.
JavaScriptSerializer
System.Web.Script.Serialization
JsonResult
Knockout works by applying data-bind attributes to HTML elements to describe the desired behaviour, while this shouldn’t cause any browser problems, it does raise a few potential issues:
Some of these problems can be ameliorated through the use of the jQuery attr command to apply the attributes. For the sake of simplicity, I’ll be applying the attributes directly for the rest of this article but the sample download includes a ‘pure html’ version which passes XHTML 1.1 validation (admittedly in a slightly dirty way) and does all the work in external JavaScript files.
To hook up the data for the list-boxes, we'll use 3 bindings:
To apply these attributes to the select lists, alter the calls to Html.ListBoxFor().
Html.ListBoxFor()
<%=Html.ListBoxFor(m => m.AvailableSelected,
new MultiSelectList(Model.AvailableProducts,
"Id", "Name", Model.AvailableSelected),
new Dictionary<string, object>{{"data-bind",
"options:availableProducts, selectedOptions:availableSelected,
optionsText:'Name'"}})%>
Notice that options value isn't supplied; an important concept to understand is that the bindings track the underlying data model not just the values contained in the UI elements themselves. When we select an option in the list box, the selectedOptions array in the view model is populated with all the data for the corresponding product item, not just the id and name which is all we could traditionally get from the value / text format of the HTML select element. This is a really powerful feature as we’ll see.
selectedOptions
To set Knockout in motion, add the command ko.applyBindings(viewModel); to the page ready handler. Reload the page and… well nothing should have changed visually, but if you were to examine the list boxes in FireBug or a similar tool, you'd be able to see the changes to the list-box data showing Knockout working.
ko.applyBindings(viewModel);
Right let’s add some behaviour. We are going to apply a new binding to our movement buttons, the click binding. Alter the 2 submits to the following:
<input type="submit" name="add" value=">>" data-
<input type="submit" name="remove" value="<<" data-
The click button specifies a function in the view model to run when the item is clicked. Add the following function after the requestedSelected property in the view model:
requestedSelected
addRequested: function(){
var requested = this.requestedProducts;
$.each(this.availableSelected(), function(n, item) {
requested.push(item);
});
this.availableProducts.removeAll(this.availableSelected());
this.availableSelected.splice(0,this.availableSelected().length);
},
jQuery’s each() function is used to process each item in the selected array and move it into the requested array, these items are then removed from the available array and the selected array cleared. Knockout prevents the default action from occurring as standard so the form will not be submitted when pressing these buttons. The code for removeRequested is the same apart from the arrays used.
each()
removeRequested
Notice in the code above, I sometimes use requestedSelected without parenthesis and at other times requestedSelected() with them. observableArray is a wrapper for an array not the array itself but the underlying array can be accessed through the use of the () operator. This can lead to confusion so it is important to understand which object you are accessing. The observableArray object defines some functions that have the same names as the equivalent functions for a normal array object, for example push() and splice(). Calling requested.push() and requested().push() would both add an item to the underlying array; however Knockout will only register the change using the former syntax. You need to be careful to ensure any changes to an observable are done through the observable object and not the object it is wrapping. Not all functions are duplicated, so for example, length must be accessed through requestedSelected().length. This inconsistency can be a bit confusing at first but as a rule, any write actions should be made without parenthesis, any read actions should be made with them.
requestedSelected()
observableArray
()
push()
splice()
requested.push()
requested().push()
observable
requestedSelected().length
Behaviour #1 is now complete: items can be moved between list boxes. To achieve this, we didn’t need to access the UI components directly but just manipulate the values of the underlying data model. This is neat but hardly revolutionary; we could have achieved the same thing more easily by applying some DOM manipulation. However, our view model is now almost complete and from here implementing the rest of the behaviours is relatively easy, we’ll soon see the approach pay off.
Next up behaviour #2: the transfer buttons should only be enabled if items in the list are selected. Add the following to the “add” button: enable:availableSelected().length > 0 and the following to the remove button: enable:requestedSelected().length > 0. Job done, next! Seriously, how easy was that? The enable binding describes the condition under which the element should be enabled. In this case, the relevant observable array should have items in it. Knockout takes care of the rest for us. Next up that tricky sounding behaviour #3, keeping the keeping the list of product details in sync with the user’s requested items. For this we’re going to need another binding.
enable:availableSelected().length > 0
enable:requestedSelected().length > 0
If you look at how the product list gets rendered in the C# code, it basically consists of rendering a table row template for each product in the model’s requested products collection. The template binding in Knockout lets us do the same thing with our client-side data. It uses the jQuery Template engine under the hood (though you can swap in another if you want) so the format should be familiar if you’ve used that before. First we define a named template:
<script id="requestedTemplate" type="text/html">
<tr>
<td>${Name}</td>
<td>${Description}</td>
<td>${Price}</td>
</tr>
</script>
That script tag looks a bit weird (note the type) but I think it’s pretty obvious what is going on. Next add the template binding to the tbody tag:
tbody
<tbody data-
This just says apply the named template ‘requestedTemplate’ for each item in the requestedProducts array in the view model. And that’s all that is necessary to get the products list working, it will now automatically keep itself in sync with whatever items the user has selected. We also need to update the total of the selected products; to do this, we’ll create a dependant observable, an observable value that itself is dependant on other observables to determine its value:
requestedTemplate
requestedProducts
viewModel.requestedTotal = ko.dependentObservable(function(){
var total = 0;
$.each(this.requestedProducts(), function(n, item){
total = total + item.Price;
});
return total;
}, viewModel);
Then add a value binding to the span in the table footer:
span
table
<span id="total" data-
Now as the user moves items about the total will be automatically updated.
Last up is behaviour #4 only enable the send button if the user’s selections conform to our rules. We already know enough to do this one so I won’t include the code here. Creating a dependant observable to track the status or pasting the conditions directly into the enable attribute will get the job done.
The last thing we need to do is send the correct information to the server when the form is submitted. A hidden field of comma separated ids is used to track which products the user has requested. Using a submit binding, we can update this field before the form is submitted:
<form action='<%=Url.Action("Index") %>' method="post" data-
The view model code is shown below:
onSubmit: function(){
var ids = [];
$.each(this.requestedProducts(), function(n, item) {
ids.push(item.Id);
});
$("#SavedRequested").val(ids.join());
return true;
}
Returning true from the function allows the form to be submitted normally.
true
Knockout is another example of the trend towards a more declarative programming style: it allows us to state what we want to happen without needing to specify how it is achieved. Combining it with jQuery, we can achieve some really nice client-side behaviours in our pages with very little work. With the freedom, ASP.NET MVC gives us over our markup and how data can be received we have a server-side platform that enables us take advantage of these developments more easily and produce some really nice user. | https://www.codeproject.com/Articles/138654/Client-side-Model-Binding-using-ASP-NET-MVC-jQuery?fid=1601917&df=90&mpp=10&sort=Position&tid=3708953 | CC-MAIN-2017-34 | refinedweb | 2,038 | 53.1 |
Coin-or CBC native interface for Python
Project description
cbcpy
Native Python interface for Coin-or Branch and Cut Solver (Cbc).
Description
This project provide the build mechanism to automatically generate the wrapper code between Cbc C++ code and Python using SWIG.
This project was develop as part of the CBC Coin-or Sprint Aug 2019.
Binaries for the following platform are pre-compiled and available on pypi.
- linux x86_64 / python 2.7
- linux x86_64 / python 3.5
- linux x86_64 / python 3.6
- linux x86_64 / python 3.7
- win x86 / python 2.7
- win x86 / python 3.5
- win x86 / python 3.6
- win x86 / python 3.7
- win x86_64 / python 3.5
- win x86_64 / python 3.6
- win x86_64 / python 3.7
Linux x86 is not supported.
Installation
Pre-compiled python packages are deployed to cbcpy Pypi repositories.
To install
cbcpy you should make use of
pip command line:
pip install cbcpy
The packages include pre-compiled version of Cbc.
For Windows: You must install Visual C++ Redistributable for VS2015
Usage
Here a minimalistic python script making use of
cbcpy.
You may download
p0033.mps from here.
import cbcpy as cbc solver1 = cbc.OsiClpSolverInterface() solver1.readMps("p0033.mps") model = cbc.CbcModel(solver1) model.branchAndBound() numberColumns = model.solver().getNumCols() p_solution = model.solver().getColSolution() solution = cbc.doubleArray_frompointer(p_solution) for i in range(numberColumns): value = solution[i] print("%s has value %s" % (i, value))
Documentation
Original documentation from Cbc project is available in python using the
help() function.
# python Python 2.7.16 (default, Jul 13 2019, 16:01:51) [GCC 8.3.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import cbcpy >>> help(cbcpy) Help on module cbcpy:
Troubleshooting
The specified module could not be found.
>>> import cbcpy Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python37-32\lib\site-packages\cbcpy.py", line 15, in <module> import _cbcpy ImportError: DLL load failed: The specified module could not be found.
This error might occur on Windows platform when the file
msvcp140.dll cannot
be found. You must install Visual C++ Redistributable for VS2015.
For 32-bit download "vc_redist.x86.exe" file and for 64-bit download "vc_redist.x64.exe" file.
Support
To get community help for cbcpy, you may send email to the Cbc mailing list.
You may also get paid support by contacting Patrik Dufresne Service Logiciel.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/cbcpy/ | CC-MAIN-2022-33 | refinedweb | 425 | 62.54 |
Library for reading/writing farbfeld images.
Project description
farbfeld.py
This is a small Python module for reading and writing pixel data from farbfeld images ().
Installation
The module is available on PyPI:
You can install it with
pip:
pip install farbfeld
Usage
To read an image, open the desired file and read the pixels
from it using
farbfeld.read:
import farbfeld with open('image.ff', 'rb') as f: data = farbfeld.read(f)
Note that since farbfeld stores pixel components as 16-bit unsigned integers, you may have to normalize them or scale them to a different range (e.g. 8-bit). For example, using NumPy and Matplotlib:
import farbfeld import numpy as np import matplotlib.pyplot as plt with open('image.ff', 'rb') as f: data = farbfeld.read(f) data_8bit = np.array(data).astype(np.uint8) plt.imshow(data_8bit, interpolation='nearest') plt.show()
To write a farbfeld image, open the desired file and write the pixels
into it using
farbfeld.write:
import farbfeld # An example 2x2 image data = [ [[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]], ] with open('image.ff', 'wb') as f: farbfeld.write(f, data)
Source code
The source code is available on GitHub:
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/farbfeld/ | CC-MAIN-2021-10 | refinedweb | 227 | 68.16 |
20841/python-short-integers
Firstly, if you're doing any sort of manipulation of this huge dataset, you'll probably want to use Numpy, which has support for a wide variety of numeric types, and efficient operations on arrays of them.
And the answer to your question,
from array import array
a = array("h") # h = signed short, H = unsigned short
As long as the value stays in that array, it will be a short integer.
PS: C++'s short not 2 bytes width. It's implementation dependent.
It can be done using a library ...READ MORE
Node-RED supplies an exec node as part ...READ MORE
You connect as a device -> import ibmiotf.device. ...READ MORE
This is a way I can think ...READ MORE
You can use a library like CoAPython ...READ MORE
Hey, it turns out that the IBM ...READ MORE
suppose you have a string with a ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/20841/python-short-integers?show=20846 | CC-MAIN-2020-40 | refinedweb | 188 | 77.23 |
How would (could?) one get a post-processing script in UCD to capture (and return as an output property) the entire contents of a line that consists of exactly 64 hex chars?
I'm referring, of course, to the Docker container Id as thrown by the Docker Run UCD plugin step. It looks like this:
Executing docker command... command: docker run -d my-ucd-agent /tmp/editandstart.sh cad645c96e57a629e28131c0a8f63403a9ab7a81f0ad273b598d7d95b3a99e5c Command executed with a successful exit code
But I can't figure the regexp to get the script to scan for (and return) that ID.
And just a small addendum for clarity: That "Blockquote" thingie didn't do what I expected so that space between "a99e5c" and "Command" was supposed to be a line break.
The output line (for this container this time) begins with "cad645" and ends with"a99e5c".
Which flavor regexp is used in the scripting here? Ideally I would like to catch the exact pattern of "64 hex digits" and then grab the 12 first ones of those, as that's what is used in the shortform id. Like, in the container's hostname.
70 people are following this question.
Trouble connecting with Docker Login 3 Answers
How to setup urbancode deploy to pull container tags from ibm bluemix container registry? 2 Answers
Post Processing Script is passing null value to next step in UrbanCode build 2 Answers
import artifacts having one in use or open (office files) 1 Answer
Docker compose plugin failed during intallation 3 Answers | https://developer.ibm.com/answers/questions/275405/post-processing-script-to-read-an-entire-line-from.html?smartspace=hybrid-cloud-core | CC-MAIN-2019-26 | refinedweb | 250 | 62.58 |
import a macro-enabled Excel worksheet into Stata 12?
I don't know the answer for sure, but a first step to find it could be
to create a trivial 'light' file with a couple of numbers and a simple
macro, save it and try to import. If it fails on the simplest file -
then it is likely not gonna import your real 'heavy' file. If it does,
then we know that the feature exists, and we could look into how the
'heavy' file differs from the 'light' one.
Best, Sergiy
On Fri, Jun 28, 2013 at 4:25 PM, John Bensin <johnalexbensin@gmail.com> wrote:
>:
> *
> *
> *
*
* For searches and help try:
*
*
* | http://www.stata.com/statalist/archive/2013-06/msg01345.html | CC-MAIN-2016-36 | refinedweb | 111 | 87.35 |
-
case classes and auxiliary constructors
Mon, 2011-07-11, 10:30
hi _
I just discovered that the compiler does not create "auxiliary" apply methods
for auxiliary constructors of case classes, as in:
scala> case class Mod(f: Int => Int) {
| def this(i: Int) = this(Int => i)
| }
defined class Mod
scala> Mod(20)
:19: error: type mismatch;
found : Int(20)
required: (Int) => Int
Mod(20)
^
why is that? is there a reason, why this feature is not implemented or has no
one had the time yet to do it?
the auxiliary apply method syntax is:
def apply(argsOfAuxiliaryConstructor) = new Foo(argsOfAuxiliaryConstructor)
can/will this be implemented?
best regards
christian krause aka wookietreiber
Re: case classes and auxiliary constructors
It works fine in the REPL _if you enter all the code in one block_. Otherwise it treats the two as independent: creation of an object, and then creation of an independent case class that happens to have the same name. In 2.9 you can use :paste to enter a block; in earlier versions, you can enclose both in a wrapper object.
--Rex
On Mon, Jul 11, 2011 at 8:52 AM, john sullivan <sullymandias [at] gmail [dot] com> wrote:
--Rex
On Mon, Jul 11, 2011 at 8:52 AM, john sullivan <sullymandias [at] gmail [dot] com> wrote:
This doesn't work well in the REPL. I'm not exactly sure why. Other people on the list might be able to tell you.
I have no answers for any of your questions, but I will note that you can provide the auxiliary factory method yourself quite easily:
object Mod { def apply(i: Int) = new Mod(i) // <----- DIY 1-liner ----- def main(args: Array[String]) = { val sampleMod = Mod(7) println("sampleMod = " + sampleMod) }}case class Mod(f: Int => Int) { def this(i: Int) = this(Int => i) }
Running this program gives me the following output:
sampleMod = Mod(<function1>)
This doesn't work well in the REPL. I'm not exactly sure why. Other people on the list might be able to tell you.
Thinking it over, one possible reason why the auxiliary apply method is not provided is that if it were, people might also expect a corresponding unapply method, which is probably quite a bit more complicated than the one-liner above. (But that's just speculation.)
Best, John
On Mon, Jul 11, 2011 at 5:28 AM, wookietreiber <kizkizzbangbang [at] googlemail [dot] com> wrote:
--
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy. | http://www.scala-lang.org/old/node/10189 | CC-MAIN-2014-15 | refinedweb | 418 | 62.31 |
Hi, i'm second year in college and i'm diving into the basics of c++ , basicly i have to read 2 numbers and divide them with each other by repeated substractions using a function, the function is to divide those numbers by repeatedly substracting one from another, and return the result, and the rest, + give an error in case one of the numbers is 0 :D
i tried to play with that a bit but i think i pretty much failed :(
here is what i got for now:
#include "stdafx.h" #include <conio.h> #include <iostream> using namespace std; int div (int x, int y) { int c=0,d=0; while (x!=0,y!=0,x>=y) { c=x-y; x=c; d=d+1; } return d; } int main() { int a,b,c=0,d=0,z=0,n; cout<<"A= "; cin>>a; cout<<" B= "; cin>>b; z = div (a,b); if (a==0 || b==0) { cout<<" ERROR "<<endl; } if (a!=b,a<=b) { cout<<" "<<endl; cout<<"A / B = "<<z<<endl; n=a%b; cout<<"Rest= "<<n<<endl; } getch(); }
i get 4 errors but it would be pointless to post them since i know its probably something obvious >.< | https://www.daniweb.com/programming/software-development/threads/250278/simple-problem-function-trouble | CC-MAIN-2017-39 | refinedweb | 199 | 64.58 |
The 0.1 through 1.0.0 releases focused on bringing in functions from yum and python-fedora. This porting guide tells how to port from those APIs to their kitchen replacements.
Previously, yum had several pieces of code to initialize i18n. From the toplevel of yum/i18n.py:
try:. ''' Setup the yum translation domain and make _() and P_() translation wrappers available. using ugettext to make sure translated strings are in Unicode. ''' import gettext t = gettext.translation('yum', fallback=True) _ = t.ugettext P_ = t.ungettext except: ''' Something went wrong so we make a dummy _() wrapper there is just returning the same text ''' _ = dummy_wrapper P_ = dummyP_wrapper
With kitchen, this can be changed to this:
from kitchen.i18n import easy_gettext_setup, DummyTranslations try: _, P_ = easy_gettext_setup('yum') except: translations = DummyTranslations() _ = translations.ugettext P_ = translations.ungettext
Note
In Overcoming frustration: Correctly using unicode in python2, it is mentioned that for some things (like exception messages), using the byte str oriented functions is more appropriate. If this is desired, the setup portion is only a second call to kitchen.i18n.easy_gettext_setup():
b_, bP_ = easy_gettext_setup('yum', use_unicode=False)
The second place where i18n is setup is in yum.YumBase._getConfig() in yum/__init_.py if gaftonmode is in effect:
if startupconf.gaftonmode: global _ _ = yum.i18n.dummy_wrapper
This can be changed to:
if startupconf.gaftonmode: global _ _ = DummyTranslations().ugettext() | http://pythonhosted.org/kitchen/porting-guide-0.3.html | CC-MAIN-2013-20 | refinedweb | 230 | 53.27 |
"Eaglestone, Robert [NGC:B918:EXCH]" wrote: > > Hello all, > > In general, it seems to me that all good programmers > indent their code. However, mandating it as part of > the form of a language sounds ... strange. This can be a failure of the language or or your ears. > However, I don't like the BEGIN...END bracketing done > by Pascaline languages; neither do I love the {...} > of C and its descendants. When you write Pascal, Modula, Oberon or Delphi (what I did for years), you are used to it and will probably like it. When you write C or C++, the braces will be not that bad after some time. > Has the creator of Python looked into fully bracketed > syntax? That might not be the actual term, so here's > an example of what I'm thinking about: I think he is programing since over 20 years but probably never had time to think about braces. That's for sure one of the reasons to omit them for Python :-) > if ( foo > bar ) > do_something(); > do_something_else(); > done(); > endif Whowwwww! What a nice concept. Maybe you should invent a new language? You might think of a very new name, something like "Visual Basic"? > This bracketing gets the best [ or worst ] of both > worlds, with a bit of syntactic sugar added for free. How do you define "best" in a person independant manner? > The lead bracket is the keyword itself, and the end > bracket is tailored for that keyword alone. Only > one 'line' is 'wasted' by the end bracket (Python > doesn't like wasting lines). And indentation is no > longer a language issue. The lead bracket is the keyword itself, and the end bracked is obvious by indentation and no longer necessary. If you want to waste one line, use #endif. This feature was added to Python long time ago and isn't very well known. Python is so flexible that you even can choose anything behind the "#". > I only know of one language which uses this syntax, > and it's not in common use. However, lots of us may > have seen this kind of syntax in college (hey! is it > part of ML?). Bill Gates would kiss you for this :-) > I guess what I'd like is some clear reasoning about > bracketing and indenting that warrants such a feature > of a language. If you like Python, you will like indentation. Indentation belongs to Python like Python belongs to Guido. I-did-not-intend-to-hurt-you-by-indent- -but-it's-easier-to-change-you- -than-Python's-intended-indentation -ly y'rs - | https://mail.python.org/pipermail/python-list/2000-February/049665.html | CC-MAIN-2019-35 | refinedweb | 428 | 74.9 |
exportar olm a pst
olm in pst konvertieren
olm to pst convertor
converter olm para pst
covert pst to olm
convert olm naar pst
convert olm file in pst
olm datei in pst konvertieren
export ost database to pst
utility for export pst from edb file
Convert OLM file with Export OLM to PST converter tool and convert MAC OLM file ...
/ Size :7,266K / Shareware
Export OLM to PST file thorugh our advanced MAC OLM to PST exporter utility. Thr...
/ Size :7,252K / Shareware
The Free Mac to PC file transfer will convert the OLM file into PST totally free...
/ Size :15,527K / Shareware
Highly used Email Client is Windows Outlook that uses PST file format. Therefore many opt for Export Mac Mail to PST. OLM to PST Converter Pro is professional software for Instant Corrupted OLM Repair and Export Mac Mail to PST. OLM to PST Converter Free Demo is the first step to take for any user who needs to complete demand of OLM Conversion to PST Outlook. .
Outlook 2011 OLM to PST converter software is available for converting and moving entire MAC Outlook 2011 database from MAC machine to Windows machine. MAC Outlook OLM to PST converter software helps you to move. import and export MAC Outlook 2011 OLM to PST file format. Download now free conversion tool for OLM file to PST file format (folder). Software is able to export unlimited heavy OLM files from OLM file to PST files. .
outlook 2011 olm to pst , olm to pst converter , outlook 2011 olm file to pst , 2011 olm to pst , outlook olm file to pst , outlook olm 2011 to pst importer/exporter tool to import and export PST files from Apple MAC machine. .
import olm to pst import outlook olm to pst , olm to pst converter , convert mac olm to pst , import olm file , outlook olm to pst , free mac outlook olm to pst , export olm to pst
Export Outlook OLM to PST file format without any hassle and at less time. The tool OLM conversion will make it possible.. You can believe in OLM Converter to for getting conversion task safely. .
export outlook olm to pst , convert outlook olm file to pst , convert olm file to pst converter , convert mac olm file to pst like Windows 8.
outlook olm to pst converter app , convert outlook 2011 olm to pst , olm to pst converter software , mac olm database to pst , mac mailbox to pst mailbox
If. contacts etc. from Mac OLM file into PST file. You can run our latest OLM to PST converter software on all Windows version like Windows 8.
latest olm to pst converter , convert olm file to pst , convert mac outlook to pst , outlook 2011 to pst , free olm to pst. .
how to export mac outlook 2011 to pst , export outlook olm to pst , export outlook 2011 to pst , download olm to pst exporter , export mac outlook 2011 olm file to pst , olm file to pst exporter
The migration of OLM to PST Outlook without any alteration done to the Meta Data Information of Mac OLM file while executing the conversion can only be reachable with an effective export from Outlook 2011 to PST FREE Utility which has adequate supremacy in it to make one comprehend the query such as “How to convert OLM to EML” file. The OLM converter software after the implementation of its restructured edition has now become supportive of Windows 8 and Outlook 2013 both which makes it supportive of ALL the versions of Windows with Outlook. The Export from Outlook 2011 to PST free tool after the completion of OLM to PST conversion provides users the option to SPLIT Large Sized PST files in two or more. However the OLM converter software at all times does not fail to keep absolute Data-Accuracy within the exported files and their respective attachments. .
export from outlook 2011 to pst free , olm to pst file , convert olm to pst , olm to pst conversion , olm converter , migration of olm to pst outlook , how to convert olm to eml , olm to msg outlook , free export mac olm file to pst
Comprehend the technique of exporting . olm to Windows . pst file format in a constructive manner by means of OLM to PST Windows Converter Software. The tool on the other hand has been reorganized with some of the most exclusive attributes of all obtainable in the market which has made the productivity to export Mac OLM to Windows PST more precise. However the tool earlier to initiate the conversion process proffers users a self devoted Scan aspect as in to scan the entire Mac OLM file with the intent to remove the odds of Data-Corruption. The Mac Outlook export to PST once the OLM files gets exported to the desired file provide users the option to SPLIT Large Sized PST files in two or more. Furthermore to this it also has the efficiency to export Outlook 2011 to PST in ANSI as well as in UNICODE PST both systematically. It even previews the exported files and their respective attachments in Horizontal and Vertical View both. .
exporting olm to windows pst , export mac olm to windows pst , olm to pst windows , changing olm to pst for outlook 2007 , export oulook 2011 to pst , mac outlook export. .
how to convert olm to pst , demo of olm to pst , olm to pst conversion procedure , safe olm pst conversion | http://freedownloadsapps.com/s/export-olm-to-pst/1/ | CC-MAIN-2018-51 | refinedweb | 902 | 62.41 |
Exercise 2 is a good lesson in functions. You will be forced to use a function for every requirement of the program. In addition, you will have to be inventive with how you terminate the program early and still calculate a running average. See my source below:
2. Write a program that asks the user to enter up to 10 golf scores, which are to be stored
in an array. You should provide a means for the user to terminate input prior to entering
10 scores. The program should display all the scores on one line and report the average
score. Handle input, display, and the average calculation with three separate array-
processing functions.
#include <iostream> const int maxSize = 10; // Prototypes int input(int scores[], int arSize); double avg(int scores[], int arSize); void display(int scores[], int arSize); int main() { int scores[maxSize]; int arSize = input(scores, maxSize); display(scores, arSize); std::cout << "Average score: " << avg(scores, arSize); return 0; } int input(int scores[], int arSize) { double score; int count = 0; std::cout << "Enter up to " << maxSize << " scores (press -1 to terminate)\n"; // Loop up to 10 times for(int i = 0; i < maxSize; i++) // Loop until maxSize { std::cout << "Enter Score #" << (i + 1) << ": "; std::cin >> score; if(score > 0) { scores[i] = score; // assign score to score[i] count++; } else break; } return count; // return something sice fucntion is not void. } double avg(int scores[], int arSize) { double total = 0; for(int i = 0; i < arSize; i++) // Loop as many times as was actually inputted { total += scores[i]; } return total/arSize; } void display(int scores[], int arSize) { std::cout << "Scores: "; for(int i = 0; i < arSize; i++) // Loop as many times as was actually inputted { std::cout << scores[i] << " "; // Print to one line } }
Advertisements | https://rundata.wordpress.com/2013/01/11/c-primer-plus-chapter-7-exercise-2/ | CC-MAIN-2017-22 | refinedweb | 294 | 52.73 |
I have an Admin page that allows the admin to edit user details and change their account. I would also like the admin to be able to logout a different user. I've tried numerous ways to do this with no avail. Any ideas?
View Complete Post
Case is if a user wants to clear their personal information from a page after clicking on a button that they be re-directed to the ASP Login page. This code needs to be developed using Java Script because I want the user to confirm with a confirmation box that they intended to clear their info. I have found resources on MS that pointed to referencing the "System Web Extension" within the Web Config file shown below, which enables Java Script to be able to reference the Authentication Service classes. I am also calling the function show_confirm in the button onclick event to process the message box response.
I also need to redirect a user back to the Login page within this same show_confirm() function without pointing directly to the URL, but instead to the folder where the page is locateded like in VB Server Redirect if possible.
The error message I receive when I run this code is: Microsoft JScript runtime error: 'sys' is undefined.
Any help will be grate full!
Shawn
<system.web.extensions>
<scripting>
<webServices>
<authenticationService enabled="true" />
</webServices>
</scripting>
</system.web.extensions>
function show_confirm(message) {
var r = confirm
Is there a membership function to search a username Like this: LIKE '%UserName%' ?
This is not working.
MembershipUser User = Membership.GetUser("%" + TextBox1.Text + "%");Thank
please let me know whether there is a facility in SQL SERVER EXPRESS 2008 to logout idling users automatically.My problem here is that due to limited no of license other needy users can not log in to the system
-!
I am tring to import data in .net membership database, i have written some code below that import the user profile that includes
username
My code works and does imports all the records in membership aspnet_user table but aspnet_Profile table remain empty, and i think that all the profile information should be copied in aspnet_profile.
Below is my code would really appreciate if some one could help...
<..
When creating a foreign key to the membership user table, is it better to refer to the user_id column or is it ok to use the username column? I assume that the Membership spec enforces uniqueness in the username since there is a method Membership.GetUser(username) that returns one user. Using the id is a bit awkward in code because I have to end up casting all over the place like (int)user.ProviderUserKey. The username would give more information quickly when viewing raw data as well. Just wondering if someone knows a reason why I shouldn't use it as the foreign key.
Thanks
Jason.
hi,?
I'm tryint to convert some existing code to the new framework and according to microsoft the membership stuff has been pushed to System.Web.ApplicationServices so I added the reference and imported the System.Web.Security namespace but MembershipUser continues to error...what am I doing wrong?
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/46764-membership-logout-user.aspx | CC-MAIN-2017-43 | refinedweb | 541 | 62.38 |
QTimeLine
The QTimeLine class provides a timeline for controlling animations. More...
#include <QTimeLine>
This class was introduced in Qt 4.2.
Public Types
Properties
Public Functions
Public Slots
Signals
Reimplemented Protected Functions
Additional Inherited Members
Detailed Description().
Note that.().
By default, this property contains a value of 0.
Access functions:
curveShape : CurveShape
This property holds the shape of the timeline curve.
The curve shape describes the relation between the time and value for the base implementation of valueForTime().
If you have reimplemented valueForTime(), this value is ignored.
By default, this property is set to EaseInOutCurve.
Access functions:
See also valueForTime().
direction : Direction
This property holds the direction of the timeline when QTimeLine is in Running state.
This direction indicates whether the time moves from 0 towards the timeline duration, or from the value of the duration and towards 0 after start() has been called.
By default, this property is set to Forward.
Access functions:
duration : int. = 0 )
Constructs a timeline with a duration of duration milliseconds. parent is passed to QObject's constructor. The default duration is 1000 milliseconds.
QTimeLine::~QTimeLine () [virtual]().
void QTimeLine::finished () [signal]
This signal is emitted when QTimeLine finishes (i.e., reaches the end of its time line), and does not loop.
void QTimeLine::frameChanged ( int frame ) [signal]
QTimeLine emits this signal at regular intervals when in Running state, but only if the current frame changes. frame is the current frame number.
See also QTimeLine::setFrameRange() and QTimeLine::updateInterval.::resume () [slot]Paused ( bool paused ) [slot]().
void QTimeLine::start () [slot]().().
void QTimeLine::stateChanged ( QTimeLine::State newState ) [signal]
This signal is emitted whenever QTimeLine's state changes. The new state is newState.
void QTimeLine::stop () [slot]
Stops the timeline, causing QTimeLine to enter NotRunning state.
void QTimeLine::timerEvent ( QTimerEvent * event ) [virtual protected]
Reimplemented from QObject::timerEvent().
void QTimeLine::toggleDirection () [slot]
Toggles the direction of the timeline. If the direction was Forward, it becomes Backward, and vice verca.
See also setDirection().
void QTimeLine::valueChanged ( qreal value ) [signal].
qreal QTimeLine::valueForTime ( int msec ) const [virtual] | https://developer.blackberry.com/native/reference/cascades/qtimeline.html | CC-MAIN-2015-18 | refinedweb | 335 | 53.07 |
# WAL in PostgreSQL: 1. Buffer Cache
The previous series addressed [isolation and multiversion concurrency control](https://habr.com/ru/company/postgrespro/blog/442804/), and now we start a new series: **on write-ahead logging**. To remind you, the material is based on training courses on administration that Pavel [pluzanov](https://habr.com/ru/users/pluzanov/) and I are creating (mostly [in Russian](https://postgrespro.ru/education/courses), although one course is available [in English](https://postgrespro.com/education/courses)), but does not repeat them verbatim and is intended for careful reading and self-experimenting.
This series will consist of four parts:
* Buffer cache (this article).
* [Write-ahead log](https://habr.com/en/company/postgrespro/blog/494246/) — how it is structured and used to recover the data.
* [Checkpoint and background writer](https://habr.com/en/company/postgrespro/blog/494464/) — why we need them and how we set them up.
* [WAL setup and tuning](https://habr.com/en/company/postgrespro/blog/496150/) — levels and problems solved, reliability, and performance.
> Many thanks to Elena Indrupskaya for the translation of these articles into English.
>
>
Why do we need write-ahead logging?
===================================
Part of the data that a DBMS works with is stored in RAM and gets written to disk (or other nonvolatile storage) asynchronously, i. e., writes are postponed for some time. The more infrequently this happens the less is the input/output and the faster the system operates.
But what will happen in case of failure, for example, power outage or an error in the code of the DBMS or operating system? All the contents of RAM will be lost, and only data written to disk will survive (disks are not immune to certain failures either, and only a backup copy can help if data on disk are affected). In general, it is possible to organize input/output in such a way that data on disk are always consistent, but this is complicated and not that much efficient (to my knowledge, only Firebird chose this option).
Usually, and specifically in PostgreSQL, data written to disk appear to be inconsistent, and when recovering after failure, special actions are required to restore data consistency. Write-ahead logging (WAL) is just a feature that makes it possible.
Buffer cache
============
Oddly enough, we will start a talk on WAL with discussing the buffer cache. The buffer cache is not the only structure that is stored in RAM, but one of the most critical and complicated of them. Understanding how it works is important in itself; besides we will use it as an example in order to get acquainted with how RAM and disk exchange data.
Caching is used in modern computer systems everywhere; a processor alone has three or four levels of cache. In general cache is needed to alleviate the difference in the performances between two kinds of memory, of which one is relatively fast, but there is not enough of it to go round, and the other one is relatively slow, but there is quite enough of it. And the buffer cache alleviates the difference between the time of access to RAM (nanoseconds) and disk storage (milliseconds).
Note that the operating system also has the disk cache that solves the same problem. Therefore, database management systems usually try to avoid double caching by accessing disks directly rather than through the OS cache. But this is not the case with PostgreSQL: all data are read and written using normal file operations.
Besides, controllers of disk arrays and even disks themselves also have their own cache. This will come in useful when we reach a discussion of reliability.
But let's return to the DBMS buffer cache.
It is called like this because it is represented as an array of *buffers*. Each buffer consists of space for one data page (block) plus the header. The header, among the rest, contains:
* The location of the page in the buffer (the file and block number there).
* The indicator of a change to the data on the page, which will sooner or later need to be written to disk (such a buffer is called *dirty*).
* The usage count of the buffer.
* The pin count of the buffer.
The buffer cache is located in the shared memory of the server and is accessible to all processes. To work with data, that is, read or update them, the processes read pages into the cache. While the page is in the cache, we work with it in RAM and save on disk accesses.

The cache initially contains empty buffers, and all of them are chained into the list of free buffers. The meaning of the pointer to the «next victim» will be clear a bit later. A hash table in the cache is used to quickly find there a page you need.
Search for a page in the cache
==============================
When a process needs to read a page, it first attempts to find it in the buffer cache by means of the hash table. The file number and the number of the page in the file are used as the hash key. The process finds the buffer number in the appropriate hash bucket and checks whether it really contains the page needed. Like with any hash table, collisions may arise here, in which case the process will have to check several pages.
> Usage of hash tables has long been a source of complaint. A structure like this enables to quickly find a buffer by a page, but a hash table is absolutely useless if, for instance, you need to find all buffers occupied by a certain table. But nobody has suggested a good replacement yet.
>
>
If the page needed is found in the cache, the process must «pin» the buffer by incrementing the pin count (several processes can concurrently do this). While a buffer is pinned (the count value is greater than zero), it is considered to be used and to have contents that cannot «drastically» change. For example: a new tuple can appear on the page — this does no harm to anybody because of multiversion concurrency and visibility rules. But a different page cannot be read into the pinned buffer.
Eviction
========
It may so happen that the page needed will not be found in the cache. In this case, the page will need to be read from disk into some buffer.
If empty buffers are still available in the cache, the first empty one is chosen. But they will be over sooner or later (the size of a database is usually larger than the memory allocated for the cache), and then we will have to choose one of the occupied buffers, evict the page located there and read the new one into the freed space.
The eviction technique is based on the fact that for each access to a buffer, processes increment the usage count in the buffer header. So the buffers that are used less often than the others have a smaller value of the count and are therefore good candidates for eviction.
The *clock-sweep* algorithm circularly goes through all buffers (using the pointer to the «next victim») and decreases their usage counts by one. The buffer that is selected for eviction is the first one that:
1. has a zero usage count
2. has a zero pin count (i. e. is not pinned)
Note that if all buffers have a non-zero usage count, the algorithm will have to do more than one circle through the buffers, decreasing the values of counts until some of them is reduced to zero. For the algorithm to avoid «doing laps», the maximum value of the usage count is limited by 5. However, for a large-size buffer cache, this algorithm can cause considerable overhead costs.
Once the buffer is found, the following happens to it.
The buffer is pinned to show other processes that it is used. Other locking techniques are used, in addition to pinning, but we will discuss this in more detail later.
If the buffer appears to be dirty, that is, to contain changed data, the page cannot be just dropped — it needs to be saved to disk first. This is hardly a good situation since the process that is going to read the page has to wait until other processes' data are written, but this effect is alleviated by checkpoint and background writer processes, which will be discussed later.
Then the new page is read from disk into the selected buffer. The usage count is set equal to one. Besides, a reference to the loaded page must be written to the hash table in order to enable finding the page in future.
The reference to the «next victim» now points to the next buffer, and the just loaded buffer has time to increase the usage count until the pointer goes circularly through the entire buffer cache and is back again.
See it for yourself
===================
As usual, PostgreSQL has an extension that enables us to look inside the buffer cache.
```
=> CREATE EXTENSION pg_buffercache;
```
Let's create a table and insert one row there.
```
=> CREATE TABLE cacheme(
id integer
) WITH (autovacuum_enabled = off);
=> INSERT INTO cacheme VALUES (1);
```
What will the buffer cache contain? At a minimum, there must appear the page where the only row is added. Let's check this using the following query, which selects only buffers related to our table (by the `relfilenode` number) and interprets `relforknumber`:
```
=> SELECT bufferid,
CASE relforknumber
WHEN 0 THEN 'main'
WHEN 1 THEN 'fsm'
WHEN 2 THEN 'vm'
END relfork,
relblocknumber,
isdirty,
usagecount,
pinning_backends
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('cacheme'::regclass);
```
```
bufferid | relfork | relblocknumber | isdirty | usagecount | pinning_backends
----------+---------+----------------+---------+------------+------------------
15735 | main | 0 | t | 1 | 0
(1 row)
```
Just as we thought: the buffer contains one page. It is dirty (`isdirty`), the usage count (`usagecount`) equals one, and the page is not pinned by any process (`pinning_backends`).
Now let's add one more row and rerun the query. To save keystrokes, we insert the row in another session and rerun the long query using the `\g` command.
```
| => INSERT INTO cacheme VALUES (2);
```
```
=> \g
```
```
bufferid | relfork | relblocknumber | isdirty | usagecount | pinning_backends
----------+---------+----------------+---------+------------+------------------
15735 | main | 0 | t | 2 | 0
(1 row)
```
No new buffers were added: the second row fit on the same page. Pay attention to the increased usage count.
```
| => SELECT * FROM cacheme;
```
```
| id
| ----
| 1
| 2
| (2 rows)
```
```
=> \g
```
```
bufferid | relfork | relblocknumber | isdirty | usagecount | pinning_backends
----------+---------+----------------+---------+------------+------------------
15735 | main | 0 | t | 3 | 0
(1 row)
```
The count also increases after reading the page.
But what if we do vacuuming?
```
| => VACUUM cacheme;
```
```
=> \g
```
```
bufferid | relfork | relblocknumber | isdirty | usagecount | pinning_backends
----------+---------+----------------+---------+------------+------------------
15731 | fsm | 1 | t | 1 | 0
15732 | fsm | 0 | t | 1 | 0
15733 | fsm | 2 | t | 2 | 0
15734 | vm | 0 | t | 2 | 0
15735 | main | 0 | t | 3 | 0
(5 rows)
```
VACUUM created the visibility map (one-page) and the free space map (having three pages, which is the minimum size of such a map).
And so on.
Tuning the size
===============
We can set the cache size using the *shared\_buffers* parameter. The default value is ridiculous 128 MB. This is one of the parameters that it makes sense to increase right after installing PostgreSQL.
```
=> SELECT setting, unit FROM pg_settings WHERE name = 'shared_buffers';
```
```
setting | unit
---------+------
16384 | 8kB
(1 row)
```
Note that a change of this parameter requires server restart since all the memory for the cache is allocated when the server starts.
What do we need to consider to choose the appropriate value?
Even the largest database has a limited set of «hot» data, which are intensively processed all the time. Ideally, it's this data set that must fit in the buffer cache (plus some space for one-time data). If the cache size if less, then intensively used pages will be constantly evicting one another, which will cause excessive input/output. But blindly increasing the cache is no good either. When the cache is large, the overhead costs of its maintenance will grow, and besides, RAM is also required for other use.
So, you need to choose the optimal size of the buffer cache for your particular system: it depends on the data, application, and load. Unfortunately, there is no magic, one-size-fits-all value.
It is usually recommended to take 1/4 of RAM for the first approximation (PostgreSQL versions lower than 10 recommended a smaller size for Windows).
And then we should adapt to the situation. It's best to experiment: increase or reduce the cache size and compare the system characteristics. To this end, you, certainly, need a test rig and you should be able to reproduce the workload. — Experiments like these in a production environment are a dubious pleasure.
> Be sure to look into Nikolay Samokhvalov's presentation at PostgresConf Silicon Valley 2018: [The Art of Database Experiments](https://www.slideshare.net/samokhvalov/the-art-of-database-experiments-postgresconf-silicon-valley-2018-san-jose).
>
>
But you can get some information on what's happening directly on your live system by means of the same `pg_buffercache` extension. The main thing is to look from the right perspective.
For example: you can explore the distribution of buffers by their usage:
```
=> SELECT usagecount, count(*)
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```
```
usagecount | count
------------+-------
1 | 221
2 | 869
3 | 29
4 | 12
5 | 564
| 14689
(6 rows)
```
In this case, multiple empty values of the count correspond to empty buffers. This is hardly a surprise for a system where nothing is happening.
We can see what share of which tables in our database is cached and how intensively these data are used (by «intensively used», buffers with the usage count greater than 3 are meant in this query):
```
=> SELECT c.relname,
count(*) blocks,
round( 100.0 * 8192 * count(*) / pg_table_size(c.oid) ) "% of rel",
round( 100.0 * 8192 * count(*) FILTER (WHERE b.usagecount > 3) / pg_table_size(c.oid) ) "% hot"
FROM pg_buffercache b
JOIN pg_class c ON pg_relation_filenode(c.oid) = b.relfilenode
WHERE b.reldatabase IN (
0, (SELECT oid FROM pg_database WHERE datname = current_database())
)
AND b.usagecount is not null
GROUP BY c.relname, c.oid
ORDER BY 2 DESC
LIMIT 10;
```
```
relname | blocks | % of rel | % hot
---------------------------+--------+----------+-------
vac | 833 | 100 | 0
pg_proc | 71 | 85 | 37
pg_depend | 57 | 98 | 19
pg_attribute | 55 | 100 | 64
vac_s | 32 | 4 | 0
pg_statistic | 27 | 71 | 63
autovac | 22 | 100 | 95
pg_depend_reference_index | 19 | 48 | 35
pg_rewrite | 17 | 23 | 8
pg_class | 16 | 100 | 100
(10 rows)
```
For example: we can see here that the `vac` table occupies most space (we used this table in one of the previous topics), but it has not been accessed long and it is not evicted yet only because empty buffers are still available.
You can consider other viewpoints, which will provide you with food for thought. You only need to take into account that:
* You need to rerun such queries several times: the numbers will vary in a certain range.
* You should not continuously run such queries (as part of monitoring) since the extension momentarily blocks accesses to the buffer cache.
And there is one more point to note. Do not forget either that PostgreSQL works with files through usual OS calls and therefore, double caching takes place: pages get both into the buffer cache of the DBMS and the OS cache. So, not hitting the buffer cache does not always cause a need for actual input/output. But the eviction strategy of OS differs from that of the DBMS: the OS knows nothing about the meaning of the read data.
Massive eviction
================
Bulk read and write operations are prone to the risk that useful pages can be fast evicted from the buffer cache by «one-time» data.
To avoid this, so called *buffer rings* are used: only a small part of the buffer cache is allocated for each operation. The eviction is carried out only within the ring, so the rest of the data in the buffer cache are not affected.
For sequential scans of large tables (whose size is greater than a quarter of the buffer cache), 32 pages are allocated. If during a scan of a table, another process also needs these data, it does not start reading the table from the beginning, but connects to the buffer ring already available. After finishing the scan, the process proceeds to reading the «missed» beginning of the table.
Let's check it. To do this, let's create a table so that one row occupies a whole page — this way it is more convenient to count. The default size of the buffer cache is 128 MB = 16384 pages of 8 KB. It means that we need to insert more than 4096 rows, that is, pages, into the table.
```
=> CREATE TABLE big(
id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
s char(1000)
) WITH (fillfactor=10);
=> INSERT INTO big(s) SELECT 'FOO' FROM generate_series(1,4096+1);
```
Let's analyze the table.
```
=> ANALYZE big;
=> SELECT relpages FROM pg_class WHERE oid = 'big'::regclass;
```
```
relpages
----------
4097
(1 row)
```
Now we will have to restart the server to clear the cache of the table data that the analysis has read.
```
student$ sudo pg_ctlcluster 11 main restart
```
Let's read the whole table after the restart:
```
=> EXPLAIN (ANALYZE, COSTS OFF) SELECT count(*) FROM big;
```
```
QUERY PLAN
---------------------------------------------------------------------
Aggregate (actual time=14.472..14.473 rows=1 loops=1)
-> Seq Scan on big (actual time=0.031..13.022 rows=4097 loops=1)
Planning Time: 0.528 ms
Execution Time: 14.590 ms
(4 rows)
```
And let's make sure that table pages occupy only 32 buffers in the buffer cache:
```
=> SELECT count(*)
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('big'::regclass);
```
```
count
-------
32
(1 row)
```
But if we forbid sequential scans, the table will be read using index scan:
```
=> SET enable_seqscan = off;
=> EXPLAIN (ANALYZE, COSTS OFF) SELECT count(*) FROM big;
```
```
QUERY PLAN
-------------------------------------------------------------------------------------------
Aggregate (actual time=50.300..50.301 rows=1 loops=1)
-> Index Only Scan using big_pkey on big (actual time=0.098..48.547 rows=4097 loops=1)
Heap Fetches: 4097
Planning Time: 0.067 ms
Execution Time: 50.340 ms
(5 rows)
```
In this case, no buffer ring is used and the entire table will get into the buffer cache (along with almost the entire index):
```
=> SELECT count(*)
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('big'::regclass);
```
```
count
-------
4097
(1 row)
```
Buffer rings are used in a similar way for a vacuum process (also 32 pages) and for bulk write operations COPY IN and CREATE TABLE AS SELECT (usually 2048 pages, but not more than 1/8 of the buffer cache).
Temporary tables
================
Temporary tables are an exception from the common rule. Since temporary data are visible to only one process, there's no need for them in the shared buffer cache. Moreover, temporary data exist only within one session and therefore do not need protection against failures.
Temporary data use the cache in the local memory of the process that owns the table. Since such data are available to only one process, they do not need to be protected with locks. The local cache uses the normal eviction algorithm.
Unlike for the shared buffer cache, memory for the local cache is allocated as the need arises since temporary tables are far from being used in many sessions. The maximum memory size for temporary tables in a single session is limited by the *temp\_buffers* parameter.
Warming up the cache
====================
After server restart, some time must elapse for the cache to «warm up», that is, get filled with live actively used data. It may sometimes appear useful to immediately read the contents of certain tables into the cache, and a specialized extension is available for this:
```
=> CREATE EXTENSION pg_prewarm;
```
Earlier, the extension could only read certain tables into the buffer cache (or only into the OS cache). But PostgreSQL 11 enabled it to save the up-to-date state of the cache to disk and restore it after a server restart. To make use of it, you need to add the library to *shared\_preload\_libraries* and restart the server.
```
=> ALTER SYSTEM SET shared_preload_libraries = 'pg_prewarm';
```
```
student$ sudo pg_ctlcluster 11 main restart
```
After the restart, if the value of the *pg\_prewarm.autoprewarm* parameter did not change, the **autoprewarm master** background process will be launched, which will flush the list of pages stored in the cache once every *pg\_prewarm.autoprewarm\_interval* seconds (do not forget to count the new process in when setting the value of *max\_parallel\_processes*).
```
=> SELECT name, setting, unit FROM pg_settings WHERE name LIKE 'pg_prewarm%';
```
```
name | setting | unit
---------------------------------+---------+------
pg_prewarm.autoprewarm | on |
pg_prewarm.autoprewarm_interval | 300 | s
(2 rows)
```
```
postgres$ ps -o pid,command --ppid `head -n 1 /var/lib/postgresql/11/main/postmaster.pid` | grep prewarm
```
```
10436 postgres: 11/main: autoprewarm master
```
Now the cache does not contain the table `big`:
```
=> SELECT count(*)
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('big'::regclass);
```
```
count
-------
0
(1 row)
```
If we consider all its contents to be critical, we can read it into the buffer cache by calling the following function:
```
=> SELECT pg_prewarm('big');
```
```
pg_prewarm
------------
4097
(1 row)
```
```
=> SELECT count(*)
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('big'::regclass);
```
```
count
-------
4097
(1 row)
```
The list of blocks is flushed into the `autoprewarm.blocks` file. To see the list, we can just wait until the **autoprewarm master** process completes for the first time, or we can initiate the flush manually like this:
```
=> SELECT autoprewarm_dump_now();
```
```
autoprewarm_dump_now
----------------------
4340
(1 row)
```
The number of flushed pages already exceeds 4097; the pages of the system catalog that are already read by the server are counted in. And this is the file:
```
postgres$ ls -l /var/lib/postgresql/11/main/autoprewarm.blocks
```
```
-rw------- 1 postgres postgres 102078 jun 29 15:51 /var/lib/postgresql/11/main/autoprewarm.blocks
```
Now let's restart the server again.
```
student$ sudo pg_ctlcluster 11 main restart
```
Our table will again be in the cache after the server start.
```
=> SELECT count(*)
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('big'::regclass);
```
```
count
-------
4097
(1 row)
```
That same **autoprewarm master** process provides for this: it reads the file, divides the pages by databases, sorts them (to make reading from disk sequential whenever possible) and passes them to a separate **autoprewarm worker** process for handling.
[Read on](https://habr.com/en/company/postgrespro/blog/494246/). | https://habr.com/ru/post/491730/ | null | null | 3,698 | 60.35 |
Build the app shown in this screenshot!
This code lab walks you through the process of building a simple web app with Dart and Angular 2. You don't need to know Dart, Angular, or web programming to complete this code lab, but we do assume you have some programming experience.
In this step, you download any software that you need, and learn where to find the sample code.
If you haven't already done so, get the Dart SDK and Dartium.
The Dart SDK download includes several Dart tools that you'll need. If you wish to run the Dart tools from the command line, add <path-to-the-SDK>/dart-sdk/bin to your path.
You will need Dartium to test your app during development.
If you don't have a favorite editor already, try IntelliJ IDEA, community edition. You will install a plugin for Dart development. You can also download Dart plugins for other IDEs and editors.
If this is the first time you've used your IDE with Dart, you'll need to configure the plugin with the location of the Dart SDK and Dartium. See Configuring Dart support for instructions on configuring IntelliJ IDEA, community edition. The Dart Tools page has links where you can find more information about other plugins.
In this step, you create an Angular app, look at its code, and run the app in Dartium.
IntelliJ provides a set of templates for creating a variety of Dart apps. When you create a new app, you can start with one of the application templates, or you can start from scratch.
IntelliJ IDEA takes many seconds to analyze the sources and do other housekeeping. This only happens once. After that, you'll be able to do the usual things, like using F1 to get help on any method, class or field, or Command+B to navigate to a method's declaration, or Shift+F6 to refactor or rename.
What happened?
IntelliJ IDEA creates a
pirate_badge directory and boilerplate files for a basic Angular app. It then runs
pub get, Dart's package management tool, to download the packages that the app depends on. Finally, IntelliJ runs Dart's static analyzer over the code to look for errors and warnings.
Key information
web/index.htmlfile.
my-appis an unknown HTML tag in
index.html. You can ignore this warning.
Get familiar with the structure of a basic Angular app.
In the Project view, on the left, expand the
pirate_badge folder. Then expand the
lib and
web folders to see the following:
For now, you can ignore some of these auto-created files. The following shows the files and directories referenced in this code lab:
pirate_badge/ lib/ app_component.dart app_component.html web/ index.html main.dart pubspec.yaml
As you might expect, the
lib directory contains library files. In an Angular app, component classes are generally created as library files. The
web directory contains the main files for a web app. Double clicking a file opens that file in the editor view.
Key information
Get familiar with the HTML and the Dart code for the skeleton version of the app. Double-click a filename in the project view to see its contents in the editor view. Double click the ellipsis (
... ) highlighted in green to see the hidden text. You should the following code (all copyright notices are omitted here):
web/main.dart
import 'package:angular2/browser.dart'; import 'package:angular2/app_component.dart'; main() { bootstrap(AppComponent); }
Key information
main()function is the single entry point for the app.
main()is a top-level function. A top-level variable or function is one that is declared outside a class definition.
package:syntax specifies the location of the library.
angular2package, which the pub tool downloads from
pub.dartlang.org. Files that call
bootstrap()import
platform/browser.dartfrom the angular package.
app_component.dart, loads the app component,
AppComponent. The
package:pirate_badge/app_component.darttext tells the pub tool to look for this file under the
libdirectory of this app.
app_component.dartfile defines the
AppComponentclass.
bootstrap()starts your app, with the specified component as the app's root component.
web/index.html
<!DOCTYPE html> <html> <head> <title>pirate_badge</title> <script defer</script> <script defer</script> </head> <body> <my-app>Loading...</my-app> </body> </html>
Key information
<script>tag identifies the main file that implements the app. Here, it's the
main.dartfile. The Dart VM launches the app using this file.
packages/browser/dart.jsscript checks for native Dart support (for example, Dartium) and either bootstraps the Dart VM or loads compiled JavaScript instead.
<my-app>selector, it loads an instance of the component associated with that selector. In this example, that's the
AppComponentclass.
lib/app_component.dart
import 'package:angular2/core.dart'; @Component(selector: 'my-app', templateUrl: 'app_component.html') class AppComponent {}
Key information
core.dartlets the app use
Componentand other common Angular types.
@Componentannotation defines
AppComponentas an Angular component.
@Componentconstructor has two named parameters:
selectorand
templateUrl.
selectorparameter specifies a CSS selector for this component. Angular creates and displays an instance of
AppComponentwhen it encounters a
<my-app>element in the HTML.
templateUrlparameter specifies the file that contains the view. To define the HTML within the Dart file as a Dart string, use the
templateparameter instead.
lib/app_component.html
<h1>My First Angular 2 App</h1>
Key information
<my-app>element appears in the app.
pubspec.yaml
name: pirate_badge description: A Dart app that uses Angular 2 version: 0.0.1 environment: sdk: '>=1.13.0 <2.0.0' dependencies: angular2: 2.0.0-beta.17 browser: ^0.10.0 dart_to_js_script_rewriter: ^1.0.1 transformers: - angular2: platform_directives: - 'package:angular2/common.dart#COMMON_DIRECTIVES' platform_pipes: - 'package:angular2/common.dart#COMMON_PIPES' entry_points: web/main.dart - dart_to_js_script_rewriter
Key information
pubspec.yamlfile (often referred to as the pubspec) contains metadata about that package, such as its name.
angular2,
browser, and
dart_to_js_script_rewriterpackages needed by this app are hosted on
pub.dartlang.orgalong with many others.
transformers:field.
platform_directivesdefinition makes some common Angular directives available to every component. An example of a common Angular directive is NgIf, which lets a component change its UI based on a true-false value in your Dart code.
platform_pipesdefinition makes some common Angular pipes available to every component. For example, you can use the built-in PercentPipe to format a number as a percentage.
entry_pointssection tells the Angular transformer which file contains the starting point for the app. Some apps have multiple entry points.
dart_to_js_script_rewritertransformer when building your app for deployment.
pub getinstalls the packages that your app depends on, as defined by your app's pubspec. IntelliJ typically detects that the pubspec has changed and asks you to get the dependencies again. If the buttons don't appear at the top of the editor view, you can find them by opening the pubspec.
pubspec.lockfile, created by
pub get, lists every package that your app directly or indirectly depends on, along with the version number for each package.
Run the app using Dartium.
In IntelliJ, double-click the web/index.html file to open the file in the editor view. Hover your mouse pointer over the code to show the browser icons bar, and click the Dartium icon on the far right.
IntelliJ launches the app in a Dartium window. You should see something like the 1-skeleton.
In this step, you extend the basic Angular app with a badge component, which encapsulates the behavior and appearance of the pirate badge.
This is the hardest step in this code lab. By the end of this step, your app will display a snazzy name badge. The next steps, where you add interactivity, are easy and fun.
libdirectory and select New > File from the menu that pops up.
badge_component.htmlas the filename and click OK.
Key information
badge_component.htmlfile under
lib.
Enter the HTML for the name badge.
<div class="badge"> <div class="greeting">Arrr! Me name is</div> <div class="name">{{badgeName}}</div> </div>
Key information
<head>or
<body>tags.
<div class =...>code defines areas of content that you can style. Later in this step, you add a stylesheet (a CSS file) that defines how the badge should be displayed. This lab doesn't cover how to write CSS. The resources page has information on where you can learn more about CSS styling.
badgeNameto the Dart code as an instance variable in the next section.
{{expression}}, is sometimes called double mustaches. This notation creates a one-way binding between the HTML template and the Dart code.
badgeNamechanges in the Dart code, the value in the UI automatically updates.
Shiver me timbers!
The stylesheet is too long to include here, but we've provided one for you to copy and paste into your project.
libdirectory, and select New -> File from the menu that pops up.
badge_component.cssas the filename and click OK.
badge_component.cssinto the newly created file.
Key information
badge_component.cssfile under
lib.
libdirectory, and select New > Dart File from the menu that pops up.
badge_componentas the filename and click OK.
Import Angular's core library.
import 'package:angular2/core.dart';
Create a BadgeComponent class annotated with
@Component. The class contains a name badge instance variable—replace "Sundar" with your name.
import 'package:angular2/core.dart'; @Component( selector: 'pirate-badge', templateUrl: 'badge_component.html', styleUrls: const ['badge_component.css']) class BadgeComponent { String badgeName = 'Sundar'; }
Key information
@, followed by either a reference to a compile-time constant or a call to a constant constructor.
styleUrlsparameter to the
Componentconstructor specifies the file that contains the CSS styling for this component.
[<value>]creates a list that contains a single value (in this case the name of the CSS file).
constmodifier on the list literal,
const ['badge_component.css'], converts the collection to a compile-time constant. Recall that
Component(...)is a constant constructor, and all arguments to constant constructors must be compile-time constants.
Import the badge component.
import 'package:angular2/core.dart'; import 'badge_component.dart';
Key information
lib, they can import each other using relative paths, for example,
import 'badge_component.dart'. However, any file that's not under
lib(
web/main.dart, for example) must use a
package:URL to import libraries defined under
lib.
BadgeComponentdirective.
Add the text
, directives: const [BadgeComponent] to the
@Component annotation.
@Component(selector: 'my-app', templateUrl: 'app_component.html' , directives: const [BadgeComponent])
Key information
directives:field.
<pirate-badge>selector and loads the BadgeComponent class.
Format the file.
To format the file that is currently open, right-click in the editor view and select Reformat with Dart Style from the menu that pops up. After formatting, the file should look like the following:
@Component( selector: 'my-app', templateUrl: 'app_component.html', directives: const [BadgeComponent]) class AppComponent {}
Key information
Replace the contents of the HTML template:
<h1>Avast, Ye Pirates</h1> <pirate-badge></pirate-badge>
Key information
<pirate-badge>selector, it loads an instance of
BadgeComponent.
webdirectory and select New > File from the menu that pops up.
styles.cssin the dialog that opens and click OK. An empty
styles.cssfile is created under
web.
body { background-color: #F8F8F8; font-family: 'Open Sans', sans-serif; font-size: 14px; font-weight: normal; line-height: 1.2em; margin: 15px; }
Change the title to "Avast, Ye Pirates".
<head> <title>Avast, Ye Pirates</title> ... </head>
Add a reference to the stylesheet.
<head> <title>Avast, Ye Pirates</title> <link rel="stylesheet" href="styles.css"> ... </head>
In IntelliJ, double-click the web/index.html file to open the file in the editor view. Hover your mouse pointer over the code to show the browser icons bar, and click the Dartium icon on the far right.
You should see a name badge with your name, or "Sundar" if you didn't change the name. Assuming your machine has the fonts specified in the CSS file, the badge should look similar to the following:
Next you will add some interactivity. 2-blankbadge.
In this step, you add an input field. As the user types into the input field, the badge updates.
Add a div containing an input field to the top of the file:
<div class="widgets"> <input (input)="updateBadge($event.target.value)" type="text" maxlength="15"> </div>
Key information
<input...>defines an HTML5 input element.
(input)="updateBadge(...)"text creates an event binding.
(input). This event binding listens for an input event on the input field.
updateBadge($event.target.value), appears (in quotes) to the right of the equals sign.
updateBadge()that you'll define soon in Dart code. The argument is the value that the user entered.
$event, contains the value of the raised event. In this example, the event object represents the DOM event object, so the new value resides in
$event.target.value.
maxLength="15"text limits user input to 15 characters.
Delete the hard coded badge name and add an event handler,
updateBadge(), to the BadgeComponent class.
class BadgeComponent { String badgeName = ''; void updateBadge(String inputName) { badgeName = inputName; } }
Key information
updateBadge()with the value entered by the user.
In IntelliJ, double-click the web/index.html file to open the file in the editor view. Hover your mouse pointer over the code to show the browser icons bar, and click the Dartium icon on the far right.
Type into the input field. The name badge updates to display what you've typed. 3-inputnamebadge.
In this step, you add a button that's enabled when the input field is empty. When the user clicks the button, the app displays "Anne Bonney" on the badge.
Add a button to the
widgets div.
<div class="widgets"> <input ...> <button [disabled]="!isButtonEnabled" (click)="generateBadge()"> {{buttonText}} </button> </div>
Key information
[]specify a property on the element. This example references the
disabledproperty on the button.
[disabled] = "!isButtonEnabled"text enables or disables the button element, based on the value of the corresponding Dart variable.
isButtonEnabledto the Dart code in the next section.
(click)="generateBadge()"text sets up an event handler for button clicks. Whenever the user clicks the button, the
generateBadge()method is called. You'll add the
generateBadge()event handler to the Dart code in the next section.
buttonTextvariable will soon be defined in the Dart code. The
<button ...> {{buttonText}} </button>code tells Angular to display the value of
buttonTexton the button.
Add two variables to the BadgeComponent class.
class BadgeComponent { String badgeName = ''; String buttonText = 'Aye! Gimme a name!'; bool isButtonEnabled = true; ... }
Key information
isButtonEnabledwhen determining whether to display the button.
Add a
generateBadge() function.
class BadgeComponent { ... bool isButtonEnabled = true; void generateBadge() { badgeName = 'Anne Bonney'; } ... }
Key information
Modify the
updateBadge() function to toggle the button's state based on whether there is text in the input field.
class BadgeComponent { ... void updateBadge(String inputName) { badgeName = inputName; if (inputName.trim().isEmpty) { buttonText = 'Aye! Gimme a name!'; isButtonEnabled = true; } else { buttonText = 'Arrr! Write yer name!'; isButtonEnabled = false; } }
Key information
In IntelliJ, double-click the web/index.html file to open the file in the editor view. Hover your mouse pointer over the code to show the browser icons bar, and click the Dartium icon on the far right.
Type in the input field. The name badge updates to display what you've typed, and the button is disabled. Remove the text from the input field and the button is enabled. Click the button. The name badge displays "Anne Bon 4-buttonbadge.
A proper pirate name consists of a name and an appellation, such as "Margy the Fierce" or "Renée the Fighter". In this step, you learn about Angular's support for dependency injection as you add a service that returns a pirate name.
libdirectory and select New > Dart File from the menu that pops up.
Add imports to the file.
import 'dart:math' show Random; import 'package:angular2/core.dart';
Key information
showkeyword lets you import only the classes, top-level functions, or top-level variables that you need.
Randomprovides a random number generator.
angular2/core.dartlibrary gives you access to the
Injectableclass that you'll add next.
Add a class declaration below the import and annotate it with
@Injectable().
@Injectable() class NameService { }
Key information
@Injectable()annotation, it generates necessary metadata so that the annotated object is injectable.
Create a class-level Random object.
class NameService { static final Random _indexGen = new Random(); }
Key information
staticannotation makes the field a class variable, rather than an instance variable. Therefore, the random number generator is shared with all instances of this class.
finalvariables must be initialized and cannot change.
_); Dart has no
privatekeyword.
newkeyword indicates a call to the constructor.
Create two lists within the class that provide a small collection of names and appellations to choose from.
class NameService { static final Random _indexGen = new Random(); final List _names = [ 'Anne', 'Mary', 'Jack', 'Morgan', 'Roger', 'Bill', 'Ragnar', 'Ed', 'John', 'Jane' ]; final List _appellations = [ 'Jackal', 'King', 'Red', 'Stalwart', 'Axe', 'Young', 'Brave', 'Eager', 'Wily', 'Zesty'];
Key information
Listclass provides the API for lists.
Provide helper methods that retrieve a randomly chosen first name and appellation.
class NameService { ... String _randomFirstName() => _names[_indexGen.nextInt(_names.length)]; String _randomAppellation() => _appellations[_indexGen.nextInt(_appellations.length)]; }
Key information
=> expr;) syntax is a shorthand for
{ return expr; }.
nextInt()function gets a new random integer from the random number generator.
[and
]) to index into a list.
lengthproperty returns the number of items in a list.
Provide a method that gets a pirate name.
class NameService { ... String getPirateName(String firstName) { if (firstName == null || firstName.trim().isEmpty) { firstName = _randomFirstName(); } return '$firstName the ${_randomAppellation()}'; } }
Key information
'$firstName the ${_randomAppellation()}') lets you easily build strings from other objects. To insert the value of an expression, use
${expr}. You can drop the curly brackets when the expression is an identifier:
$id.
Hook up the name service to the badge component.
Import the name service.
import 'package:angular2/core.dart'; import 'name_service.dart';
Add
NameService as a provider by adding the text
, providers: const [NameService] to the
@Component annotation. After formatting, it should look as follows:
@Component( selector: 'pirate-badge', templateUrl: 'badge_component.html', styleUrls: const ['badge_component.css'], providers: const [NameService]) class BadgeComponent { ... }
Key information
providerspart of
@Componenttells Angular what objects are available for injection in this component.
Add a
_nameService instance variable.
class BadgeComponent { final NameService _nameService; ... }
Key information
_nameServiceis a final field–an instance variable that's declared
final. Final fields must be set before the constructor body runs.
Add a constructor that assigns a value to
_nameService.
class BadgeComponent { ... BadgeComponent(this._nameService); ... }
Key information
@Injectableannotation on NameService, combined with the
providerslist containing NameService, lets Angular create the NameService object.
thisbefore. You can access local instance variables using
this. Dart only uses this when necessary, otherwise Dart style omits it.
this._nameServicetext in the argument list assigns the passed-in parameter to the
_nameServicevariable. Since the assignment happens in the argument list, and the constructor doesn't need to do anything else, the body isn't needed.
_nameServiceweren't a final variable, this code could be replaced with:
BadgeComponent(var nameService) { _nameService = nameService; }
But since _nameService is final, it has to be initialized when it's declared, or in the constructor's argument list.
Add a
setBadgeName() method.
class BadgeComponent { ... void setBadgeName([String newName = '']) { if (newName == null) return; badgeName = _nameService.getPirateName(newName); } }
Key information
[String newName = '']is an optional positional parameter with a default value of the empty string.
Modify the
generateBadge() and
updateBadge() methods.
class BadgeComponent implements OnInit { ... void generateBadge() { setBadgeName(); } void updateBadge(String inputName) { setBadgeName(inputName); ... }
In IntelliJ, double-click the web/index.html file to open the file in the editor view. Hover your mouse pointer over the code to show the browser icons bar, and click the Dartium icon on the far right.
Click the button—each click displays a new pirate name composed of a name and an appellation. 5-piratenameservice.
In this final step, you learn about Dart's support for asynchronous file I/O as you modify the name service to fetch the names and appellations from a JSON file on.
Add imports to the top.
import 'dart:async'; import 'dart:convert'; import 'dart:html'; ...
Key information
dart:asynclibrary provides for asynchronous programming.
dart:convertlibrary provides convenient access to the most commonly used JSON conversion utilities.
dart:htmllibrary contains the classes for all DOM element types, in addition to functions for accessing the DOM.
After the import, add a constant defining the location of the JSON file.
import ... const _namesPath = '';
Replace
_names and
_appellations with empty lists.
class NameService { ... final _names = <String>[]; final _appellations = <String>[]; ... }
Key information
<String>[]is equivalent to new
List<String>().
Listtype on
_namesand
_appellations, but you don't need to.
Add a function,
readyThePirates(), to read the names and appellations from the JSON file.
class NameService { ... Future readyThePirates() async { if (_names.isNotEmpty && _appellations.isNotEmpty) return; var jsonString = await HttpRequest.getString(_namesPath); var pirateNames = JSON.decode(jsonString); _names.addAll(pirateNames['names']); _appellations.addAll(pirateNames['appellations']); } ... }
Key information
readyThePiratesis marked with the
asynckeyword. An asynchronous function returns a Future immediately, so the caller has the opportunity to do something else while waiting for the function to complete its work.
Futureclass (from
dart:async) provides a way to get a value in the future. (Dart Futures are similar to JavaScript Promises.)
HttpRequestis a
dart:htmlutility for retrieving data from a URL.
getString()is a convenience method for doing a simple GET request that returns a string.
getString()is asynchronous. It sets up the GET request and returns a Future that completes when the GET request is finished.
awaitexpression, which can only be used in an async function, causes execution to pause until the GET request is finished (when the Future returned by
getString()completes).
Enable the input field depending on the value of a property.
<div class="widgets"> <input [disabled]="!isInputEnabled" (input)="updateBadge($event.target.value)" type="text" maxlength="15"> ... </div>
Key information
isInputEnabledrefers to a property that you'll add to this component's Dart file.
Load the pirate names and appellations from a JSON file. When successfully loaded, enable the UI.
At startup, disable the button and input field.
class BadgeComponent { ... bool isButtonEnabled = false; bool isInputEnabled = false; }
After the constructor, add a function to get the names from the JSON file, handling both success and failure.
class BadgeComponent implements OnInit { ... ngOnInit() async { try { await _nameService.readyThePirates(); //on success isButtonEnabled = true; isInputEnabled = true; } catch (arrr) { badgeName = 'Arrr! No names.'; print('Error initializing pirate names: $arrr'); } } ... }
Key information
ngOnInit()is one of the lifecycle hooks available in Angular. Angular calls
ngOnInit()after the component is initialized.
async, so this function can use the
awaitkeyword.
readyThePirates()function, which immediately returns a Future.
readyThePirates()successfully completes, set up the UI.
tryand
catchto detect and handle errors.
In IntelliJ, double-click the web/index.html file to open the file in the editor view. Hover your mouse pointer over the code to show the browser icons bar, and click the Dartium icon on the far right.
The app should work as before, but this time the pirate name is constructed from the JSON file. 6-readjsonfile.
You've written an Angular 2 for Dart web app!
Now that you've written your app, what do you do next? Here are some suggestions.
Work through the QuickStart and Developer Guides on angular.io.
You can test your app in other browsers by right-clicking index.html and choosing Open in Browser from the pop up menu.
To compile the app into JavaScript that runs in any modern browser, use pub build. Build the app in IntelliJ, as follows:
pubspec.yamland click Build....
pub build.
Learn more about Dart from the language tour and library tour. | https://codelabs.developers.google.com/codelabs/ng2-dart/index.html?index=..%2F..%2Findex | CC-MAIN-2017-13 | refinedweb | 3,891 | 52.36 |
Hi there, I am in a first year programming class at Dalhousie University (Halifax, Nova Socita, Canada), and I need a little help with an assignment. The exercise is as follows:
"[10 Points] Create a number guessing game. First, ask the user for some positive integer, n. Ask the user to select any number between 0 and n and have your program try to guess it. After each guess, ask the user if the guess is correct. If incorrect, your program should ask the user if the user's chosen number is higher or lower than the guess. Have the program keep track of the number of guesses it needed to find the answer. Try to find the answer in as few guesses as possible! An excellent implementation should be able to guess a number between 0 and 1000 within 10 tries. Remember to add descriptive comments to your code."
Heres what I have:
import random n = input("Hi! Please enter a positive integer!") print "Thanks! Now, pick a number between 0 and your number!" guess = random.randint(0, n) attempts = 0 print "Is", guess, "your number?" answer = input ("1 = yes, 2 = no") while answer == 2: highlow = input ("Is the number higher or lower? 1 = higher 2 = lower") if highlow == 1: guess1 = random.randint(guess, 50) print "Is", guess1, "your number?" answer = input ("1 = yes, 2 = no") attempts += 1 if highlow == 2: guess1 = random.randint(0, guess) print "Is", guess1, "your number?" answer = input ("1 = yes, 2 = no") attempts += 1 if answer == 1: print "I WIN!!!!!" print "I guessed your number in", attempts, "guesses!"
The program works, however, is not very smart. As you can see, I set it to guess a number between 0 and 50 rather than 1000 and it still cannot guess the number is a decent number of tries. I know whats wrong with it, its that I am having the program guess random numbers between 0 and the initial guess or between the initial guess and 1000, which can keep the computer guessing for ages. How can I make the program smarter so that it keeps track of its guesses and makes it next guess based on that, rather than just guessing another random number? Would Appreciate any help! | https://www.daniweb.com/programming/software-development/threads/325878/number-guessing | CC-MAIN-2017-09 | refinedweb | 374 | 74.08 |
On 12/16/2016 10:01 AM, Michal Hocko wrote:>> about that. I am just wondering whether this has been motivated by any> particular bug recently. I do not seem to remember any such an issue for> quite some time.Yes, I've been hitting a use-after-free with trinity that happens whenthe OOM killer reaps a task. I can reproduce it reliably within a fewseconds, but with the amount of refs and syscalls going on I haven'tbeen able to figure out what's actually going wrong (to put things intoperspective the refcounts goes into the thousands before eventuallydropping down to 0 and trying to trace_printk() each get/put results inseveral hundred megabytes of log files).The UAF itself (sometimes a NULL pointer deref) is on a struct file(sometimes in the page fault path, sometimes in clone(), sometimes inexecve()), and my initial debugging lead me to believe it was actually aproblem with mm_struct getting freed prematurely (hence this patch). Butdisappointingly this patch didn't turn up anything so I must reevaluatemy suspicion of an mm_struct leak.I don't think it's a bug in the OOM reaper itself, but either of thefollowing two patches will fix the problem (without my understand how orwhy):diff --git a/mm/oom_kill.c b/mm/oom_kill.cindex ec9f11d4f094..37b14b2e2af4 100644--- a/mm/oom_kill.c+++ b/mm/oom_kill.c@@ -485,7 +485,7 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) */ mutex_lock(&oom_lock);- if (!down_read_trylock(&mm->mmap_sem)) {+ if (!down_write_trylock(&mm->mmap_sem)) { ret = false; goto unlock_oom; }@@ -496,7 +496,7 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) * and delayed __mmput doesn't matter that much */ if (!mmget_not_zero(mm)) {- up_read(&mm->mmap_sem);+ up_write(&mm->mmap_sem); goto unlock_oom; }@@ -540,7 +540,7 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) K(get_mm_counter(mm, MM_ANONPAGES)), K(get_mm_counter(mm, MM_FILEPAGES)), K(get_mm_counter(mm, MM_SHMEMPAGES)));- up_read(&mm->mmap_sem);+ up_write(&mm->mmap_sem); /* * Drop our reference but make sure the mmput slow path is called from a--OR--diff --git a/mm/oom_kill.c b/mm/oom_kill.cindex ec9f11d4f094..559aec0acd21 100644--- a/mm/oom_kill.c+++ b/mm/oom_kill.c@@ -508,6 +508,7 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) */ set_bit(MMF_UNSTABLE, &mm->flags);+#if 0 tlb_gather_mmu(&tlb, mm, 0, -1); for (vma = mm->mmap ; vma; vma = vma->vm_next) { if (is_vm_hugetlb_page(vma))@@ -535,6 +536,7 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) &details); } tlb_finish_mmu(&tlb, 0, -1);+#endif pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n", task_pid_nr(tsk), tsk->comm, K(get_mm_counter(mm, MM_ANONPAGES)),Maybe it's just the fact that we're not releasing the memory and so someother bit of code is not able to make enough progress to trigger thebug, although curiously, if I just move the #if 0..#endif insidetlb_gather_mmu()..tlb_finish_mmu() itself (so just calling tlb_*()without doing the for-loop), it still reproduces the crash.Another clue, although it might just be a coincidence, is that it seemsthe VMA/file in question is always a mapping for the exe file itself(the reason I think this might be a coincidence is that the exe filemapping is the first one and we usually traverse VMAs starting with thisone, that doesn't mean the other VMAs aren't affected by the sameproblem, just that we never hit them).I really wanted to figure out and fix the bug myself, it's a great wayto learn, after all, instead of just sending crash logs and lettingsomebody else figure it out. But maybe I have to admit defeat on this> that uncommon to take an mm reference from one context and release it> from a different one. But I might be missing your point here.An owner is somebody who knows the pointer and increments the referencecounter for it.You'll notice a bunch of functions just take a temporary on-stackreference (e.g. struct mm_struct *mm = get_task_mm(tsk); ...mmput(&mm)), in which case it's the function that owns the referenceuntil the mmput.Some functions take a reference and stash it in some heap object, anexample from the patch could be 'struct vhost_dev' (which has a ->mmfield), and it does get_task_mm() in an init function and mmput() in acleanup function. In this case, it's the struct which is the owner ofthe reference, for as long as ->mm points to something non-NULL. Thiswould be an example of taking the reference in one context and releasingit in a different one. I guess the point is that we must always releasea _specific_ reference when we decrement a reference count. Yes, it's anumber, but that number does refer to a specific reference that wastaken at some point in the past (and we should knowwhich reference this is, otherwise we don't actually "have it").We may not be used to thinking of reference counts as actual places ofreference, but that's what it is, fundamentally. This patch just makesit very explicit what the owners are and where ownership transfers takeplace.>>:[...]>> This all sounds way too intrusive to me so I am not really sure this is> something we really want. A nice thing for debugging for sure but I am> somehow skeptical whether it is really worth it considering how many> those ref. count bugs we've had.Yeah, I agree it's intrusive. And it did start out as just a debuggingpatch, but I figured after having done all the work I might as wellslap on a changelog and submit it to see what people think.However, it may have some value as documentation of who is the owner ofeach reference and where/when those owners change. Maybe I should justextract that knowledge and add it in as comments instead.Thanks for your comments!Vegard | https://lkml.org/lkml/2016/12/16/122 | CC-MAIN-2020-45 | refinedweb | 966 | 57.81 |
Modules allow you to use standard libraries that extend PowerShell's functionality. They are easier to use than to create, but if you get the hang of creating them, your code will be more easily-maintained and re-usable. Let Michael Sorens once more be your guide through PowerShell's 'Alice in Wonderland' world.
Contents
- Encapsulation
- Best Practices for Module Design
- Name Collisions – Which One to Run?
- Conclusion
In my previous PowerShell exploration (A Study in PowerShell Pipelines, Functions, and Parameters) I concentrated on describing how parameters were passed to functions, explaining the bewildering intricacies on both sides of the function interface (the code doing the calling and the code inside the function doing the receiving). I didn’t mention how to go about actually creating a function because it was so simple to do that it could safely be left as an extracurricular exercise. With modules, by contrast, the complexity reverses; it is more intricate to create a module than to use a module, so that is where you are heading now. The first half of this article guides you along the twisted path from raw code to tidy module; the second half introduces a set of best practices for module design.
Encapsulation
As you likely know, encapsulation makes your code more manageable. Encapsulation is the process of separating an interface from its implementation by bundling data and code together and exposing only a well-defined portion of it. The following sections walk you along the road to encapsulation in PowerShell.
's Adventures in Wonderland (Lewis Carroll)
Refactor Inline Code into Functions
Encapsulation encourages you to convert a single code sequence with inordinate detail into a more digestible and simpler piece of code (Figure 1).
Figure 1: Refactoring inline code to a function
Refactoring the first example into the second ended up only moving one or two lines of code (depending on how you count it) into the separate Match-Expression function. But look at how much easier it is to comprehend the code! The main program lets a reader of your code observe that Match-Expression uses the given regular expression to find several values from a given string. It does not reveal how—the Match-Expression function hides the details of how the match operator works. And that's great, because your reader does not care. Before you argue the point, consider a different context such as some .NET-supplied function, e.g. String.Join. Except in rare circumstances you simply do not care about the implementation of String.Join; you just need to know what it does.
Refactoring to functions is useful and important to do, of course, but there is one cautionary note: if instead of the simple Match-Expression function you have a more complex function that includes several support functions and variables, all of those support objects are polluting your current scope. There is nothing to prevent another part of your script from using one of these support functions that was specifically designed to be used only by Match-Expression (or rather its complex cousin). Or worse, in your zeal to refactor into smaller and smaller functions you might create a function with the same name as a built-in cmdlet; your function would supersede the built-in one. The next section returns to this consideration after a fashion.
Refactor Functions into Files
Now you have this Match-Expression function that came in quite handy in your script. You find it so useful, in fact, that you want to use it in other scripts. Good design practice dictates the DRY principle: Don't Repeat Yourself. So rather than copying this function into several other script files, move it into its own file (Expressions.ps1) and reference it from each script. Modify the above example to use dot-sourcing (explained in the Using Dot Source Notation with Scope section of the help topic about_Scopes) to incorporate the contents of Expressions.ps1 (Figure 2).
Figure 2: Refactoring an inline function to a separate file
The code on the right is exactly equivalent to the code on the left. The elegance of this is that if you want to change the function you have only one piece of code to modify and the changes are automatically propagated everywhere you have referenced the file.
Dot-sourcing reads in the specified file just as if it was in the file.
Dot-Sourcing Pitfall
There is, however, a potential problem. As you have just seen, dot-sourcing syntax includes just two pieces: a dot (hence the name!) and a file path. In the example above I show the file path as a dot as well, but there it means current directory. The current directory is where you happen to be when you invoke the script; it is not tied to the script's location at all! Thus, the above only works because I specifically executed the script from the script directory. What you need then is a way to tell PowerShell to look for the Expressions.ps1 file in the same directory as your main script—regardless of what your current directory is.
A web search on this question leads you to the seemingly ubiquitous script that originated with this post by Jeffrey Snover of the PowerShell team:
function Get-ScriptDirectory
{
$Invocation = (Get-Variable MyInvocation -Scope 1).Value
Split-Path $Invocation.MyCommand.Path
}
If you include the above in your script (or in a separate file and dot-source it!) then add this line to your script:
Write-Host (Get-ScriptDirectory)
…it will properly display the directory where your script resides rather than your current directory. Maybe. The results you get from this function depend on where you call it from!?”
--Alice. Chapter 12, Through the Looking Glass (Lewis Carroll)
It failed immediately when I tried it! I was surprised, because I found this code example proliferated far and wide on the web. I soon discovered that it was because I used it differently to Snover's example: Instead of calling it at the top-level in my script, I’d called it from inside another function in a way I refer to as “nested twice” in the following table. It took just a simple tweak to make Get-ScriptDirectory more robust: You just need to change from parent scope to script scope; -scope 1 in the original function definition indicates parent scope and $script in the modified one indicates script scope.
function Get-ScriptDirectory
{
Split-Path $script:MyInvocation.MyCommand.Path
}
To illustrate the difference between the two implementations, I created a test vehicle that evaluates the target expression in four different ways (bracketed terms are keys in the table that follows):
- Inline code [inline]
- Inline function, i.e. function in the main program [inline function]
- Dot-sourced function, i.e. the same function moved to a separate .ps1 file [dot source]
- Module function, i.e. the same function moved to a separate .psm1 file [module]
The first two columns in the table define the scenario; the last two columns display the results of the two candidate implementations of Get-ScriptDirectory. A result of script means that the invocation correctly reported the location of the script. A result of module means the invocation reported the location of the module (see next section) containing the function rather than the script that called the function; this indicates a drawback of both implementations such that you cannot put this function in a module to find the location of the calling script. Setting this module issue aside, the remarkable observation from the table is that using the parent scope approach fails most of the time (in fact, twice as often as it succeeds)!
(You can find my test vehicle code for this in my post on StackOverflow.)
Dot-Sourcing: The Dark Side
Dot-sourcing has a dark side, too, however. Consider again if instead of the simple Match-Expression function you have a more complex function that includes several support functions and variables. Moving those support functions out of the main file and hiding them (i.e. encapsulating them) in the file you will include with dot-sourcing is clearly a good thing to do. But the problem of dot-sourcing, then, is precisely the same as the benefit:
Dot-sourcing reads in the specified file just as if it was in the file.
That means dot-sourcing pollutes your main file with all of its support functions and variables—it is not actually hiding anything. In fact, the situation is far worse with dot-sourcing than it was with just refactoring in the same file: here the detritus is hidden from you (because you no longer see it in your main file) yet it is present and polluting your current scope all the same. But do not despair! The next section provides a way out of this quagmire.
Refactor Functions into Modules
A module is nothing more than a PowerShell script with a .psm1 extension instead of a .ps1 extension. But that small change also addresses both of the issues just discussed for dot-sourcing a script. Figure 3 returns to the familiar example again. The contents of Expressions.ps1 and Expressions.psm1 are identical for this simple example. The main program uses the Import-Module cmdlet instead of the dot-sourcing operator.
Figure 3: Refactoring code from dot-sourcing to module importation
Notice that the Import-Module cmdlet is not referencing a file at all; it references a module named Expressions, which corresponds to the file Expressions.psm1 when it is located under one of these two system-defined locations (See Storing Modules on Disk under Windows PowerShell Modules):
Thus, the whole issue of current directory and script directory, a problem for dot-sourcing, becomes moot for modules. To use modules you must copy them into one or the other of these system repositories to be recognized by PowerShell. Once deposited you then use the Import-Module cmdlet to expose its interface to your script. (Caveat: you cannot just put Expressions.psm1 in either repository as an immediate child; you must put it in a subdirectory called Expressions. See the next section for the rules on this interesting topic.)
The second issue with dot-sourcing and inline code was pollution due to “faux encapsulation”. A module truly does encapsulate its contents. Thus, you can have as much support code as you want in your module; your main script that imports the module will be able to see only what you want exposed. By default, all functions are exposed. So if you do have some functions that you want to remain private, you have to use explicit exporting instead of the default. Also, if you want to export aliases, variables, or cmdlets, you must use explicit exporting. To explicitly specify what you want to export (and thus what a script using the module can see from an import) use the Export-ModuleMember cmdlet. Thus, to make Expressions.psm1 use explicit exporting, add this line to the file:
Export-ModuleMember Match-Expression
Best Practices for Module Design
Before you launch into creating modules willy-nilly, there are a few more practical things you should know, discussed next.
Extracting Information about Modules
Before you can use modules you have to know what you already have and what you can get. Get-Module is the gatekeeper you need. With no arguments, Get-Module lists the loaded modules. (Once you load a module with Import-Module you then can use its exported members.) Here is an example:
ModuleType Name ExportedCommands
---------- ---- ----------------
Manifest Assertions {Set-AbortOnError, Assert-Expression,Set-MaxExpressionDisplayLe…
Manifest IniFile Get-IniFile
Manifest Pscx {}
Script Test-PSVersion {}
Script TestParamFunctions {}
Manifest BitsTransfer {}
The module type may be manifest, script, or binary (more on those later). The exported commands list identifies all the objects that the module writer exported with explicit exports. An empty list indicates default or implicit export mode, i.e. all functions in the module.
Guideline #1: Use explicit exports so Get-Module can let your user know what you are providing
Get-Module has a ListAvailable parameter to show you what is available to load, i.e. what you have correctly installed into one of the two system repository locations provided earlier. The output format is identical to that shown just above.
The default output of Get-Module shows just the three properties above, but there are other ones that are important as well. To see what other interesting properties you could extract from Get-Module, pipe it into the handy Get-Member cmdlet:
Get-Module | Get-Member
Notable properties you find in the output include Path (the path to the module file), Description (a brief summary of the module), and Version. To display these properties with Get-Module, switch from its implicit use of Format-Table to explicit use, where you can enumerate the fields you want:
Get-Module -ListAvailable | Format-Table Name, Path, Description, Version
Name Path Description Version
---- ---- ----------- -------
Assertion C:\Users\ms\Documents\Wi... Aborting and non-abortin... 1.0
EnhancedChildItem C:\Users\ms\Documents\Wi... Enhanced version of Get-... 1.0
inifile C:\Users\ms\Documents\Wi... INI file reader 1.0
SvnKeywords C:\Users\ms\Documents\Wi... 0.0
MetaProgramming C:\Users\ms\Documents\Wi... MetaProgramming Module 0.0.0.1
TestParamFunctions C:\Users\ms\Documents\Wi... 0.0
AppLocker C:\Windows\system32\Wind... PowerShell AppLocker Module 1.0.0.0
BitsTransfer C:\Windows\system32\Wind... 1.0.0.0
PSDiagnostics C:\Windows\system32\Wind... 1.0.0.0
TroubleshootingPack C:\Windows\system32\Wind... Microsoft Windows Troubl... 1.0.0.0
If you actually want to see the value of some fields, though, particularly longer fields like Path or Description, it might behoove you to use Format-List rather than Format-Table:
Get-Module -ListAvailable | Format-List Name, Path, Description, Version
Name : Assertion
Path : C:\Users\ms\Documents\WindowsPowerShell\Modules\CleanCode\Assertion\Assertion.psm1
Description : Aborting and non-aborting validation functions for testing.
Version : 1.0
Name : EnhancedChildItem
Path : C:\Users\ms\Documents\WindowsPowerShell\Modules\CleanCode\EnhancedChildItem\
EnhancedChildItem.psd1
Description : Enhanced version of Get-ChildItem providing -ExcludeTree, -FullName, -Svn,
-ContainersOnly, and -NoContainersOnly.
Version : 1.0
etc. . .
The Get-Member cmdlet quite thoroughly tells you what you can learn about a module but if, like me, you occasionally prefer to bore down into the raw details, you can follow the object trail to its source. First, you can determine that the .NET type of an object returned by Get-Module is called PSModuleInfo via this command:
(Get-Module)[0].GetType().Name
Lookup PSModuleInfo on MSDN and there you can see that the list of public properties are just what Get-Member showed you. On MSDN, however, you can dig further. For example, if you follow the links for the ModuleType property, you can drill down to find that the possible values are Binary, Manifest, and Script, as mentioned earlier.
Finally, for loaded modules (i.e. not just installed but actually loaded) you can explore further with the Get-Command cmdlet, specifying the module of interest:
Get-Command -Module Assertion
CommandType Name Definition
----------- ---- ----------
Function Assert-Expression param($expression, $expected)…
Function Get-AssertCounts …
Function Set-AbortOnError param([bool]$state)…
Function Set-MaxExpressionDisplayLength param([int]$limit = 50)…
Again, you can use Get-Member to discover what other properties Get-Command could display.
Installing Modules
Now that you know how to see what you have installed here are the important points you need to know about installation. As mentioned earlier you install modules into either the system-wide repository or the user-specific repository. Whichever you pick, its leaf node is Modules so in this discussion I simply use “Modules” to indicate the root of your repository. The table shows what Get-Module and Import-Module can each access for various naming permutations.
Standard module installation (line 1 in the table) requires that you copy your module into this directory:
Modules/module-name/module-name.psm1
That is, whatever your modules base file name is, the file must be stored in a subdirectory of the same name under Modules. If instead you put it in the Modules root without the subdirectory:
Modules/module-name.psm1
…PowerShell will not recognize the module (line 2 in the table)! This peculiar behavior is probably what you would try first, so it is a common source of frustration with modules not being recognized.
Alice felt dreadfully puzzled. The Hatter's remark seemed to her to have no sort of meaning in it, and yet it was certainly English. “I don't quite understand you,” she said, as politely as she could.
--Alice, Chapter 7, Alice's Adventures in Wonderland (Lewis Carroll)
Putting a module in the Modules directory is not good enough; only in an eponymous subfolder will it make be recognized by PowerShell.
Line 3 illustrates that you can use namespaces rather than clutter up your Modules root with a hodgepodge of modules from different sources. When you use Get-Module, though, the default output shows just the name; you must look at the Path property of Get-Module if you want to see the namespace as well. If you ask Get-Module to find a particular module, you again provide only the name. However, when you use Import-Module you specify the path relative to the Modules root.
Note that namespaces are purely a convention you may or may not choose to use; PowerShell has no notion of namespaces per se (at least as of version 2—Dmitry Sotnikov has made a plea via Microsoft Connect to add namespaces in future versions; see We Need Namespaces!).
Line 4 extends the case of line 3, showing that you can make your namespace as nested as you like—as long as your modules end up in like-named leaf directories.
Given the above discourse, here is the next cardinal rule for modules:
Guideline #2: Install a module in an eponymous subdirectory under your Modules root
Line 5 in the table presents an interesting corner case showing what happens if you violate Guideline #2. The module is invisible to Get-Module -ListAvailable yet you can still load it by specifying the differing subdirectory name and module name. This is, of course, not advisable.
Associating a Manifest to a Module
The first half of the article showed the progression from inline code to script file to module file. There is a further step – introducing a manifest file associated with the module file. You need to use a manifest to specify details of your module that may be accessed programmatically. Recall that when discussing Get-Module one example showed how to get additional properties beyond the default – including description and version. But in the example's output, some entries showed an empty description and a 0.0 version. Both description and version come from the manifest file; a module lacking a manifest has just those default values.
To create a manifest file, simply invoke the New-ModuleManifest command and it will prompt you to enter property values. If you do this in a standard PowerShell command-line window, you receive a series of prompts for each property. If, on the other hand, you use the PowerGUI script editor it presents a more flexible pop-up dialog, as shown in figure 4. I also entered a couple other common properties, author and copyright.
Figure 4: New-ModuleManifest dialog from PowerGUI Script Editor
The ModuleToProcess property must reference your module script file. Upon selecting OK, the dialog closes and the manifest file is created at the location you specified for the Path property. The path of the manifest file must also follow rule #2, this time with a .psd1 extension. Once the manifest exists, PowerShell now looks to the manifest whenever you reference the module, notably in both the Get-Module and Import-Module cmdlets. You can confirm this with Get-Module: recall that Get-Module displays the ModuleType property by default; now you will see it display Manifest instead of Script for the ModuleType.
Guideline #3: Use a manifest so your users can get a version and description of your module
Once you create your manifest, or at any time later, you can use Test-ModuleManifest to validate it. This cmdlet checks for existence of the manifest and it verifies any file references in the manifest. For more on manifests, see How to Write a Module Manifest on MSDN.
Unapproved Verbs
If you imported the Expressions.psm1 module given earlier, you likely received this warning message:
WARNING: Some imported command names include unapproved verbs which might make them less discoverable. Use the Verbose parameter for more detail or type Get-Verb to see the list of approved verbs.
PowerShell wants to encourage users to use standard naming conventions so it is easier for everybody who uses external modules to know what to expect. Cmdlets and functions should use the convention action-noun (e.g. Get-Module). PowerShell does not make any guesses about your choice of nouns, but it is particular about your choice of actions. You can see the list of approved actions, as the warning about indicates, by executing the Get-Verb cmdlet.
Note that I use the term action rather than verb in this paragraph, because PowerShell's definition of verb is rather non-standard(!). Humpty Dumpty really had the right idea – I use this quote frequently…
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean – neither more nor less.”
-- Chapter 6, Through the Looking Glass (Lewis Carroll)
To PowerShell a verb is “a word that implies an action”, so a construct such as New-ModuleManifest qualifies. See Cmdlet Verbs in MSDN for more details on naming.
Guideline #4: Name your functions following PowerShell conventions
Documenting a Module
The help system in PowerShell is a tremendous boon: without leaving the IDE (or PowerShell prompt) you can immediately find out almost anything you care to know about any PowerShell cmdlet (e.g. Get-Help Get-Module) or general topic (e.g. Get-Help about_modules). When you create a module you can easily provide the same level of professional support for your own functions. Implementing the help is the easy part; writing your content is what takes most of your time.
To implement the integrated help support, you add documentation comments (“doc-comments”) to your module script file just like you would with your other favorite programming language. main scripts (ps1 files); it does not apply to modules (psm1 files). What you will see here is that you must add a special comment section that looks like this for each function:
<#
.< help keyword>
< help content>
. . .
#>
…and that you can place that in any of three positions relative to your function body. You can then pick your relevant help keywords from the subsequent section, Comment-Based Help Keywords.
One small annoyance (hard to say if it is a feature or a defect, since it documents it as both in adjoining paragraphs!): for each function parameter, Get-Help displays a small table of its attributes. But the default value is never filled in! Here is an example from Get-Module's ListAvailable parameter:
-ListAvailable [<SwitchParameter>]
Gets all of the modules that can be imported into the session. Get-Module
gets the modules in the paths specified by the $env:PSModulePath
environment variable.
Without this parameter, Get-Module gets only the modules that have been
imported into the session.
Required? false
Position? named
Default value
Accept pipeline input? false
Accept wildcard characters? false
You can see this feature/issue documented under Autogenerated Content > Parameter Attribute Table. The documentation is certainly thorough on this point, though, even to the extent of providing a workaround—it suggests you mention your default in your help text. And that is just what all the standard .NET cmdlets do!
PowerShell provides support for help on individual modules, allowing Get-Help to access your help text, as you have just seen. If you produce libraries rather than just individual modules you will next be looking for the way to create an API documentation tree that you can supply with your library. Wait for it… sigh. No, PowerShell does not provide any such took like javadoc for Java or Sandcastle for .NET. Well, I found that rather unsatisfactory so I undertook to create one. My API generator for PowerShell (written in PowerShell, of course!) is in my PowerShell library, scheduled for release in the fourth quarter of 2011. You can find it here on my API bookshelf, alongside my libraries in five other languages. As an enthusiastic library builder, I have created similar API generators for Perl (see Pod2HtmlTree) and for T-SQL (see XmlTransform). (Note that the Perl version is Perl-specific while the T-SQL one is my generic XML conversion tool configured to handle SQL documentation, described in Add Custom XML Documentation Capability To Your SQL Code.)
Guideline #5: Add polish to your modules by documenting your functions
Enhancing Robustness
I would be remiss if I did not add a mention, however brief, of an important guideline for any PowerShell script, module or otherwise. Let the compiler help you—turn on strict mode with Set-StrictMode:
Set-StrictMode -Version Latest
Guideline #6: Tighten up your code by enforcing strict mode
It is regrettable that that setting is not on by default.
Name Collisions – Which One to Run?
If you create a function with the same name as a cmdlet, which one does PowerShell pick? To determine that you need to know the execution precedence order (from about_Command_Precedence):
- Alias
- Function
- Cmdlet
- Native Windows commands
If you have two items at the same precedence level, such as two functions or two cmdlets with the same name, the most recently added one has precedence. (Hence the desire by some to have namespaces introduced in PowerShell, as mentioned earlier.)
When you add a new item with the same name as another item it may replace the original or it may hide the original. Defining a function with the same name as an existing cmdlet, for example, hides the cmdlet, but does not replace it; the cmdlet is still accessible if you provide a fully-qualified name. To determine the name, examine the PSSnapin and Module properties of the cmdlet:
Get-Command Get-ChildItem | Format-List -property Name, PSSnapin, Module
Name : Get-ChildItem
PSSnapIn : Microsoft.PowerShell.Management
Module :
The fully qualified name, then, for the Get-ChildItem cmdlet is:
Microsoft.PowerShell.Management\Get-ChildItem
To avoid naming conflicts in the first place, import a module with the Prefix option to the Import-Module cmdlet. If you have created, for example, a new version of Get-Date in a DateFunctions module and run this:
Import-Module -name DateFunctions -prefix Enhanced
…then your Get-Date function is now mapped to Get-EnhancedDate, i.e., the action in the command is affixed with the prefix you specified.
Conclusion
Modules let you organize your code well and to make your code highly reusable. Now that you are aware of them, you will probably start noticing code smells that shout “Module!”. That is, be on the lookout for chunks of code that perform a useful calculation but are generic enough to deserve separating out from your main code. I have found that taking the effort to move generic functionality into a separate module forces me to think about it in isolation and often leads me to find corner cases that I missed in the logic. Also, modularizing lets you then focus more fine-grained and more specific unit tests on that code as well. For further reading, be sure to take a look at the whole section on modules on MSDN at Writing a Windows PowerShell Module. Finally, for a smattering of open-source modules, see Useful PowerShell Modules. | https://www.simple-talk.com/dotnet/.net-tools/further-down-the-rabbit-hole-powershell-modules-and-encapsulation/ | CC-MAIN-2014-15 | refinedweb | 4,634 | 53.1 |
this also.
Create a new J# project.
Add your java code .
Maximum of the code syntax is same .
Thank you for your good direction. I am considering to create a new J# project. The problem now is how to add a JAR file to the J# project? If I can use these JAR files then everything is OK.
This course will introduce you to the interfaces and features of Microsoft Office 2010 Word, Excel, PowerPoint, Outlook, and Access. You will learn about the features that are shared between all products in the Office suite, as well as the new features that are product specific.
Try this link.
Regards
Prakash
VJ# is a choice, but dealing with it is not trivial in a short time (and I am in lack of time now, deadline is coming for the master thesis). So I turn to resist on a bad (but workable right now) solution: JSP files can call my java library easily and the GUI part is in VB.NET using WebClient class to talk to the JSP files (I don't want to write my own server in Java code, again it is time consuming). So it is quite OK at the moment, what is in lack is that I can't use SOAP to access my java library. I want to close the question here.
Thank you a lot for the VJ# tip. Actually I don't think of it before :-)
Please make a comment and I will accept yours as the solution with full points and Grade A.
Best regards,
Nhuan
using JSP on server . for that you run the program in web server.
Best of luck
Regards
Prakash
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial
I my case, I use UDDI4J which is a java open source library to access a UDDI registry. After a while I want to turn to use VB.NET and the problem above raises. In fact the best way that I have found is to turn to use Microsoft UDDI SDK which has a namespace Microsoft.Uddi and then we can use the classes there to do the same thing as UDDI4J.
I think this is the best way for me to do my task.
I close the question with a solution found by myself. | https://www.experts-exchange.com/questions/21189865/Using-java-code-in-VB-NET.html | CC-MAIN-2018-30 | refinedweb | 412 | 82.75 |
Web – not unique for each client session.
return counter++;
}
}
Enabling session support for your web service would require little effort on the server and client.’t.
Session can be maintained just by setting the property “MessageContext.SESSION_MAINTAIN_PROPERTY”.
How can we control the session time out?
Posted by: technohunter on June 09, 2006 at 01:43 AM
I suppose that does only work if my webservice endpoint lives in the web container and not in the EJB container?! I.e., am i right in assuming that the following does *not* work?!:
@WebService
@EJB
public class Hello {
@Resource
private WebServiceContext wsContext;
public int getCounter() {
// yada yada
}
}
This is important because usually i want to implement my webservice endpoints as EJBs.
As an aside, i hope i'm not the only one who considers implementing a stateful service *like this* a depressing set-back compared to the level of abstraction and comfort stateful session beans have been providing for years.
cheers,
gerald
Posted by: geraldloeffler on June 09, 2006 at 01:08 PM
You can specify the session timeout in web.xml
<web-app>
<session-config>
<session-timeout>60</session-timeout>
</session-config>
............
</web-app>
Or
You could set it on HttpSession object like HttpSession.setMaxInactiveInterval(int interval);
Posted by: ramapulavarthi on June 22, 2006 at 06:24 PM
I guess this webservice is only consumable by a Java Application. Is there a way a .NET client cal also consume this webservice, with state?
please provide with me some coding.
Posted by: lordhiru on August 21, 2006 at 07:17 PM
Interesting, but it doesn't seem to work. Using tcpmon I see different JSESSIONID's for each request and the objects I store in the session are not persistent. I am using Sun Java System Application Server Platform Edition 9.0_01 (build b14) and the jars that come with it.
Posted by: travbow on February 01, 2007 at 09:30 AM
What would cause the MessageContext to not be populated with a SERVLET_REQUEST object? I am using the JDK 6 http server to deploy a web service. Is this the reason there is no SERVLET_REQUEST object? If so, is there another way I can obtain the calling client's IP address?
Posted by: jesterfred on June 22, 2007 at 08:52 AM
Hello, how can we know the sessions which are active on server-side ?
Posted by: matof on May 02, 2008 at 04:07 AM
Rama
How can i load a java bean in application scope since web application start and then access it from a web service.
Posted by: ksimon on June 19, 2008 at 07:21 AM | http://weblogs.java.net/blog/ramapulavarthi/archive/2006/06/maintaining_ses.html | crawl-002 | refinedweb | 438 | 64 |
Old threads never die: Tim Newsham <newsham at lava.net> writes: >>? Did you ever get to the bottom of this? I have a similar problem with Data.Binary that I don't know how to work around yet. It boils down to reading a large list. This exhibits the problem: newtype Foo = Foo [Word8] instance Binary Foo where get = do xs <- replicateM 10000000 get return (Foo xs) Doing 'x <- decodeFile "/dev/zero" and "case x of Foo y -> take 10 y" blows the heap. I thought Data.Binary was lazy? My actual program looks something like this: instance Binary MyData where get = do header <- get data <- replicateM (data_length header) $ do ....stuff to read a data item return (MyData header data) This blows the stack as soon as I try to access anything, even if it's just the contents of the header. Why? My understanding of how Data.Binary works must be sorely lacking. Could some kind soul please disperse some enlightenment in my direction? -k -- If I haven't seen further, it is by standing in the footprints of giants | http://www.haskell.org/pipermail/haskell-cafe/2008-November/050252.html | CC-MAIN-2013-48 | refinedweb | 181 | 75.81 |
Chatting about IronRuby, Web Development, Azure and more!
This is a coding technique I've learned today from looking at Rhino mocks Playback() method code. It's a nice and smooth technique and I can think of several uses for it.
Consider the following code:
public static void DoSomething()
{
TimerClass timer = new TimerClass();
using (timer.CountTime())
{
string s = "hello world";
for (int i = 0; i < 100000; i++)
{
int index = s.IndexOf("world");
}
}
}
This code will result in writing to the console: "Total time: 110ms".How is it done?
public class TimerClass
public IDisposable CountTime()
return new MyTimer();
private class MyTimer : IDisposable
int start;
public MyTimer()
start = Environment.TickCount;
public void Dispose()
Console.WriteLine("Total ms: {0} ms", (Environment.TickCount - start));
The "secret" is returning an inner disposable class that will take care of what you need...
Hope you've been enlightened,Shay.
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Good technique
Nice :)
You can also use this technique for capturing and restoring state before and after doing some work, as I've shown here: blog.functionalfun.net/.../misusing-idisposable-beyond-resource.html | http://blogs.microsoft.co.il/blogs/shayf/archive/2008/06/29/using-quot-using-quot-with-a-method.aspx | crawl-003 | refinedweb | 186 | 59.6 |
29.12.
inspect — Inspect live objects¶
Code source :.
29.12.1.:
Modifié dans la version 3.5: Add
__qualname__ and
gi_yieldfrom attributes to generators.
The
__name__ attribute of generators is now set from the function
name, instead of the code name, and it can now be modified.
inspect.
getmembers(object[, predicate])¶
Return all the members of an object in a list of (name, value) pairs sorted by name. If the optional predicate argument is supplied, only members for which the predicate returns a true value are included.
Note).
Nouveau dans la version 3.5.
inspect.
iscoroutine(object)¶
Return true if the object is a coroutine created by an
async deffunction.
Nouveau dans la version 3.5.
inspect.
isawaitable(object)¶
Return true if())
Nouveau dans la version 3.5.
inspect.
isasyncgenfunction(object)¶
Return true if the object is an asynchronous generator function, for example:
>>> async def agen(): ... yield 1 ... >>> inspect.isasyncgenfunction(agen) True
Nouveau dans la version 3.6.
inspect.
isasyncgen(object)¶
Return true if the object is an asynchronous generator iterator created by an asynchronous generator function.
Nouveau dans la().
inspect.
isdatadescriptor(object)¶
Return true if if the object is a getset descriptor.
Particularité de l’implémentation CPython : getsets are attributes defined in extension modules via
PyGetSetDefstructures. For Python implementations without such types, this method will always return
False.
inspect.
ismemberdescriptor(object)¶
Return true if the object is a member descriptor.
Particularité de l’implémentation CPython : Member descriptors are attributes defined in extension modules via
PyMemberDefstructures. For Python implementations without such types, this method will always return
False.
29.12.2..
Modifié dans la version 3.5:.
29.12.3. Introspecting callables with the Signature object¶
Nouveau dans la version 3.3..
Nouveau dans la version 3.5:
follow_wrappedparameter. Pass
Falseto get a signature of
callablespecifically (
callable.__wrapped__will not be used to unwrap decorated callables.)
Note.
Modifié dans la version 3.5: Signature objects are picklable and hash)
Nouveau dans la version 3.5.
- class
inspect.
Parameter(name, kind, *, default=Parameter.empty, annotation=Parameter.empty)¶
Parameter objects are immutable. Instead of modifying a Parameter object, you can use
Parameter.replace()to create a modified copy.
Modifié dans la version 3.5: Parameter objects are picklable and hashable.
name¶
The name of the parameter as a string. The name must be a valid Python identifier.
Particularité de l’implémentation CPython : CPython generates implicit parameter names of the form
.0on the code objects used to implement comprehensions and generator expressions.
Modifié dans la version 3.6:'"
Modifié dans la version 3.4:.
Note', ())])
Nouveau dans la version 3.5.
The
argsand
kwargsproperties can be used to invoke functions:
def test(a, *, b): ... sig = signature(test) ba = sig.bind(10, b=20) test(*ba.args, **ba.kwargs)
29.12.4. Classes and functions
*and
**parameters or
None. defaults is a tuple of default argument values or
Noneif there are no default arguments; if this tuple has n elements, they correspond to the last n elements listed in args.
Obsolète depuis la version 3.0:
..
Modifié dans la version 3.4: This function is now based on
signature(), but still ignores
__wrapped__attributes and includes the already bound first parameter in the signature output for bound methods.
Modifié dans la version 3.6: This method was previously documented as deprecated in favour of
signature()in Python 3.5, but that decision has been reversed in order to restore a clearly supported standard interface for single-source Python 2/3 code migrating away from the legacy
getargspec()API.
inspect.
getargvalues(frame)¶.
Note,
*argument name,
**argument name, default values, return annotation and individual annotations into strings, respectively.
Par exemple :
>>> from inspect import formatargspec, getfullargspec >>> def f(a: int, b: float): ... pass ... >>> formatargspec(*getfullargspec(f)) '(a: int, b: float)'
Obsolète depuis la version 3.5:.
Note'
Nouveau dans la version 3.2.
Obsolète depuis la version 3.5:.
Nouveau dans la version 3.3..
Nouveau dans la version 3.4.
29.12.5..
Modifié dans la version 3.5: Return a named tuple instead of a tuple..
Modifié dans la version 3.5:.
Modifié dans la version 3.5: A list of named tuples
FrameInfo(frame, filename, lineno, function, code_context, index)is returned.
inspect.
currentframe()¶
Return the frame object for the caller’s stack frame.
Particularité de l’implémentation CPython :.
Modifié dans la version 3.5:.
Modifié dans la version 3.5: A list of named tuples
FrameInfo(frame, filename, lineno, function, code_context, index)is returned.
29.12.6..
Nouveau dans la
29.12.7..
Nouveau dans la version 3.2..
Nouveau dans la version 3.5..
Particularité de l’implémentation CPython : This function relies on the generator exposing a Python stack frame for introspection, which isn’t guaranteed to be the case in all implementations of Python. In such cases, this function will always return an empty dictionary.
Nouveau dans la version 3.3.
inspect.
getcoroutinelocals(coroutine)¶
This function is analogous to
getgeneratorlocals(), but works for coroutine objects created by
async deffunctions.
Nouveau dans la version 3.5.
29.12.8..
Nouveau dans la.
Nouveau dans la version 3.5.
29.12.9. Interface en ligne de command. | https://docs.python.org/fr/3/library/inspect.html | CC-MAIN-2017-47 | refinedweb | 851 | 53.37 |
On Tue, Aug 29, 2006 at 11:09:35AM -0400, Sev Binello wrote: > From a strictly practical and immediate stand point, > what is the best way to handle this situation if it should occur again in > the near future ? Without any kernel patches, the best thing to do is, (a) don't restore the path to the device, (b) unmount the filesystem, (c) Compile the enclosed flushb program (also found in the e2fsprogs sources, but not compiled by most or all distributions), and run it: "flushb /dev/hdXX", and only after completing all of these steps, you can restore the path and do fsck of the filesystem if you are feeling sufficiently paranoid, and then remount it. I wish I could offer you something better, but that's what we have at the moment. - Ted /* * flushb.c --- This routine flushes the disk buffers for a disk * * Copyright 1997, 2000, by Theodore Ts'o. * * WARNING: use of flushb on some older 2.2 kernels on a heavily loaded * system will corrupt filesystems. This program is not really useful * beyond for benchmarking scripts. * * %Begin-Header% * This file may be redistributed under the terms of the GNU Public * License. * %End-Header% */ #include <stdio.h> #include <string.h> #include <unistd.h> #include <stdlib.h> #include <fcntl.h> #include <sys/ioctl.h> #include <sys/mount.h> /* For Linux, define BLKFLSBUF if necessary */ #if (!defined(BLKFLSBUF) && defined(__linux__)) #define BLKFLSBUF _IO(0x12,97) /* flush buffer cache */ #endif const char *progname; static void usage(void) { fprintf(stderr, "Usage: %s disk\n", progname); exit(1); } int main(int argc, char **argv) { int fd; progname = argv[0]; if (argc != 2) usage(); fd = open(argv[1], O_RDONLY, 0); if (fd < 0) { perror("open"); exit(1); } /* * Note: to reread the partition table, use the ioctl * BLKRRPART instead of BLKFSLBUF. */ if (ioctl(fd, BLKFLSBUF, 0) < 0) { perror("ioctl BLKFLSBUF"); exit(1); } return 0; } | https://www.redhat.com/archives/ext3-users/2006-August/msg00041.html | CC-MAIN-2015-14 | refinedweb | 311 | 65.22 |
UART bootloader + I2C LCD problemlukasakerlund_1537441 Aug 18, 2017 1:28 PM
I'm having a weird problem. I'm driving a LCD05 with I2C, and I use a button to return to the bootloader. I'm using a CY8CKIT-049-42xx.
The code below works fine, however if I remove the 1s delay in the for loop, or set it to something lower, say 100ms, the bootloader does not work properly when activated. The led flashes and I get the "Bootloader ready" on the display, but the bootloader host says:
- Communication port reported error 'Unable to read data from the target device'.
And times out after 5s, then I have to physically reconnect the usb-connector to get the following error:
- The bootloader reported error 'Unknown error 0x98 occurred in the bootload process'.
Pressing the program button again after that programs the processor. Any ideas?
I'm running PSoC creator 3.3 in win7 in WMware on a mac book pro.
The display:
#include <project.h>
#define LCD_MODULE_ADDRESS (0xC6u)
uint8 blFlag = 0;
CY_ISR_PROTO(enterBootloader);
CY_ISR(enterBootloader)
{
blFlag = 1;
}
int main()
{
CyGlobalIntEnable;
BL_ISR_StartEx(enterBootloader);
I2C_Start();
I2C_LCD_Start();
I2C_LCD_HandleOneByteCommand(0x1f, 0xff); //Set brightness to 0xff
CyDelay(1000u);
uint8 i = 0;
for(;;)
{
I2C_LCD_ClearDisplay();
I2C_LCD_PrintInt8(i);
i++;
CyDelay(1000u);
if( blFlag == 1)
{
I2C_LCD_ClearDisplay();
I2C_LCD_PrintString("Bootloader ready");
Bootloadable_Load();
}
}
}
1. Re: UART bootloader + I2C LCD problemuser_242978793 Dec 10, 2015 5:34 PM (in response to lukasakerlund_1537441)
Sir You need a bootloader component and not Bootloadable component you also need a UART Component to do the bootloading. You also have not started your capsense component or set the CapSense_InitializeAllBaselines(); command I don't know how it worked at all. I am sending you a program that has the bootloader in it so you can add your program into it.
- SCB_Bootloader._3.2_SP1.zip 450.8 K
2. Re: UART bootloader + I2C LCD problemlukasakerlund_1537441 Dec 11, 2015 1:59 AM (in response to lukasakerlund_1537441)
Thanks for your reply.
I have the Bootloadable component linked to bootloader .hex and .elf files just like your example. They are in another folder so they were not included in the .zip, sorry.
What UART component should I use? I don't see one in your example?
I removed the CapSense code to isolate the problem, just forgot to remove the component itself.
3. Re: UART bootloader + I2C LCD problemuser_242978793 Dec 11, 2015 4:57 AM (in response to lukasakerlund_1537441)There are two sections to the program I sent you. At the top is the led section and if you scroll down in the left panel of Psoc Creator you will see the code for the bootloader. If you look at the topdesign you will see the UART and the bootloader components.
4. Re: UART bootloader + I2C LCD problemlukasakerlund_1537441 Dec 12, 2015 6:41 PM (in response to lukasakerlund_1537441)
Thank you,
I copied my code into your project, same problem. I added the bootload project (with the bootload and UART component) to my workspace and linked the bootloadable components to the hex and elf files again. Still the same problem, what am I missing here?
5. Re: UART bootloader + I2C LCD problemuser_242978793 Dec 13, 2015 6:49 AM (in response to lukasakerlund_1537441)
Are you trying to program a board or are you trying to use the bootloader to program the board with a new program at a later time after you have installed the original program? Such as an ECO to the program. Also did you look at the bootloader section of the program I sent you it has some delays that are set by the bootloader to insure that the programmer works correctly.
6. Re: UART bootloader + I2C LCD problemlukasakerlund_1537441 Dec 15, 2015 11:21 AM (in response to lukasakerlund_1537441)
I'm trying to program a new program to the board using the bootloader and UART. What is an ECO?
Your bootloader folder did not include the project files, only .hex and .elf
7. Re: UART bootloader + I2C LCD problemuser_242978793 Dec 15, 2015 12:02 PM (in response to lukasakerlund_1537441)A ECO is an engineering change . If you had a device in the field and you want to upgrade the program. I am still not clear as to what you want to do. The program with the LED blink does every thing you need to program the board.
8. Re: UART bootloader + I2C LCD problemuser_1377889 Dec 15, 2015 12:12 PM (in response to lukasakerlund_1537441)
Sorry, but isn't ECO the acronym for External Crystal Oscillator?
Bob
9. Re: UART bootloader + I2C LCD problemuser_242978793 Dec 15, 2015 1:00 PM (in response to lukasakerlund_1537441)
Bob not in the Military electronic field it stands for Engineering Change Order. I guess it could also stand for external crystal oscillator . Lukasakerlund here are some PDF's that explain Bootloaders in PSOC 4.
10. Re: UART bootloader + I2C LCD problemoliverbroad Dec 18, 2015 7:46 AM (in response to lukasakerlund_1537441)
Sorry if this is off topic but as you're discussing a CY8CKIT-049 can you tell me where the example code is. I have PSoC creator and it has a long list of examples but I can't find the specific ones listed in the kit documentation (Bootloadable LED blink). It looked as if they were in a separate download but I can't find it.
FWIW I don't have a programmer for the PSoC, I intend to use the bootloader that should be already programmed along with the demo.
11. Re: UART bootloader + I2C LCD problemuser_1377889 Dec 18, 2015 9:09 AM (in response to lukasakerlund_1537441)
Welcome in the forum, Oliver!
When you did install the kit files there is an entry in Creator's start page for the Kits you have got. When you missed that step you may catch up with files from here. As an interesting alternative, there is a CY8CKIT-043 which not only has got a larger chip with more capabilities, but comes with a programmer and debugging capabilities that make experiencing PSoCs a bit (or byte) easier.
Bob
12. Re: UART bootloader + I2C LCD problemoliverbroad Dec 18, 2015 10:33 AM (in response to lukasakerlund_1537441)
Thanks, it looks as if the files were already installed because on running the installer it offers to remove them.
In hindsight I must have missed the part of the release notes telling me where to find them.
The CY8CKIT-043 looks interesting, at first I couldn't see the difference but I see that the USB part provides a SWD interface instead of just serial so it isn't dependent on a boot-loader in the target chip. I've already used a MBED module, that uses a similar concept except there's no provision for breaking off the programmer section on the MBED. The 043 model doesn't appear to have made it into the Farnell catalog yet.
I'm quite interested in the USB-Serial part of the 049 board.
13. Re: UART bootloader + I2C LCD problemuser_1377889 Dec 18, 2015 11:17 AM (in response to lukasakerlund_1537441)
For the -49: Look at the schematics of your kit (Programs(x86)\Cypress\ your kit name\hardware\...
There is an USB-UART bridge: connect an UART component to the pins P4_0 and P4_1, look for the com-port the USB device emulates (windows->system->device manager) and connect that port to PuTTY or whatever terminal program you use.
Bob
14. Re: UART bootloader + I2C LCD problemoliverbroad Dec 26, 2015 6:54 AM (in response to user_1377889)
Thanks for the tip. Sending to the PC worked. I'm currently trying to rewrite a crude Modbus device I originally wrote for a MBED to run on it. The UART API is different enough to give me some problems, better in some ways but different. I appreciate having a non-blocking read function, it is an annoying omission in the API I used previously.
Unfortunately the program as written needs a software delay of 500us. I probably need to rewrite it to use hardware timing eventually but for now I could use a delay, but there doesn't seem to be a "delay" or "wait" defined. | https://community.cypress.com/thread/15143 | CC-MAIN-2017-51 | refinedweb | 1,350 | 59.94 |
I did it! I figured out a way to reliably interop between modern unidirectional data flow React components and old Backbone views. It ain’t pretty, but it works.
There are three components in that page:
- A
CurrentCountcomponent that shows current counter state; it’s pure React
- A
ButtonWrappercomponent that shows counter state and does
+1on click; it’s a wrapped Backbone view
- A
ReactButtoncomponent that does
+10on click; it’s also pure React
I know this is a trivial example, but it shows a powerful concept. That
+1 button is still the same Backbone view from Tuesday. It stores state in a local Backbone Model, it uses a Handlebars template, and it remains the same idiomatic Backbone View it’s always been. Yet it interops with the React app, blissfully unaware that something freaky's going on.
The Backbone View
All three components share the same MobX data store, which has a single observable value called
N. It looks like this:
class CounterStore { @observable N = 0 }
In MobX, stores are classes with observable properties. They often have methods and computed values as well, but this example is too simple.
@observable is a decorator that compiles into something like
makeObservable(this.N, 0), which in turn uses ES6 to add magical getters and setters that fire up the MobX engine whenever you access – dereference – the observable value. Doing it yourself would look like this:
class CounterStore { N = 0 set N (val) { this.N = val; // notify all observers that N has changed } get N () { // add call site to list of observers return N; } }
MobX saves you from writing that logic yourself, and it adds a bunch of smartness to make it fast and efficient. I don’t really know how the engine works, but after reading the docs and some of Michel Weststrate’s Medium posts, I’m convinced it’s amaze.
Benchmarked Immutables against Observables in TodoMVC. More details at @ReactAmsterdam conf! #reactjs#redux#mobxjspic.twitter.com/tL536DWR6f
— Michel Weststrate (@mweststrate) April 8, 2016
So that’s the store – no boilerplate involved. The two pure React components don’t involve much boilerplate either.
@inject('counterStore') class ReactButton extends Component { buttonClicked() { this.props.counterStore.N += 10; } render() { return ( <div> <p>React Button:</p> <button onClick={action('inc-counter', this.buttonClicked.bind(this))}>Jump click count +10</button> </div> ); } }; @inject('counterStore') @observer class CurrentCount extends Component { render() { const { N } = this.props.counterStore; return (<p>Current count in counterStore: {N}</p>) } }
The
@inject decorator takes props from a React context and adds them to a component. I don’t know how MobX-specific this is, but it reduces our boilerplate. Instead of giving each component a
store={this.props.store} type of prop, we can wrap the whole thing in a
<Provider> and give everyone access.
At the end of the day, you always realize that all your components need access to your application state. Global singletons for things everyone needs make life easier. Trust me.
The
@observer decorator comes from MobX’s React bindings. It automagically makes the
render() method listen to store changes, but only those changes that it uses.
This is key. It’s what makes MobX fast. It’s also what leads to confusion when you’re doing things that are not idiomatic React, like inserting Backbone views into React components.
This is weird. If I add console.log to the render() method, MobX works as expected. Take it out, and observer isn't updating.@mweststrate?
— Swizec (@Swizec) September 21, 2016
Thanks to Michel Weststrate for helping me out. His tip about using
autorun saved the day
And with that out of the way, I had a Backbone view wrapped in a React component, including full data interop. Change local Backbone state, and the global data store finds out, changes the data store, and the Backbone view updates.
It looks like this:
@inject('counterStore') @observer class ButtonWrapper extends Component { constructor(props) { super(props); this._init(); autorun(this._render.bind(this)); } _init() { this.button = new BackboneButton({ N: this.props.counterStore.N }); this.button.model.on('change:N', action('inc-counter', (_, N) => { this.props.counterStore.N = N; })); } componentDidUpdate() { this._render(); } componentDidMount() { this._render(); } _render() { this._cleanup(); this._init(); this.button.setElement(this.refs.anchor).render(); } componentWillUnmount() { this._cleanup(); } _cleanup() { this.button.undelegateEvents(); } render() { return ( <div> <p>Backbone Button:</p> <div className="button-anchor" ref="anchor" /> </div> ); } }
Like I said, it ain’t pretty
It works like this:
- React renders an anchor
div
- On component mount or update, it runs Backbone view rendering in
_render
- On component unmount, it cleans up Backbone’s DOM event listeners
- Inside
_renderit:
- Cleans up Backbone DOM event listeners
- Instantiates a new Backbone view with
_init
- Tells the view to render inside that anchor
<div>
- when initializing the view in
_initit:
- Creates a new
BackboneButtoninstance and gives it the value of
Nfrom our store
- Listens for changes to
Non the view’s internal state and communicates them upstream with a MobX action
This approach is a leaky abstraction. The
_init method has to know intimate details about the Backbone view you’re wrapping. There’s no way to get around that because MobX needs those getters and setters to observe state changes.
As soon as you pass a value, MobX loses track. I tried passing the whole store into a Backbone view and using it directly as a model, but that didn’t work. Backbone Model’s
set() and
get() methods circumvent native getters and setters, which means MobX can’t track changes or uses.
Another issue is that because our Backbone views aren’t pure, the UI could sometimes look wrong to the user. It won’t be stale, but it won’t show all side-effects from user actions either.
But we can deal with that later. The important part is that we have a way forward! A way to go from Backbone to React without resorting to a full complete rewrite of everything from scratch. \o/
Related
You should follow me on twitter, here.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/backbone-react-handling-state-with-mobx | CC-MAIN-2018-05 | refinedweb | 1,013 | 56.45 |
Metasploit Generic NTLM Relay Module
July 22, 2012 2 Comments
I recently put in a pull request for a Metasploit module I wrote that does NTLM relaying (I recommend this post for some background). I also have a time slot for a tool arsenal thing at blackhat. Here’s the description:.
With this module, I made a choice for flexibility over some other things. Although basic usage is fairly straightforward, some of the more advanced functionality might not be. This post will give some usage examples. I have desktop captures of each example, full screen and HD those suckers if you want to see what’s going on.
Stealing HTTP Information
Corporate websites that use NTLM auth are extremely common, whether they’re sharepoint, mediawiki, ERP systems, or (probably most commonly) homegrown applications. To test my module out, I wrote a custom application that uses Windows Auth. I did it this way so I could easily reconfigure without too much complexity (although I have tested similar attacks against dozens of real sharepoint sites, mediawiki, and custom webpages). This simple example is an app that displays some information about the user visiting. Think of it as an internal paystub app that HR put up.
IIS is set to authenticate with Windows auth:
All domain accounts are authorized to see their own information, which it grabs out of a DB:
<asp:SqlDataSource ... Username.Text = Page.User.Identity.Name; String username = Page.User.Identity.Name; SqlDataSource1.SelectCommand = "SELECT * FROM Employee WHERE username='" + Page.User.Identity.Name + "'"; GridView1.DataBind();
So let’s attack this, shall we? I made the following resource file:
#This simple resource demonstrates how to collect salary info from an internal HTTP site authed with NTLM #to extract run the ballance_extractor resource unset all use auxiliary/server/http_ntlmrelay set RHOST mwebserver set RURIPATH /ntlm/EmployeeInfo.aspx set SYNCID empinfo set URIPATH /info run
Now our HTTP server sits around and waits for someone to connect. Once someone does and sends their NTLM creds, the credentials are passed along to the web server, and the data is saved to the notes database. We can look at this HTML, or with the data at our disposal, we can extract the relevant info with a resource script.
#extract the mealcard information from the database <ruby> #data is the saved response def extract_empinfo(data) fdata = [] bleck = data.body empdata = bleck.split('<td>')[2..-1] empdata.each do |item| fdata.push(item.split('</td>')[0]) end return (fdata) end framework.db.notes.each do |note| if (note.ntype == 'ntlm_relay') begin #SYNCID was set to "empinfo" on the request that retrieved the relavent values if note.data[:SYNCID] == "empinfo" empinfo = extract_empinfo(note.data[:Response]) print_status("#{note.data[:user]}: Salary #{empinfo[0]} | SSN: #{empinfo[4]}") end rescue next end end end </ruby>
Here’s that in action.
HTTP CSRF
Whether they be HTTP requests or something else, a lot of the really interesting attacks simply require more than one request. Look at any time we want to POST and change data. If the website has protection against CSRF, this should take at least two requests – one to grab the CSRF token and another to do the POST that changes state.
In my dummy app, a member of the “cxo” active directory security group has permission to edit anyone’s salary. This extreme functionality for a privileged few is super common. Think ops administration consoles, help desk type tools, HR sites, etc. The goal of this attack is to change “mopey’s” salary to -$100 after a member of the cxo group visits our site.
The first step for me is just to run the interface as normal through an HTTP proxy. In this case, it took three requests for me to edit the salary, and each request requires data to be parsed out – namely the VIEWSTATE and EVENTVALIDATION POST values. HTTP_ntlmrelay was designed to support this sort of scenario. We’ll be using the SYNCFILE option to extract the relevant information and update the requests dynamically.
Here’s the resource file
#This demonstrates how to do a CSRF against an "HR" app using NTLM for auth #It grabs the secret from a GET request, then a POST request and uses that secret in a subsequent POST request #This is a semi-advanced use of this auxiliary module, demonstrating how it can be customized #to use, from msf run this resource file, and force a victim to visit a page that forces the 3 requests #to modify the content put in the wiki, edit extract_2.rb unset all use auxiliary/server/http_ntlmrelay set RHOST mwebserver set RTYPE HTTP_GET set RURIPATH /ntlm/Admin.aspx set URIPATH /grabtoken1 set SYNCID csrf run set SYNCID csrf2 set URIPATH /grabtoken2 set RTYPE HTTP_POST set SYNCFILE /root/ntlm_relay/bh_demos/http_csrf/extract_1.rb set HTTP_HEADERFILE /root/ntlm_relay/bh_demos/http_csrf/headerfile run unset SYNCID set URIPATH /csrf set SYNCFILE /root/ntlm_relay/bh_demos/http_csrf/extract_2.rb set HTTP_HEADERFILE /root/ntlm_relay/bh_demos/http_csrf/headerfile run
extract_1.rb extracts the secret information from the first GET request, which the second request uses. Note the requests go one at a time – you have a guarantee one request will completely finish before the next one begins.
# cat extract_1.rb #grab the request with the ID specified " print_status("Found GET request containing CSRF stuff. Extracting...") viewstate = extract_viewstate(note.data[:Response]) eventvalidation = extract_eventvalidation(note.data[:Response]) datastore['FINALPUTDATA'] = ( "__EVENTTARGET=ctl00%24MainContent%24GridView1&__EVENTARGUMENT=Edit%243&__VIEWSTATE=" + Rex::Text.uri_encode(viewstate) + "&__VIEWSTATEENCRYPTED=&__EVENTVALIDATION=" + Rex::Text.uri_encode(eventvalidation) ) puts(datastore['FINALPUTDATA']) end end end
extract2.rb is nearly identical, except the POST data needs our CSRF values and the requests we’re parsing are different (we have a separate syncid we’re looking for).
# cat extract_2.rb #grab the request with the ID specified new_salary = "-100" victim = "EVIL%5Cmopey" 2" print_status("Found Second request containing CSRF stuff. Extracting...") viewstate = extract_viewstate(note.data[:Response]) eventvalidation = extract_eventvalidation(note.data[:Response]) datastore['FINALPUTDATA'] = ( "__EVENTTARGET=ctl00%24MainContent%24GridView1%24ctl05%24ctl00&__EVENTARGUMENT=&__VIEWSTATE=" + Rex::Text.uri_encode(viewstate) + "&__VIEWSTATEENCRYPTED=&__EVENTVALIDATION=" + Rex::Text.uri_encode(eventvalidation) + "&ctl00%24MainContent%24GridView1%24ctl05%24ctl02=" + victim + '&ctl00%24MainContent%24GridView1%24ctl05%24ctl03=' + new_salary ) puts(datastore['FINALPUTDATA']) end end end
The HTTP_HEADERS options is easier to explain. In the file, it’s just a list of HTTP_HEADERS…
# cat headerfile Content-Type: application/x-www-form-urlencoded
Lastly, I put all three requests in a nice rick roll package. This is the site the victim will visit with their browser in the final attack.
<iframe src="" style='position:absolute; top:0;left:0;width:1px;height:1px;'></iframe> <iframe src="" style='position:absolute; top:0;left:0;width:1px;height:1px;'></iframe> <iframe src="" style='position:absolute; top:0;left:0;width:1px;height:1px;'></iframe> <h1>Never gonna Give you Up!!!</h1> <iframe width="420" height="315" src="" frameborder="0" allowfullscreen></iframe>
Here’s the whole thing in action. In this video I fail the rickroll due to my lack of IE flash, but the attack works. Despite good CSRF protection, mopey’s salary is successfully modified by visiting a malicious website.
Yes, it’s kind of complicated to parse HTML and extract values, but that’s just the nature of the problem. I’ve done this several times. In this archive there’s a mediawiki POST that edits an NTLM authed page. It’s similar to above, but requires a multipart form and a authentication cookie (which you can use the headerfile option for). HTTP_ntlmrelay is designed to do one request at a time, but multiple requests can easily be stacked on the browser side (like I did here with the hidden iframes).
SMB File Operations
There’s (usually) no reason you can’t use a browser’s NTLM handshake to authenticate to an SMB share, and from there, all the regular file operations you’d expect should be possible. Because there’s no custom HTML to parse or anything, this is actually a lot simpler to demonstrate. The setup is the same as above, a fully patched browser client from one machine being relayed to a separate fully patched default win 2008R2 domain machine (mwebserver).
#This simple resource simply enums shares, reads, writes, ls, and pwn unset all use auxiliary/server/http_ntlmrelay set RHOST mwebserver set RPORT 445 #smb_enum set RTYPE SMB_ENUM set URIPATH smb_enum run #SMB_PUT set RTYPE SMB_PUT set URIPATH smb_put set RURIPATH c$\\secret.txt set PUTDATA "hi ima secret" set VERBOSE true run #smb_ls unset PUTDATA set RTYPE SMB_LS set URIPATH smb_ls set RURIPATH c$\\ run #smb_get set RTYPE SMB_GET set URIPATH smb_get set RURIPATH c$\\secret.txt run #smb_rm set RTYPE SMB_RM set URIPATH smb_rm set RURIPATH c$\\secret.txt run
Another cool thing. With current attacks like the smb_relay module you pretty much need to be an admin, and that’s still true here if you want to start services. But any Joe that can authenticate might be able to write/read to certain places. Think about how corporate networks might do deployment with development boxes, distribute shared executables, transfer shares, etc and you might get the idea. Below I replace somebody’s winscp on their Desktop with something that has a winscp icon and just does a system call to calculator (this is from a different lab but the idea applies anywhere)
SMB Pwn
How does psexec work? Oh yeah, it uses SMB :) You astute folks may have known this all along, but showing this off to people… they’re just amazed at how pwned they can get by visiting a website.
This can be super simple. Here’s an example that tries to execute calc.exe. It’s flaky on 2008r2 because windows is trying to start calc as a service… but it still pretty much works if you refresh a few times.
#This simple resource simply executes calculator unset all use auxiliary/server/http_ntlmrelay set RHOST mwebserver set RTYPE SMB_PWN set RPORT 445 set RURIPATH %SystemRoot%\\system32\\calc.exe set URIPATH smb_pwn
MUCH more reliable is to create a service that has at least onstart, onstop methods. This next video has three requests, one to upload a malicious binary with smb_put, a second call to smb_pwn, and a third to remove the binary. This is similar to what the current metasploit smb_relay and psexec modules do automatically. Here I upload a meterpreter service binary that connects back to my stage 2, and then executes a packed wce to dump the user’s password in cleartext.
This is my favorite demo, because it shows the user’s cleartext passwords from just visiting a website or viewing an email.
Conclusions
So like I mentioned before, I have a tool arsenal thing next week at Blackhat, so that would be cool if people stopped by to chat. Also, at the time of this writing this hasn’t made it into Metasploit proper yet, but I hope it makes it eventually! Egypt gave me a bunch of good fixes to do. I’ve made most the changes, I just haven’t committed them yet (but I probably will tomorrow, barring earthquakes and whatnot).
Mitigations are a subject I don’t cover here. I think this deserves a post of its own, since misconceptions about relaying are so prevalent. Until then, this might be a good starting point:.
I was wondering if it might be possible to use the SMB credentials to authenticate to an outbound proxy. i.e. in targeting a client whose only outbound access is via an NTLM-authenticating proxy such as ISA Server, etc?
It should be possible as long as the NTLM-authing proxy isn’t doing something similar to extended protection for authentication. You can probably test this with my module out of the box. Just set the RURI option to the fully qualified version (e.g. /default.asp to since that’s what non-invisible proxies expect) | http://webstersprodigy.net/2012/07/22/metasploit-generic-ntlm-relay-module/ | CC-MAIN-2013-20 | refinedweb | 1,971 | 54.12 |
Java stringtokenizer
Java stringtokenizer What is difference between string and stringtokenizer
Can iadd StringTokenizer into JLabel componenet
Can iadd StringTokenizer into JLabel componenet l24="india.cricket.java.dotnet.oracle"
String mn=l24;
StringTokenizer mn1=new StringTokenizer(l24...(3,280,1580,600);
nowon.setBackground(Color.cyan);
sir i want to add StringTokenizer
Problems in Stringtokenizer
*;
import java.io.*;
import java.util.Scanner;
class Node
{
public boolean marked... advice;
}
public class Input
{
public int n,nn,vn... node[];
StringTokenizer str,str1;
Input() throws Exception
{
FileReader df=new
Query on StringTokenizer
Query on StringTokenizer Sir,I kept my some data in backend using ms-access.
for Example my data is like this
vijayawada is good city.india is my... code will be helpful for you.
import java.sql.*;
import javax.swing.*;
class
Java StringTokenizer
Java StringTokenizer
In this tutorial we will discuss about String Tokenizer in java. String
Tokenizer allows an application to break into token. Java... string into token, first create a object of
the StringTokenizer
java.util.StringTokenizer
,
which have been added to Java 1.4.
A StringTokenizer constructor...
Java: java.util.StringTokenizer
Purpose
The java.util.StringTokenizer class is used
to break strings into tokens (words, numbers, operators
how to use stringtokenizer on table? and display in table format.
how to use stringtokenizer on table? and display in table format. table is retrieved from mysql database and each row contains data like(java,c,c++,.net
Breaking a string into words without using StringTokenizer
Breaking a string into words without using StringTokenizer how can we Break a string into words without using StringTokenizer ??
The given code convert the string into words.
import java.util.*;
class StringExample :
JLabel class - Java Beginners
JLabel class what are the superclasses of the JLabel class?
Thanks
taylordk@nationwide.com Hi Friend,
JComponent class is the superclass of all the swing components.The JComponent Class provides the framework
Wrapper Class - Java Beginners
Wrapper Class What is Wrapper class? Why are they use
Wrapper Class - Java Beginners
Wrapper Class Please explain the above question on Wrapper class with some examples
class - Java Beginners
)
Abstract class Shapes has one abstract method-area() and a concrete method display(). Class Rectangle should implement the abstract class shapes
Interface... implement the Perimeter interface
Class Rectangle extends another class...://
Thanks... :
import java.io.*;
public class ReadString {
public static void main
question on class - Java Beginners
question on class
A class can act as subclass itself?
if yes give me one example
if no give me one example
Robot class - Java Beginners
the package java.awt.event.* in your java code.The class InputEvent is in this package...!
import java.lang.*;
import java.awt.*;
public class Robot04{
public
Toolkit Class.......... - Java Beginners
Toolkit Class Hi will you please tell me something about Toolkit class. What following code will give ToolKit kit=ToolKit.getDefaultToolKit(); Dimension dim=kit.getScreenSize(); & How JFileChosser is used Please
MovieRating class - Java Beginners
the movie rating.
12.Create a MovieRating class that contains a private map... the class is instantiated. Also include a method which takes two parameters, rating... class to determine whether the child can watch the movie.
14.Add
Desgining a Class - Java Beginners
Desgining a Class Design a class named ?DBList? having the following data members and member functions :
Class Name : DBList
Data Members /Instance variables:
Start : stores a link to the first node of the linked list
Class Path - Java Beginners
Class Path Hello I was able to set the class path as You have specified in the site... But still am not able to compile the "javac" command...; there u can type the path both in user variables for administrator Hi
class Math - Java Beginners
class Math "Helo man&sir can you share or gave me a java code hope its ok?"
Complete a program that asks the user a floating-point number (either... to you.
public class MathFunction {
public static void main(String s
Search Class - Java Beginners
Search Class Hi i have created a search class with 3 txtfield... the output to another form rather than the search class i created. I hava...*;
/**
*
* @author alvtan
*/
public class SearchForm extends ShowGUI
Void class in Java - Java Beginners
Void class in Java Hi,
What is the purpose of Void class?
Thanks
Hi Friend,
It is an uninstantiable class that hold a reference to the Class object representing the primitive Java type void.
Thanks
java inner class - Java Beginners
java inner class What is the difference between the nested class and inner class? Explain with suitable example. Hi Friend,
Differences:
1)Nested class is declared inside another class with the static keyword
Java-Abstract Class - Java Beginners
Java-Abstract Class Can a abstract class have a constructor ?When would the constructor in the abstract class be called ?
Please give me with good...; Hi Friend,
Yes. But Abstract class constructors will be called when its
java beginners - Java Beginners
java beginners what is StringTokenizer?
what is the funciton... the following links:
socket class in java - Java Beginners
socket class in java i created server and client class using socket....
i have used localhost and port as 2010....
if again i connected in that port it is saying already the port is used....
now my question is how
java question - Java Beginners
java question Given the string "hey how are you today?" how many... class StringProgram{
public static void main(String[] args){
String str="hey how are you today?";
int count=0;
StringTokenizer st=new
scanner Class Error - Java Beginners
, Scanner class is not provided. Check your version.It should work with java 1.5...scanner Class Error Hello Sir ,When i run the program of Scanner Class ,there is Error
Can not Resolve Symbol-Scanner
how i can solve
abstract class and interface - Java Beginners
abstract class and interface what is the need for an abstract class? when should we use an abstract class?
when should we use interface instead of abstract class?
Hi friend,
Abstract classes
In java
abstract class and overriding - Java Beginners
abstract class and overriding what is the difference between abstract class and overriding?
Interface?
Give some example program?
Hi friend,
Abstract class :
In java programming language, abstract classes
java - Java Beginners
Hi Friend,
Try the following code:
import java.util.*;
class LongestWord
{
String str = "Ram is intelligent boy";
StringTokenizer st = new StringTokenizer(str);
String stringArray[] = str.split("\\s");
public String
java - Java Beginners
java.util.*;
public class CountWords{
public static void main(String[] args...: ");
String sen = scanner.nextLine();
StringTokenizer st = new StringTokenizer(sen);
int counter = 0;
while(st.hasMoreTokens
clarify date class - Java Beginners
clarify date class Dear,
clarify this question...
"Develop Date class in Java similar to the one available in java.util package. Use JavaDoc comments."
Give me the source code and explain how to create classes similar
subclasses an imported class - Java Beginners
subclasses an imported class Sir,
Pls help me to write:
Program to illustrate subclasses an imported class
Thanks
singleton class in struts - Java Beginners
singleton class in struts hi,
This balakrishna,
I want to retrive data from database on first hit to home page in struts frame work
java - Java Beginners
:// compiler and interpreter is there any online Java compiler...;public class TokenizeToString{ public static void main(String [] args
Movie and Cinema class - Java Beginners
Movie and Cinema class Hi, I'm new in java. Can someone help me...:
Question:
1. Create a new Movie class which has two properties, title and rating.
2. Create a new Cinema class, add method addMovieGoers which
create named class - Java Beginners
a class named Employee with name and salary. Make a class Manager inherit from...() that prints the manager?s name, department and salary. Make another class named... and methods with public accessibility)
My coding :
class Employee
poi & class path - Java Beginners
poi & class path This is the same problem regarding POI ,
Sir , i have downloaded poi-bin-3.5-beta6-20090622.zip from this link
http....
Also after downloading how to set class path. Hi Friend
creating class and methods - Java Beginners
creating class and methods Create a class Computer that stores... of the Computers.
This class contains following methods,
- Constructor method... ( ) method that creates array of 4 objects of Computer
class and that takes input
java - Java Beginners
java give the java programming syntex when the text like i love...,
Try the following code:
import java.util.*;
class Reverse{
public static...="";
StringBuffer sb=new StringBuffer();
StringTokenizer stk=new StringTokenizer
Java - Java Beginners
Java I need a program in java.
For ex,
if I Give my input as "My... the following code:
import java.util.*;
public class ReverseWords{
public static... = new StringBuffer(str);
StringTokenizer st = new StringTokenizer
using class and methods - Java Beginners
using class and methods Sir,Plz help me to write this below pgm. here is the question:
Assume a table for annual examination results for 10... the following code:
import java.util.*;
public class Student{
int rollNo
Servlet vs Class - Java Beginners
Convert Class File to .java File - Java Beginners
Convert Class File to .java File Sir, How To Convert .class file to .java File.
Please Help me Use cavaj software ,which is freeware, to convert .class to .java Use jd-gui to convert ur .class to .java
What is inner class in Java? - Java Beginners
What is inner class in Java? Hi,
Explain me inner classes in Java.
What is the use of inner classes? Give examples of inner classes in Java...://
Uses:
Non-static inner class
java - Java Beginners
java.util.StringTokenizer;
public class Encode {
public static void main(String[] args...){
StringBuffer output = new StringBuffer();
StringTokenizer tokenizer = new StringTokenizer(s," ");
while(tokenizer.hasMoreTokens()){
String next
java swing - Java Beginners
java swing hello sir,
Is it possible to show... java.awt.event.*;
import javax.swing.table.*;
public class... InputStreamReader(fis));
StringTokenizer st1 = new StringTokenizer(br.readLine
STRINGS - Java Beginners
STRINGS WRITE A JAVA PROGRAM TO REVERSE THE ORDER OF WORDS...*;
public class ReverseWords{
public static void main(String[] args){
String str...);
StringTokenizer st = new StringTokenizer(buffer.reverse().toString(), " ");
while
movie class..(dr java) - Java Beginners
movie class..(dr java) hi there.....am about to create a movie class... java.util.ArrayList;
public class movie
{
private String movie_Title;
private... the following code:
import java.io.*;
import java.util.*;
public class movie
Class
Class, Object and Methods
Class : Whatever we can see in this world all
the things... is termed as
a class. All the
objects are direct interacted with its class
string - Java Beginners
java.util.*;
class CountPalindromes{
public static void main(String[] args){
int... sentence: ");
String str=input.nextLine();
StringTokenizer stk=new StringTokenizer(str);
while(stk.hasMoreTokens()){
String words=stk.nextToken
java - Java Beginners
java.util.StringTokenizer;
public class Encode {
public static void main(String[] args...();
StringTokenizer tokenizer = new StringTokenizer(s," ");
while(tokenizer.hasMoreTokens
Uses of abstract class & interface - Java Beginners
Uses of abstract class & interface Dear sir,
I'm new to java. I knew the basic concepts of interface and the abstract class. But i dont... my doubt
Thank you Hi Friend,
Interface:
Java does
Can you instantiate the Math class? - Java Beginners
Can you instantiate the Math class? Hi,
Can you create object of Math class?
Thanks
Hi,
All the methods in Math class is static. The constructor is not public in the Math class. So, we can't create
program using StringTokenizer
program using StringTokenizer I want to know about StringTokenizer,so can you please explain it with an example
core java - Java Beginners
java.io.*;
import java.lang.*;
import java.util.*;
public class StringRevers...("Enter String:");
String str=br.readLine();
StringTokenizer st=new StringTokenizer(str);
int i=st.countTokens();
String str1[]=new String[6
question - Java Beginners
question Define a class to accept a sentence & print...; Hi Friend,
Try the following code:
import java.util.*;
class Words...=input.next();
StringTokenizer stk=new StringTokenizer(st);
while
user defined subclass of throwable class - Java Beginners
of throwable class Hi Friend,
Try the following:
1)
public class...);
}
}
}
class MyException extends Throwable {}
2)
public class ThrowableClass {
public static void main(String args[]) {
try
que 1 - Java Beginners
java.util.*;
class CountSameWords {
public static void main...();
StringTokenizer stk = new StringTokenizer(st);
while
Parsing a pharase - Java Beginners
some of them are
String.split methods
StringTokenizer and StreamTokenizer classes
Scanner class
This may help you
http
get the value from another class - Java Beginners
javax.xml.transform.stream.StreamResult;
public class xmlRead{
static public void main(String[] arg
StringTokenizer not returning proper result
StringTokenizer not returning proper result I have this code...
System.out.print("Enter the stack integers side by side with a space in between: ");
StringTokenizer st=new StringTokenizer(br.readLine());
int a[]=new int[9000
main function defnition in class - Java Beginners
java vowel program - Java Beginners
java vowel program Write a program to Print all the words that start... class UserInput{
public static void main(String[] args...);
String input = scan.nextLine();
final StringTokenizer st = new
Making Tokens of a String
. In the program a string is passed into a constructor of StringTokenizer
class. StringTokenizer is a class in java.util.package. We are using
while loop to generate... is to
create a class named StringTokenizing. Inside this class we will declare our
Java array - Java Beginners
Java array Q 1- write a program that counts the frequency...--
programming
java
hello
would be displayed as
hello java programming...
Hi friend,
Code to solve the problem :
Answer 5 :
public class
string - Java Beginners
codes:
1) import java.io.*;
import java.util.*;
class AlphabeticalOrder...*;
public class FirstLetter{
public static String capitalizeFirstLetter( String str ) {
final StringTokenizer st = new StringTokenizer( str, " ", true
java.util.StringTokenizer;
/**
* @author helper
*
*/
public class EmailExtractor... = reader.readLine()) != null) {
StringTokenizer st = new StringTokenizer(line
Program using String functions - Java Beginners
the following code:
import java.util.*;
class CountSameWords...();
String s1 = "the";
StringTokenizer stk = new StringTokenizer(st);
while (stk.hasMoreTokens | http://www.roseindia.net/tutorialhelp/comment/90906 | CC-MAIN-2014-35 | refinedweb | 2,296 | 50.84 |
CodePlexProject Hosting for Open Source Software
Hi,
I set the UploadFileProcessor of my Uploader control to "MySolutionName.MyFileProcessor,MySolutionName", just like the example.
But I'm getting an error when I try to upload files on my app.
The progress goes until the end. I think it's can't access my custom processor.
The class implements the provided interface correctly. It does nothing, like the example, but it still does not work.
=/
Regards.
If you have incorrectly specified the UploadedFileProcessorType, it should say "Error" as the status of the file, because it will throw an exception trying to create an instance of your class. If it doesn't, something else must be going on...
Also, UploadedFileProcessorType is "[Fully qualified class name with namespaces],[Assembly Name]".
For example, if I have an assembly called Vci.dll, and in it I create a class in Namespace1.Namespace2 called MyClass, I would specify:
UploadedFileProcessorType="Namespace1.Namespace2.MyClass,Vci"
Hope that helps.
I made a library called MyFileProcessor.dll, put a reference on my WebSite project (both are in same solution), then I wrote:
UploadedFileProcessorType="Namespace1.Namespace2.MyFileProcessor,MyFileProcessor"
I still get an error message.
Thanks.
EDIT: I removed the first Namespace. Now it calls: "ProjectNamespace.MyFileProcessor, MyFileProcessor"
'bout the error? The same.
I would need more information to help you diagnose the problem at this point -- are you able to run the project in the debugger and get more information about the error?
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://silverlightuploader.codeplex.com/discussions/65712 | CC-MAIN-2017-30 | refinedweb | 277 | 61.02 |
14 March 2008 15:41 [Source: ICIS news]
LONDON (ICIS news)--Second-quarter sulphur contract prices in China are set to jump on the back of higher spot values in a globally tight market, market sources said on Friday.?xml:namespace>
Although contract discussions are yet to begin in earnest, Middle East producers have indicated that they are looking for significant price increases of $150-200/tonne (€96-128/tonne) on first-quarter levels.
Such an increase would take prices to around $600-650/tonne FOB (free on board) ?xml:namespace>
Freight from the Middle East to
A hike of this magnitude would bring prices into line with current spot values, sources said. Abu Dhabi National Oil Company (Adnoc) set its March sulphur price at $640/tonne FOB, while a recent spot sale out of
Spot prices have jumped so far this year on the back of strong demand from the fertilizer sector and reduced output from the Middle East and other major supply regions
( | http://www.icis.com/Articles/2008/03/14/9108761/china-q2-sulphur-contract-prices-set-to-jump.html | CC-MAIN-2014-52 | refinedweb | 165 | 57.74 |
Composable Resources
In this tutorial, we're going to walkthrough.
For additional support, see the Playground Manual
Resources Owning Resources.
Resources Owning Resources: An Example
// The KittyVerse contract defines two types of NFTs. // One is a KittyHat, which represents a special hat, and // the second is the Kitty resource, which can own Kitty Hats. // // You can put the hats on the cats and then call a hat function // that tips the hat and prints a fun message. // // This is a simple example of how Cadence supports // extensibility for smart contracts, but the language will soon // support even more powerful versions of this. pub contract KittyVerse { // KittyHat is a special resource type that represents a hat pub resource KittyHat { pub let id: Int pub let name: String init(id: Int, name: String) { self.id = id self.name = name } // An example of a function someone might put in their hat resource pub fun tipHat(): String { if self.name == "Cowboy Hat" { return "Howdy Y'all" } else if self.name == "Top Hat" { return "Greetings, fellow aristocats!" } return "Hello" } } // Create a new hat pub fun createHat(id: Int, name: String): @KittyHat { return <-create KittyHat(id: id, name: name) } pub resource Kitty { pub let id: Int // place where the Kitty hats are stored pub let items: @{String: KittyHat} init(newID: Int) { self.id = newID self.items <- {} } destroy() { destroy self.items } } pub fun createKitty(): @Kitty { return <-create Kitty(newID: 1) } }
These definitions show how a Kitty resource could own hats.
The hats are stored in a variable in the Kitty resource.
// place where the Kitty hats are stored pub:
import KittyVerse from 0x01 // This transaction creates a new kitty, creates two new hats and // puts the hats on the cat. Then it stores the kitty in account storage. transaction { prepare(acct: AuthAccount) { // Create the Kitty object let kitty <- KittyVerse.createKitty() // Create the KittyHat objects let hat1 <- KittyVerse.createHat(id: 1, name: "Cowboy Hat") let hat2 <- KittyVerse.createHat(id: 2, name: "Top Hat") // Put the hat on the cat! let oldCowboyHat <- kitty.items["Cowboy Hat"] <- hat1 destroy oldCowboyHat let oldTopHat <- kitty.items["Top Hat"] <- hat2 destroy oldTopHat log("The cat has the hats") // Store the Kitty in storage acct.save<@KittyVerse.Kitty>(<-kitty, to: /storage/Kitty) } }
You should see an output that looks something like:
import KittyVerse from 0x01 // This transaction moves a kitty out of storage, // takes the cowboy hat off of the kitty, // calls its tip hat function, and then moves it back into storage. transaction { prepare(acct: AuthAccount) { // Move the Kitty out of storage, which also moves its hat along with it let kitty <- acct.load<@KittyVerse.Kitty>(from: /storage/Kitty)! // Take the cowboy hat off the Kitty let cowboyHat <- kitty.items.remove(key: "Cowboy Hat")! // Tip the cowboy hat log(cowboyHat.tipHat()) destroy cowboyHat // Tip the top hat that is on the Kitty log(kitty.items["Top Hat"]?.tipHat()) // Move the Kitty to storage, which // also moves its hat along with it. acct.save<@KittyVerse.Kitty>(<-kitty, to: /storage/Kitty) } }
You should see something like this output:
> "Howdy Y'all" > "Greetings, fellow aristocats!"
Whenever the Kitty is moved, its hats are implicitly moved along with it. This is because the hats are owned by the Kitty.
The Future is Meow! Extensibility is coming!! | https://docs.onflow.org/tutorial/cadence/07-resources-compose/ | CC-MAIN-2020-45 | refinedweb | 539 | 58.08 |
Static metadata proof of concept
all metadata are expressed in the setup.cfg file
when "python setup.py SOME_COMMAND" is invoked the metadata values located in the [global] section are passed to Distutils like if they were arguments for setup().
A setup.cfg.in file can be provided it's a template for setup.cfg. If found, Distutils will render it to create setup.cfg.
When executed, the template gets these values:
- platform: the value returned by sys.platform
- os_name: the value returned by os.name
- python_version: the python version string (2.5, 2.6, etc)
Right now the template engine in usage is Mako, but the version that will be included in Distutils will be a light template engine that only supports the %if-%endif syntax.
so you need mako to run the demo.
setup.cfg is re-built from setup.cfg.in everytime setup() is called, allowing it to run on target platform when the system is installed.
one may generate setup.cfg for his platform w/o having to get the whole distribution or to run setup(). A vanilla Python is ennough
Distutils changes are located in configure.py in the proof of concept, but this file will disappear if the change is added into distutils | https://bitbucket.org/tarek/staticmetadata/src | CC-MAIN-2017-43 | refinedweb | 209 | 67.86 |
The wumpus-core quite efficient (no unnecessary stack operations).. There is a set of combinators for composing pictures (more-or-less similar to the usual pretty printing combinators).
With revision 0.15.0 I've added three extra helper modules that are not really part of the "core", but they provide lists of named colours and fonts.
WARNING...
wumpus-core is likely to change quite a bit with the next revision as I want to see if I can make Primitives support affine translations. Hopefully this will not change the API significantly though it will mean the generated SVG and PostScript files will be different (possibly clearer). Also the Core.BoundingBox module is not too well designed, too many functions that do not offer distinct functionality. Some functionality was removed in this revision (0.17.0) and more is likely to follow.
GENERAL DRAWBACKS...
For actually drawing pictures, diagrams, etc. Wumpus is very low level. I've worked its design permits a simple implementation - which is a priority. Text encoding an exception - I'm not sure how reasonable the design is. The current implementation appears okay for Latin 1 but I'm not sure about other character sets, and I may have to revise it significantly.
With revision 0.14.0, I've added the first draft of a user guide. Source for the guide is included as well as the PDF as there is an extra example picture. wumpus-extra hasn't received any more attention unfortunately, so Wumpus is still really a bit too primitive for general use. However,.16.0 to 0.17.0:
Added Core.WumpusTypes to export opaque versions of datatypes from Core.PictureInternal. This should make the Haddock documentation more cohesive.
Moved the Core.PictureLanguage module into the Extra namespace (Extra.PictureLanguage). This module change in detail, if not in spirit in the future as I'm not to happy with it. Also this model is somewhat "higher-level" than the modules in wumpus-core, so a different home seems fitting.
Removed CardinalPoint and boundaryPoint from BoundingBox.
Argument order of textlabel and ztextlabel changed so that Point2 is the last argument.
PathSegment constructor names changed - this is an internal change as the constructors are not exported.
Primitive type changed - moved Ellipse properties into PrimEllipse type - internal change.
Removed dependency on 'old
Properties
Modules
- Wumpus
- Wumpus.Core
- Extra
Downloads
- wumpus-core-0.17.0.tar.gz [browse] (Cabal source package)
- Package description (included in the package)
Maintainers' corner
For package maintainers and hackage trustees | http://hackage.haskell.org/package/wumpus-core-0.17.0 | CC-MAIN-2014-52 | refinedweb | 419 | 51.55 |
confstr()
Get configuration-defined string values
Synopsis:
#include <unistd.h> size_t confstr( int name, char * buf, size_t len );
Since:
BlackBerry 10.0.0
Arguments:
- name
- The system variable to query; see below.
- buf
- A pointer to a buffer in which the function can store the value of the system variable.
- len
- The length of the buffer, in bytes.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The confstr() function lets applications get or set configuration-defined string values. This is similar to the sysconf() function, but you use it to get string values, rather than numeric values. By default, the function queries and returns values in the system.
In order to set a configuration string, your process must have the PROCMGR_AID_CONFSET ability enabled. For more information, see procmgr_ability().
The name argument represents the system variable to query. The values are defined in <confname.h>; at least the following name values are operating system name.
- _CS_TIMEZONE
- Time zone string (TZ style)
- _CS_VERSION
- The current OS version number.
The configuration-defined value is returned in the buffer pointed to by buf, and will be ≤ len bytes long, including the terminating NULL. If the value, including the terminating NULL, is greater than len bytes long, it's truncated to len - 1 bytes and terminated with a NULL character.
To find out the length of a configuration-defined value, call confstr() with buf set to NULL and len set to 0.
To set a configuration value:
- OR the name of the value (e.g. _CS_HOSTNAME) with _CS_SET.
- Put the new value in a NULL-terminated string.
- Set the value of len to 0.
Returns:
A nonzero value (if a "get" is done, the value is the length of the configuration-defined value), or 0 if an error occurs ( errno is set).
You can compare the confstr() return value against len to see if the configuration-defined value was truncated when retrieving a value (you can't do this when setting a value).
Errors:
- EINVAL
- The name argument isn't a valid configuration-defined value.
- EPERM
- The calling process doesn't have the required permission; see procmgr_ability().
Examples:
Print information similar to that returned by the uname() function:
#include <unistd.h> #include <stdio.h> #include <limits.h> #define BUFF_SIZE (256 + 1) int main( void ) { char buff[BUFF_SIZE]; if( confstr( _CS_SYSNAME, buff, BUFF_SIZE ) > 0 ) { printf( "System name: %s\n", buff ); } if( confstr( _CS_HOSTNAME, buff, BUFF_SIZE ) > 0 ) { printf( "Host name: %s\n", buff ); } if( confstr( _CS_RELEASE, buff, BUFF_SIZE ) > 0 ) { printf( "Release: %s\n", buff ); } if( confstr( _CS_VERSION, buff, BUFF_SIZE ) > 0 ) { printf( "Version: %s\n", buff ); } if( confstr( _CS_MACHINE, buff, BUFF_SIZE ) > 0 ) { printf( "Machine: %s\n", buff ); } if( confstr( _CS_SET | _CS_HOSTNAME, "myhostname", 0 ) != 0 ) { printf( "Hostname set to: %s\n", "myhostname" ); } return 0; }
Classification:
Caveats:
The confstr() function is part of a draft standard; its interface and/or behavior may change in the future.
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/c/confstr.html | CC-MAIN-2019-35 | refinedweb | 510 | 57.57 |
Welcome to the finest collection of Cognos Interview questions. These questions are specially collected for all candidates whether they are fresher or experienced. This article will help you a lot to crack your interviews. This list of questions includes basic questions as well as high-level questions. Just take a look at the questions, if you are going to sit in an interview.
IBM Cognos Interview Questions
Cognos Interview Questions
The Cognos gateway is a point or starting point where the first request received and then send to the BI server. Cognos gateways used to encrypt the request adds necessary updating like variables or namespaces and then transfers the information to application servers.
The following gateways are supported by Cognos:
- CGI: It stands for Common Gateway Interface known as the default gateway. It is a basic gateway.
- ISAPI: This is called an Internet Service Application Interface which is for the windows environment. It is best suitable for windows IIS (Internet Information Service)
- Servlet: This gateway is suitable for those application servers who supports servlets.
- Apache_mod: This gateway is used for Apache servers.
There are many types of indexes used in Cognos. These are:
- Bitmap Indexes: A bitmap index is used to replace a list of rowids with the key values. Due to the presence of low cardinality and low updates, it is more efficient for data waring.
- B-tree indexes: B-tree indexes are used on the availability of high-cardinality, generally when we have too many distinct columns. It is mainly used for OLTP.
- Function-based indexes: These are created on the results of an expression or function.
- Reverse key and Composite Indexes: In the reverse key, the keys of an index are reversed and then stored.
To be considered a durable model, it needs to follow some criteria and these are given below.
- It is always a design language that is built up for the model which is different from the areas the clients can select from. As an example, English (Zimbabwe) is assigned for the language English.
- The name of the languages of other than the design language can be changed if required but for design language, it is not possible to change namespaces, query items, query languages that are published in a package.
- It is not possible to change the structure of the published package's query subject, query items, dimensions, namespace(s), shortcuts etc. The reason behind this is that the structure is kept in specification's definition with design language's name.
- The element must be in model.xml in the version IBM Cognos 8.4.1.
Few differences between Cognos Dynamic cube and Transformer/TM1 cube and PowerPlay cube are:
The different types of the report in Cognos are:
- Chart report
- List report
- Blank report
- Repeater report
- Cross report
A catalog is a file which contains information such as folders, columns, conditions, etc. on the database tables. The highlighted point here is that it doesn’t contain any data instead it contains metadata definitions and table structures. It has mainly four types:
- Personal Catalog: It is a type of catalog in which only one user can create or modify a catalog or report.
- Shared Catalog: In this, one user (creator) can create the catalog but other users can also create their own reports with the use of this catalog.
- Distributed Catalog: In this, anyone can change or modify their own created distributed catalogs and can also make their own reports.’
- Secured Catalog: In this type of catalog, no one can modify the reports or catalogs. They are fully secured.
Aggregate Advisor help in increasing the overall performance.
- It is a component of the Dynamic Query Analyzer
- It can be Support database either in-memory aggregate or aggregate tables or maybe both.
- Aggregate Advisor utilize statistics and cube’s model
- Aggregate Advisor utilize workload log files which are available from the execution of reports
- To recover the summed value and put the values in their specific aggregate cache so that they can be used for query processing, aggregate execute the important SQL statement after the cube restart.
- It works during the off-peak meanwhile in the non-critical business hours
Cognos was an Ottawa company which is based on Ontario performance management software.
Dynamic cubes are present in-memory Online Analytical Program (OLAP) which loads data from the relational data source.
- It is built in schemes like star or snowflake.
- It is introduced in the versions IBM Cognos BI 10.2 or above.
- It is the technology of IBM Cognos BI query stack.
- Its main purpose is to a response regarding reports and studies on a great volume of data.
- It enables the high performance for interaction and studies the data.
Following are the ways to change non-durable models into durable model
- In the case of the model does not have a design language, then the steps are-
- It is the vital part of the method to attach the design language to the model. After that combine it to the languages of each package which is on the basis of the model. When it is done, save and close the model.
- Open the model.xml file with the help of using XML or text editor. After that modify the value to en-zw from en which is present near the top of the file.
- If you are using IBM Cognos 8.4.1 you need to add a blank line into the model.xml file which is located just under the modified element. When it is done, insert true to the new blank line.
- If you are using IBM Cognos 10 you need to fix the project level property. It is advised to make use of Design Locale for Reference ID to a value of true.
These are mainly of three types:
- Standard Folder
- Package Folder
- Metrics Folder
It indicates the group of users who have the same rights and required to access the same data. It is the duty of the administrator to build the user class and catalog. This job of the creating and maintaining user class can also be performed by the other users of the organization in their geographical area.
You are also able to build a catalog and you can add the user to it. You can also add the users to the personal copy of a distributed catalog. You will get all the rights from the administrator so you can insert or make changes of user classes
It is the studio in which Cognos perform its Ad-hoc queries.
- It is possible to analyze, monitor and report on information that is collected from different devices with the help of IBM Predictive Maintenance and Quality. Moreover, you can get the information regarding the actions from Predictive Maintenance and Quality.
- IBM Predictive Maintenance and Quality can employ to accomplish the following assignments:
- It is helpful in preventing costly unexpected break by earlier prediction of failure of any instrumented.
- It makes the changes in the maintenance to diminish break and help in reducing the cost of repair.
- It looks after the least costly and most powerful solution to the problem by maintaining the logs. It also helps in maintaining the cycles.
- It goes to the root of the cause of the problem vigorously so that clients can take suitable action to repair it.
- It determines the safety, security and quality issue to minimize the chances of failure.
There are different types of securities that can be applied in Framework manager.
- Data Security: It is used to secure the data in framework manager. In this, first, you create a security filter which is then applied to a specific query. The process involves the control of data which is done by the filter. This checks the data to be shown to the users when they set up their reports.
- Object Security: It is able to secure the object just by supporting the users access to the object, by keeping hidden from the users and by rejecting users access to the object.
- Package Security: In this kind of security, the user is able to apply security to a package and can check who has access to the package.
Related Interview Questions
-
A+ interview Questions
-
Git Interview Questions
-
GWT interview questions
-
IELTS Interview Questions
-
Interview Questions for Hiring managers
-
Linux Interview questions
-
Matlab Interview Questions
-
OpenGL Interview Questions
-
Openstack Interview Questions
-
Aerospace Interview questions
-
PLC Interview Questions
-
Memcached Interview Questions
-
Product manager interview questions
-
Soap interview questions
-
Teacher Interview Questions
-
Xml interview Questions
-
XSLT interview questions
-
Yarn Interview Questions
-
Soap UI Interview Questions
-
Catia V5 Interview questions
-
Software Engineer | https://www.onlineinterviewquestions.com/cognos-interview-questions/ | CC-MAIN-2019-22 | refinedweb | 1,435 | 53.61 |
First an estate tax is just that... a tax on the estate of the person who has died (to be paid BY the estate) before distribution to those that inherit (the heirs, or beneficiaries)...a death tax calculated based on the net value of property owned by a deceased person on the date of death.
In inheritance tax, is a tax on what IS INHERITED by someone (paid BY the person that has received something FROM an estate), ...a death tax that is calculated based on who receives a deceased person's property.
Regarding INHERITANCE tax in Mew Jersey:."
Regarding ESTATE tax in New Jersey:).
How does the state on NJ treat gifts? Are they pulled back into the estate value calculation? Is there a time limit as with the federal gifting rules whereby it is not considered part of the estate? What estate tax rates apply when the beneficiaries are the children of the deceased?
Let me pull that for you
ok, take your time, I just want to get it right.
thanks
Just to make sure I understand what you wrote. If there is an estate valued at $1,500,000 and the Donor makes completed gifts to her children in the amount of $800,000 ,then dies the next day , the estate (for NJ estate tax purposes would be valued at $700,000 and tax would be due on $25,000($700,000- $675,000). Is this correct?
I have looked at page 10 of the NJ estate tax return and I am confused by the tax associated with the ranges, Where does this table take into account the $675,000 exemption? It appears as if taxes are applied on values over $615,000. Please clarify.
What do you expect from government work? Let me know what you find out.
Bob
I saw that as I was awaiting your response. Page 10 rates appear to apply to filing a "Simple Form" as there appears to be a $60,000 exemption amount. Is there a "Long Form" rate schedule?
The figure on line 7 from the NJ inheritance tax return would not come into play in this case, since all the beneficiaries are children of the "future" deceased, correct? or do you need to complete it , even though no tax is due so you have certain figures which then get transposed onto the Estate Tax return?
In looking at the NJ Estate Tax Return there are 2 methods by which one can calculate the NJ Estate Tax:
1) Column A - Simple Form
2) Column B - Form 706(2001)
Is one free to choose a method or are there qualifiers that direct you a specific method? In addition the second method uses figures from the Federal Form 706, If you do not file form 706 are you then
compelled to file under the Simple Method( Column A)?
Furthermore, in looking at page 1 of the NJ Estate Tax return, lines 1 and 2 reference form IT-R , can you provide a link to this form??
Thanks,
The children are exempt, as class A beneficiaries... the inheritance tax form is just used to calculate the "SIMPLE" method for calculating the ESTATE tax, when no federal 706 is done.
.
In looking at the NJ Estate Tax Return there are 2 methods by which one can calculate the NJ Estate Tax:
1) Column A - Simple Form
2) Column B - Form 706(2001)
The Director has prescribed a Simplified Tax System method pursuant to the provisions of the revised statute. This method may only be used in those situations where a Federal estate tax return has not and will not be filed nor is a tax return required to be filed with the Internal Revenue Service. The Simplified Tax System is not intended for use in all estates. Any attempt to develop a tax system which could be used in all situations and which would produce a tax liability similar to that produced using the Form 706 method would, of necessity, result in a tax system as complex as the Federal tax itself. The Simplified Tax System requires that a Form IT- Estate be prepared and filed along with a New Jersey inheritance tax return (Form IT-R) completed in accordance with the provisions of the inheritance tax statute in effect on December 31, 2001.
Inheritance Tax Resident Return
from:?
My apologies, this IS correct, although others have contested. From a tax administration standpoint, the department has proscribed that it be added back.
AND (may be stating the obvious now) ... the estate tax form IS the simple method .. Using the federal 706 is the method to be used when a federal 706 is needed.
The estate tax form is used either way (transferring from the 706, when a 706 is needed ... transferring from the inheritance tax form - the simple method - when no 706 is needed)
... sorry it took us tis long to get here.
I just read that taxable gifts are pulled back in the NJ Estate Value calculation, therefore, gifts of $14,000 /year or less would not be pulled back for NJ? Would you agree with this?
Thanks, XXXXX XXXXX it for now | http://www.justanswer.com/tax/82dxy-1-regarding-state-new-jersey-what-difference.html | CC-MAIN-2015-27 | refinedweb | 862 | 69.82 |
#include <string.h> size_t strxfrm(char *restrict s1, const char *restrict s2, size_t n); size_t strxfrm_l(char *restrict s1, const char *restrict s2, size_t n, locale_t locale);.
On error, strxfrm() and strxfrm_l() may set errno but no return value is reserved to indicate an error.
The following sections are informative.
The fact that when n is 0 s1 is permitted to be a null pointer is useful to determine the size of the s1 array prior to making the transformation.
The Base Definitions volume of POSIX.1-2008, <string.h>
Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see . | https://man.linuxreviews.org/man3p/strxfrm.3p.html | CC-MAIN-2019-47 | refinedweb | 123 | 62.98 |
In the previous post in this series I showed you a prototype of what the PlaceOrder for an OrderService in an e-commerce application may look like. To demonstrate mocking, we’ll be developing a production version of this business logic using TDD. That means we start with a business requirement:
Assuming a user has logged into the application and placed items in a shopping cart, the application should enable to user to place an order based on the items in the shopping cart. Users should be able to order any number of different items they wish. Users may not order a zero or lower quantity of any item. If the user attempts to order a quantity of zero or less for any item an exception should be thrown and the entire order aborted (the shopping cart should NOT be emptied). Once the order quantity validation rule has been validated the order should be saved to the database via the Orders Database service and linked to the customer who placed the order. Calls will be made to the billing and order fulfillment systems to launch their respective workflows. A log entry shall be made to show that the order was placed. If the order cannot be placed for any reason other than a failure of the validation rules, the error will be logged and an exception will be thrown. Upon a successful placement of an order the shopping cart will be emptied and the order id (from the Order Database service) will be returned.
This is quite a big business requirement. It will surly result in several test cases we can use to write unit tests, but let’s start with the easiest first:
When a user attempts to order an item with a quantity of greater than zero the order id should be returned.
This is a simple case and gives us a good place to start. As this is the first requirement and first test, I have no code or even a Visual Studio solution. To get started I’m going to create a blank Visual Studio solution for my project, which will call TddStore. Next I’ll create a class library for my unit tests called TddStore.UnitTests and I’ll use NuGet to add a reference to NUnit to my unit test library. If you need a refresher on how to do any of these tasks you can check out the post for Day Two. After completing these steps my Solution Explorer should look something like Figure 1:
Figure 1 – My solution and unit test project are created and NUnit has been referenced
Using the JustCode rename command (also shown in the Day Two post) I rename Class1 to OrderServiceTests. I use the NUnit TestFixture and Test attributes to write my first test (I’ve used JustCode to get rid of my unused using statements):
1: using System;
2: using System.Linq;
3: using NUnit.Framework;
4:
5: namespace TddStore.UnitTests
6: {
7: [TestFixture]
8: class OrderServiceTests
9: {
10: [Test]
11: public void WhenUserPlacesACorrectOrderThenAnOrderNumberShouldBeReturned()
12: {
13:
14: }
15: }
16: }
(get code sample)
For this example we will assume that another team has been working this application as well and has provided me with the implementation of ShoppingCart and Order as well as interfaces for the OrderDataService, BillingService, FulfillmentService and Logging Service in a class library project. This TddStore.Core project is also where I’m expected to place my OrderService and any ancillary classes I create to support it. If you are following along with this example you should create a C# class library project in your solution called TddStore.Core and add these files. You should then add a reference to this new Core project from your unit test project.
For this test I need to create an order by calling a yet to be defined method on a yet to be created class. Per the TDD workflow, I write my test first:
1: [Test]
2: public void WhenUserPlacesACorrectOrderThenAnOrderNumberShouldBeReturned()
3: {
4: //Arrange
5: var shoppingCart = new ShoppingCart();
6: shoppingCart.Items.Add(new ShoppingCartItem { ItemId = Guid.NewGuid(), Quantity = 1 });
7: var customerId = Guid.NewGuid();
8: var expectedOrderId = Guid.NewGuid();
9: var orderService = new OrderService();
10:
11: //Act
12: var result = orderService.PlaceOrder(customerId, shoppingCart);
14: //Assert
15: Assert.AreEqual(expectedOrderId, result);
16: }
(get sample code)
The code in this test should be pretty easy to understand by now; in my “Arrange” section I’m creating an instance of my shopping cart and adding an item to it. I’m also creating dummy order and customer id’s to use for my test. Finally I’m instantiating an instance of the OrderService to test against. My “Act” and “Assert” sections are also pretty self-explanatory; I’m calling PlaceOrder with the customer Id and shopping cart I create in the Arrange and in the Assert I’m verifying that the order id I get back as my result is what I was expecting.
At this point I run the test to watch it fail and then start writing the simplest thing that will make the test pass. Eventually, I come up with this:
1: public class OrderService
2: {
3: public object PlaceOrder(Guid customerId, ShoppingCart shoppingCart)
4: {
5: // TODO: Implement this method
6: throw new NotImplementedException();
7: }
8: }
Based on my test case, I need a reference to the OrderDataService class. I refactor my OrderService class to accommodate injection of this via the constructor:
3: private IOrderDataService _orderDataService;
5: public OrderService(IOrderDataService orderDataService)
6: {
7: _orderDataService = orderDataService;
8: }
9:
10: public object PlaceOrder(Guid customerId, ShoppingCart shoppingCart)
11: {
12: // TODO: Implement this method
13: throw new NotImplementedException();
I’ve given my PlaceOrder method access to an instance of the OrderDataService (via the IOrderDataService interface), but now my test no longer compiles. The reason is that the default constructor for OrderService no longer exists; I need to pass something that implements the IOrderDataService interface to my OrderService.
As I mentioned in my previous post I’ll be using Telerik’s JustMock mocking framework for this series. In this post I’ll be using the JustMock Lite framework. Since I’m using JustMock Lite I can use NuGet to add JustMock to my TddStore.UnitTests library. Note: If you are using the NuGet console the name of the JustMock Lite package is JustMock.
Now that I have access to JustMock in my TddStore.UnitTests project I can create a stub for the IOrderDataService.
4: using TddStore.Core;
5: using Telerik.JustMock;
6:
7: namespace TddStore.UnitTests
8: {
9: [TestFixture]
10: class OrderServiceTests
11: {
12: [Test]
13: public void WhenUserPlacesACorrectOrderThenAnOrderNumberShouldBeReturned()
14: {
15: //Arrange
16: var shoppingCart = new ShoppingCart();
17: shoppingCart.Items.Add(new ShoppingCartItem { ItemId = Guid.NewGuid(), Quantity = 1 });
18: var customerId = Guid.NewGuid();
19: var expectedOrderId = Guid.NewGuid();
20:
21: var orderDataService = Mock.Create<IOrderDataService>();
22: OrderService orderService = new OrderService(orderDataService);
23:
24: //Act
25: var result = orderService.PlaceOrder(customerId, shoppingCart);
26:
27: //Assert
28: Assert.AreEqual(expectedOrderId, result);
29: }
30: }
31: }
The first step is to create the mock object. Before I can do that I need to add a “using” statement to the beginning of the OrderService file for the Telerik.JustMock namespace (line 5). On line 21 I create a mocked instance of the IOrderDataService interface. As this object implements the IOrderDataService interface I can pass it in as a constructor argument for OrderService, as seen on line 22. Next I need to add some logic to my business domain method (PlaceOrder on the OrderService) to use the IOrderDataService instance:
1: public object PlaceOrder(Guid customerId, ShoppingCart shoppingCart)
2: {
3: var order = new Order();
4: return _orderDataService.Save(order);
5: }
get sample code)
This code bears some explanation. According to our test case we simply need to pass the method under test a shopping cart with at least one item in it and get back an order id. There’s nothing specific in this test case about what to do if a validation rule fails or how to build an order object from the items in our shopping cart. The current unit test does meet our current test case, and therefore verifies that the code will perform as described by the test case.
We are still (and always) in the phase where we are trying to write the simplest code to make the test pass. At this point you probably have a couple questions. How are we actually sure the validation rules are present and working? How are we sure that the instance of Order that we are creating is being built correctly? The answer to both of these questions is that as we continue to fulfill the requirement we will create more test cases, including test cases to cover those two questions. Once we have test cases we can write unit tests. Once we have unit tests we can write code. For the purposes of this post we’ll continue with our current test cases, but we will address these two and other test cases from this requirement in future posts.
At this point I want to run my tests again. The test should be failing because while I’ve supplied all the needed dependencies that have enabled me to write the code that should make the test pass, I’m still not getting back the value for order id that I expected (Figure 2)
Figure 2 – Tests are failing
While we have created a mock object to stand in for our OrderDataService, we haven’t told it what to do when someone actually calls it. Referring to the list of types of mock in the previous post, what we currently have is a “Dummy". We need to elevate it to an actual Stub.
10: var orderDataService = Mock.Create<IOrderDataService>();
11: Mock.Arrange(() => orderDataService.Save(Arg.IsAny<Order>())).Returns(expectedOrderId);
12: OrderService orderService = new OrderService(orderDataService);
14: //Act
15: var result = orderService.PlaceOrder(customerId, shoppingCart);
16:
17: //Assert
18: Assert.AreEqual(expectedOrderId, result);
19: }
On line 11 I am using the Mock.Arrange command to setup my orderDataService mock object, which essentially makes this orderDataService mock object into a stub. The Arrange method takes a Linq expression describing the method on the specific stub object I want to define a behavior for. In this case I’m telling my stub that I want it to respond to calls to the Save method. As part of this Linq expression I can define a parameter list for my stub. I can specify specific value if I want. For example, if Save took an integer as a parameter I could state that I want this particular arrangement on this stub to only respond when the Save method is called with an integer value of 42. If I called the mock with a value other than 42 it would return the default value of its particular return type (for example, it would return zero in the return type was an int, long or any numerical type). This is why when I ran the tests previously the orderDataService mock returned an empty (all zero’s) Guid. This is referred to as “loose mocking.” Will discuss loose and strict mocking in a future post.
In this case I’m passing in an instance of an Order object. I don’t really need to be worried about making sure I get a specific Object right now, so I’m using what’s known as a “Matcher.” A Matcher is essentially a way of telling an arrangement that I don’t want it to be too concerned with the specifics of a parameter, I just want to define a behavior for parameters that meet a certain pattern. In this case I’m telling JustMock that I just care that an Order object has been passed in as a parameter. I don’t care where it came from or what’s in it so long as it’s an instance of Order. Matchers are powerful tools in mocking and I’ll be explaining them in a future post.
One thing to keep in mind when writing arrangements for stubs or mocks is the fundamental difference between value types and reference types in .NET. When I create an arrangement with a value type the mock evaluates whether the parameter matches or not by looking at the values. One is always one and true is always true. This mirrors how value types are treated in .NET in general. When working with reference types the mock is not going to validate that the two objects are equal, it’s instead going to attempt to verify that they are the same object. For this reason, if I were to create an Order object in my test and use that as my expected parameter it would not be able to match the parameters as they are two different instances of Order. I’ll discuss how to handle this in future posts, but for now I just wanted to remind you of this aspect of testing and .NET development.
Finally I’m telling my mock that if a call is made to my mock or OrderDataService and it matches the criteria defined in my Arrange expression I want it to return a specific value, in this case the expectedOrderId. And that’s it; with one line of code I have elevated my Dummy mock object to a Stub. If I run my test now I can see that it passes and I have fulfilled my current test case (Figure 3)
Figure 3 – The test passes
We’ve covered quite a bit of ground in this post. We defined a requirement then derived a test case from that requirement. We used our test case to define a unit test and then wrote just enough code to make that test pass.In addition to all that work we wrote our first mock object; a stub that responded to a method call.
Stubs are the one of the easiest and perhaps the most common types of mocks you will create. But they are one tool in your mocking tool box; there are others you will need to understand to truly be a productive TDD developer. Mocking is a deep, and at times, complex topic. We will be spending the bulk of the remaining posts in this series discussing the different types of mocks as well as patterns for successful mocking as well as anti-patterns to avoid. | http://www.telerik.com/blogs/30-days-of-tdd-day-12-working-with-stubs | CC-MAIN-2017-22 | refinedweb | 2,404 | 60.45 |
.
What is
-betterC, and Why do I Need it?
The short answer is that most D programmers don’t need it. The longer answer is that it does two things: first, it restricts the language to a lower-level subset (that’s still higher-level than C), and, second, it changes the implementation of compiled code a little so that it only depends on the C runtime, and not the D runtime.
If you just want control over things like GC and runtime features for performance reasons, you can already get it
without
-betterC. You can read more about that in this GC series on the official blog, and in my previous post about the D runtime itself.
What
-betterC does provide is an intermediate
language that integrates very well with both C code and D code. Walter envisions this as a way for D to penetrate more
into parts of the software world that are still dominated by C. For example, practically all languages today still run
on top of a layer of operating system libraries that are written in C (and C++ in the Windows world). D’s runtime
itself depends on this layer, so D can’t ever replace C unless runtimeless programming is possible.
The
-betterC switch is a little controversial, and I agree
that things could be better in the future. But
-betterC is
here today, and ultimately we’re only going to figure out how to use D as a better C by trying it out. That’s why I
originally published that post about runtimeless D, even though it was a horrible hack.
What’s New?
There are two main ways betterC programming has improved since I last wrote about it. One pain point was the
over-dependence on runtime reflection in the language implementation, even for things like integer array comparisons
that could be implemented with just
memcmp(). A lot of work
has been done since then to replace reflection with templates, which is good news even for programmers who aren’t doing
low-level stuff. Lucia Cojocaru presented some of this work at
DConf 2017.
The other area of improvement is in the
-betterC switch
itself. Back then there were only two places in the DMD compiler where
-betterC had any effect at all, so most runtime dependencies were left
in. Simply defining a struct, for example, would still cause the D compiler to insert
TypeInfo instances for runtime type information, which depend on base
class implementations that are defined in the D runtime library.
assert statements would still be implemented using a D runtime
implementation, not the C runtime implementation. These are the two most obvious problems that have been fixed.
-betterC Take Two
In that old post, I took some D code, compiled it, hacked out the runtime, and then linked it directly to some C code without the D runtime. Let’s see how things work now. Here’s the D code again:
module count; @nogc: nothrow: import core.atomic : atomicOp, atomicLoad; extern(C) { int count() { scope(exit) counter.addOne(); return counter.getValue(); } } private: shared struct AtomicCounter(T) { void addOne() pure { atomicOp!"+="(_v, 1); } int getValue() const pure { return atomicLoad(_v); } private: T _v; } unittest { shared test_counter = AtomicCounter!int(42); assert (test_counter.getValue() == 42); test_counter.addOne(); assert (test_counter.getValue() == 43); } shared counter = AtomicCounter!int(1);
And here’s the C code:
#include <stdio.h> int count(); // From the D code int main() { int j; for (j = 0; j < 10; j++) { printf("%d\n", count()); } return 0; }
Here’s what happens now (on a GNU/Linux system):
$ dmd --version DMD64 D Compiler v2.076.0-b2-dirty Copyright (c) 1999-2017 by Digital Mars written by Walter Bright $ ls count.d program.c $ dmd -c -betterC count.d $ gcc count.o program.c -o program count.o:(.data.DW.ref.__dmd_personality_v0+0x0): undefined reference to `__dmd_personality_v0' collect2: error: ld returned 1 exit status
Damn. So close. The D compiler has left in some exception handling data structures, even though
-betterC isn’t supposed to support exceptions. You’ll see I’m using the
new DMD beta, and there’s already an open bug report and pull
request for this kind of problem, so I expect it’ll be fixed soon.
I’ll update this post when it
is.
Here’s a quick workaround for now (read the original linker hacking article for an explanation):
$ objcopy -R .data.DW.ref.__dmd_personality_v0 -R .eh_frame count.o $ gcc count.o program.c -o program $ ./program 1 2 3 4 5 6 7 8 9 10
It might not look like much, but it’s a huge improvement. Thanks to all the developers who helped make it happen. | https://theartofmachinery.com/2017/08/26/new_betterc.html | CC-MAIN-2021-39 | refinedweb | 785 | 64.41 |
Hello,
I’m creating a music sequencer application that consists of the following:
– 6 ‘instruments’: different types of sound, e.g. melody, bass, percussion.
– 6 loops (all of the same length) per instrument: e.g. melody 1, melody 2, melody 3.
– Combinations of the above that are generated dynamically.
This means that always a maximum of 6 sounds will be playing. It is possible that no sounds play at all for one bar. Basically it is a DJ mixing application.
I’m running into a few problems and I hope you can help me fix these.
Syncing. This might sound a bit ridiculous, but when I run the playSound function 6 times by using a for-loop, the sounds wouldn’t start playing at [b:2n03vl8q]exactly[/b:2n03vl8q] the same moment, right? I mean, calculating this for-loop takes time, so the sounds are not completely in sync to start with. What is the best way, sync-wise, to start playing a group of sounds?
Stitching. I looked into the realtimestitching example and this looks like the way to go for me. In order for real-time [b:2n03vl8q]dynamic[/b:2n03vl8q] stitching to work I need to set up some sort of queue of the next samples that need to be played. What is the best way of achieving this in FMOD?
I’m looking forward to your advice, thanks a lot in advance!
- Abel.
BTW, The FMOD Ex version I’m using is 4.26.09.
- abeldebeer asked 8 years ago
- You must login to post comments
Thank you cxvjeff and Peter for your replies!
I have a question about your suggestions.
[code:3oscz78z]createChannelGroup( &channelGroup );
channelGroup->setPaused( true );
for ( count = 0; count < numChannels; count++ )
{
createSound( &sound[ count ] );
playSound( sound[ count ], &channel[ count ] );
channel[ count ]->setChannelGroup( channelGroup );
}
channelGroup->setPaused( false );[/code:3oscz78z]
With the above, when the channelGroup is unpaused, all sounds play at the same time. Could you explain what the big difference is between this approach and the setDelay example code? The setDelay code seems pretty thorough so I reckon that would be the preferred method, but I’d like to understand it a bit more.
Slightly off-topic, I’m developing on Linux (Fedora) and I can’t run the setDelay example because of some missing (Windows?) libraries. Is there a Linux version available as well or could you possibly point me in the right direction to get those libraries?
Thanks again for your time!
- Abel.
There is no guarantee that unpausing the channel group will make them synchronised. For performance reasons that function doesn’t lock the mixer so it’s effectively the same as looping though: channel[i]->setPaused(false).
To get the setDelay example running on linux you can use the ‘wincompat.h’ header shipped with the examples instead of windows.h, and try changing __int64 to int64 or ‘singed long long’. Let me know how it goes.
-Pete
I’ve got the setDelay example working, thanks!
Now, since I’m not an experienced C++ programmer, it’s pretty hard for me to filter out the way I can use this for my app. It would be very helpful if you could provide me with a code snippet to show the setDelay function behaviour. I want to start 6 different samples at the same time. They are of the same length.
I hope this is enough info.
Thanks again!
[quote:2mjrxkcg] I want to start 6 different samples at the same time. They are of the same length. [/quote:2mjrxkcg]
To highlight some important parts of the example:
You need to make sure m_min_delay is set:
[code:2mjrxkcg]ERRCHECK(m_system->getDSPBufferSize(&m_min_delay, 0));
m_min_delay *= 2;[/code:2mjrxkcg]
Scheduling a channel to play immediately:
[code:2mjrxkcg]// the previous channel isn’t playing; schedule ASAP
ERRCHECK(m_system->getDSPClock(&start_time.mHi, &start_time.mLo));
start_time += m_min_delay;
ERRCHECK(m_channels[i]->setDelay(FMOD_DELAYTYPE_DSPCLOCK_START,
start_time.mHi, start_time.mLo));[/code:2mjrxkcg]
-Pete
Hi again,
I tried experimenting with the setDelay() code, but I can’t get it to work properly.
When I use the code provided below, there is still an audible delay compared to (un-)pausing a ChannelGroup. So I reckon I am not implementing the setDelay() function the right way.
Your help is much appreciated!!!
- Abel.
PS: the loaded samples are all of the same length: 7500 ms.
[code:3vxdgf2c]#include <string>
include <vector>
include <map>
include "../fmod/inc/fmod.hpp"
include "../fmod/inc/fmod_errors.h"
include "../fmod/common/wincompat.h"
void fModErrorCheck( FMOD_RESULT result )
{
if (result != FMOD_OK)
{
printf("FMOD error! (%d) %s\n", result, FMOD_ErrorString(result));
exit(-1);
}
}
void fModVersionCheck( unsigned int version )
{
if ( version < FMOD_VERSION )
{
printf( "Error! You are using an old version of FMOD %08x. This program requires %08x\n", version, FMOD_VERSION );
exit(-1);
}
}
// simple 64-bit int handling types
typedef unsigned long long Uint64;
class Uint64P
{
public:
Uint64P(Uint64 val = 0)
: mHi(int(val >> 32)), mLo(int(val & 0xFFFFFFFF))
{ }
Uint64 value() const { return (Uint64(mHi) << 32) + mLo; } void operator+=(const Uint64P &rhs) { FMOD_64BIT_ADD(mHi, mLo, rhs.mHi, rhs.mLo); } unsigned int mHi; unsigned int mLo;
};
using namespace std;
define NUM_SOUNDS 30
int main(int argc, char argv[])
{
FMOD::System system;
FMOD::Channel* channels[ NUM_SOUNDS ];
FMOD::Sound* sounds[ NUM_SOUNDS ];
unsigned int version, count;
unsigned int m_min_delay;
Uint64P start_time;
const char* fileNames[ NUM_SOUNDS ] =
{
".."
};
// Initialize FMOD
fModErrorCheck( FMOD::System_Create( &system ) );
fModErrorCheck( system->getVersion( &version ) );
fModVersionCheck( version );
fModErrorCheck( system->init( 32, FMOD_INIT_NORMAL, 0 ) );
// Get DSP clock
fModErrorCheck(system->getDSPBufferSize(&m_min_delay, 0));
m_min_delay *= 2;
fModErrorCheck(system->getDSPClock(&start_time.mHi, &start_time.mLo));
start_time += m_min_delay;
// Create sounds
for ( count = 0; count < NUM_SOUNDS; count++ )
{
fModErrorCheck( system->createSound( fileNames[ count ], FMOD_LOOP_NORMAL, 0, &sounds[ count ] ) );
fModErrorCheck( system->playSound( FMOD_CHANNEL_FREE, sounds[ count ], false, &channels[ count ] ) );
fModErrorCheck( channels[count]->setDelay(FMOD_DELAYTYPE_DSPCLOCK_START, start_time.mHi, start_time.mLo) );
}
// Simple interface
printf("\n");
printf("Press Esc to exit.\n");
printf("\n");
// Main loop
int key;
do {
if ( kbhit() )
{
key = getch();
}
}
while ( key != 27 ); // 27 = Esc
// Shut down application
for ( int count = 0; count < NUM_SOUNDS; count++ )
{
fModErrorCheck( sounds[ count ]->release() );
}
fModErrorCheck( system->close() );
fModErrorCheck( system->release() );
}[/code:3vxdgf2c]
Never mind. Looks like the original wave file is at 22050 Hz and the default output sampling rate at 48000 Hz.
You should also be starting those sounds as paused (3rd parameter for playSound should be true). You should be calling system->update in your do loop.
I suspect that could be related to resampling. If you want to do looping you could just set the looping flag in FMOD. Sequencing via setDelay is done post resamper so it can be slightly less accurate.
- Guest answered 7 years ago
OK that would make sense. However, from looking at the code, it isn’t clear to me where resampling is taking place. I don’t see any mention of sampling rate, or any resampling function anywhere. Could there be a default output sampling rate that differs from the input wave file’s sampling rate? I’m not looking to loop, but I want to understand where artifacts come from. For example, if I were to slice a waveform in small chunks, and then concatenate those in their normal order, would there be any difference with the original waveform? If yes, then why?
[quote="peter":3aqurp2x]
There is an example on the wiki of how to do to real time sample accurate sequencing using setDelay: … h_setDelay
[/quote:3aqurp2x]
So I compiled and ran that code and compared its output (even as written to a file) to a realtime looping of the original wav file in a sound editor. There’s a noticeable sound difference! The stitching isn’t as accurate/clean in the FMOD code as it is in the sound editor. Can someone explain what could be going slightly wrong? Bug? Resampling and distortions in other parts of the FMOD chain? I’m confused. Thanks.
So I won’t have the exact answers you’re looking for but here’s something to chew on in the mean time for your issue #1 — ChannelGroup. Create all of your sound objects and load them into a ChannelGroup. From there you can use the ChannelGroup::setPaused function to start/stop the sounds all together and let FMOD handle starting the sounds at the exact same moment in time. Check out the ChannelGroups example to help you along. This will offload your work out of your For…Loop block of code and put the work on FMOD’s side which is always a better option it seems. Best of luck!
You may also continue to check back to see if one of the FMOD gurus has a better suggestion as mine is just right off the top of my head from your question.
- cxvjeff answered 8 years ago
Hi abeldebeer,
[quote:2vh47xdu]Syncing. This might sound a bit ridiculous, but when I run the playSound function 6 times by using a for-loop, the sounds wouldn’t start playing at exactly the same moment, right?[/quote:2vh47xdu]
Yeah you’re on the right track. When you call playsound or setpaused it is from the main thread. The sounds are getting processed in the mixer thread. They will get out of sync if the mixer thread starts a mix half way through your for loop. A better option would be to schedule the sounds to start at the same time. There is an example on the wiki of how to do to real time sample accurate sequencing using setDelay: … h_setDelay
-Pete | https://www.fmod.org/questions/question/forum-32167/ | CC-MAIN-2017-51 | refinedweb | 1,563 | 64.2 |
The QTextBrowser class provides a rich text browser with hypertext navigation. More...
#include <QTextBrowser>
The QTextBrowser class provides a rich text browser with hypertext navigation.
This class extends QTextEdit (in read-only mode), adding some navigation functionality so that users can follow links in hypertext documents.
If you want to provide your users with an editable rich text editor, use QTextEdit. If you want a text browser without hypertext navigation use QTextEdit, and use QTextEdit::setReadOnly() to disable editing. If you just need to display a small piece of rich text use QLabel. factory.
If a document name ends with an anchor (for example, "#anchor"), the text browser automatically scrolls to that position (using scrollToAnchor()). When the user clicks on a hyperlink, the browser will call setSource() itself with the link's href value as argument. You can track the current source by connecting to the sourceChanged() signal.().
See also QTextEdit and QTextDocument.
This property holds whether the contents of the text browser have been modified. path. It also checks for optional anchors and scrolls the document accordingly
If the first tag in the document is <qt type=detail>, the document is displayed as a popup rather than as new document in the browser window itself. Otherwise, the document is displayed normally in the text browser with the text set to the contents of the named document with setHtml().
By default, this property contains an empty URL.
Access functions:
This property holds whether the text browser supports undo/redo operations.
By default, this property is false.
Constructs an empty QTextBrowser with parent().
Reimplemented from QObject::event().
Reimplemented from QWidget::focusNextPrevChild().
Reimplemented from QWidget::focusOutEvent().().
Reimplemented from QWidget::keyPressEvent().
The event ev is used to provide the following keyboard shortcuts::
Reimplemented from QWidget::mouseMoveEvent().
Reimplemented from QWidget::mousePressEvent().
Reimplemented from QWidget::mouseReleaseEvent().
Reimplemented from QWidget::paintEvent().
Reloads the current set source.
This signal is emitted when the source has changed, src being the new source.
Source changes happen both programmatically when calling setSource(), forward(), backword() or home() or when the user clicks on links or presses the equivalent key sequences. | http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qtextbrowser.html | CC-MAIN-2014-15 | refinedweb | 349 | 50.84 |
Command change recognition
A Command task is similar to a file task: It is intended to create on file which may depend on other files. But instead of creating the file by executing a block of Ruby code, the Command task creates the target file by executing a shell command.
A file task rebuilds the target file if at least one of the following two conditions is met:
- The target file doesn’t exist.
- One of the prerequisites changed since the last build.
A Command task rebuilds the target file if at least one of the following three conditions is met:
- The target file doesn’t exist.
- One of the prerequisites changed since the last build.
- The command to create the target file changed since the last build.
General usage
Consider the following Rantfile for rant 0.4.6:
var :CFLAGS => "-g -O2" # can be overriden from commandline file "foo" => ["foo.o", "util.o"] do |t| sys "cc -o #{t.name} #{var :CFLAGS} #{t.prerequisites.join(' ')}" end gen Rule, ".o" => ".c" do |t| sys "cc -c -o #{t.name} #{var :CFLAGS} #{t.source}" end
The problem with this buildfile is, that it won’t recognize a change of CFLAGS, i.e. foo.o, util.o and foo should get rebuilt whenever CFLAGS changes.
Since Rant 0.4.8, it is possible to do the following:
import "command" var :CFLAGS => "-g -O2" # can be overriden from commandline gen Command, "foo" => ["foo.o", "util.o"] do |t| # notice: we're not calling the sys method, we # are returning a string from the block "cc -o #{t.name} #{var :CFLAGS} #{t.prerequisites.join(' ')}" end # if the block to Rule takes two arguments, # it is expected to return a task gen Rule, ".o" => ".c" do |target, sources| gen Command, target => sources do |t| "cc -c -o #{t.name} #{var :CFLAGS} #{t.source}" end end
Now, whenever the command to build foo or a *.o file changes, it will be rebuilt.
There is also a more concise syntax:
import "command" var :CFLAGS => "-g -O2" # first argument (string) is the task/file name, second # argument (string, array or filelist) is a list of # prerequisites (notice: no `target => prereqs' syntax!) # third argument is a command string, which will be # executed by a subshell. gen Command, "foo", ["foo.o", "util.o"], 'cc -o $(>) $[CFLAGS] $(<)' gen Rule, ".o" => ".c" do |target, sources| gen Command, target, sources, 'cc -c -o $(>) $[CFLAGS] $(-)' end
For the last syntax:
Interpolation of variables into command strings:
Which variables are interpolated?
- Instance variables (mostly for internal usage), e.g.:
@cc = "cc"
- "var" variables (can be set from commandline, easy synchronization with environment variables), e.g.:
var :cc => "cc"
- Special, task specific variables
more task specific variables might get added later
Syntax of variable interpolation
Variable names must consist only of "word characters", i.e. matching \w in Ruby regexes.
- Plain interpolation. Example command:
"echo $[ARGS]"
The contents of variable ARGS (either @ARGS or var[:ARGS]) are converted to a string and interpolated. If the variable contains an array, ARGS.join(’ ’) is interpolated.
- Escaped interpolation. Example command:
"echo ${ARGS}"
Like plain interpolation, but spaces will be escaped (system dependent). Consider this Rantfile:
import "command" @args = ["a b", "c d", "ef"] @sh_puts = "ruby -e \"puts ARGV\"" gen Command, "foo", '$[sh_puts] ${args} > $(>)'
Running rant will give on Windows:
ruby -e "puts ARGV" "a b" "c d" ef > foo
and on other systems:
ruby -e "puts ARGV" a\ b c\ d ef > foo
- Path interpolation. Example command:
"echo $(ARGS)"
Like escaped interpolation, but additionally, forward slashes (as used for filenames in Rantfiles) will be replaced with backslashes on Windows.
More on semantics of variable interpolation
Interpolation is recursive, except for special target variables.
There is a small semantic difference between the verbose special target variables (name prerequisites source) and the symbolic ones (< > -): The symbolic ones are interpolated after checking if the command has changed since the last build, the verbose forms are interpolated before.
Consider this (artifical, using the Unix tool "cat") example: Rantfile (symbolic special target vars):
import "command" @src = ["src1", "src2"] @src = var[:SRC].split if var[:SRC] gen Command, "foo", @src, 'cat $(<) > $(>)' % echo a > src1 % echo b > src2 % echo b > src3 % rant cat src1 src2 > foo
"foo" didn’t exist, so it was built anyway. Now let us change the prerequisite list:
% rant "SRC=src1 src3"
won’t cause a rebuild of foo. Dependencies of foo changed from ["src1", "src2"] to ["src1", "src3"] but since src2 and src3 have the same content and $(<) isn’t expanded for command change recognition, rant considers foo up to date.
Now change Rantfile to (verbose special target vars):
import "command" @src = ["src1", "src2"] @src = var[:SRC].split if var[:SRC] gen Command, "foo", @src, 'cat $(prerequisites) > $(name)'
Starting from scratch:
% echo a > src1 % echo b > src2 % echo b > src3 % rant cat src1 src2 > foo % rant "SRC=src1 src3" cat src1 src3 > foo
This time, Rant expanded $(prerequisites) for command change recognition, and since the prerequsite list changed, it caused a rebuild.
See also
If you want more details, look in the test/import/command directory of the Rant distribution. | http://make.rubyforge.org/files/doc/command_rdoc.html | CC-MAIN-2016-26 | refinedweb | 849 | 63.29 |
.NET Brain DroppingsI'm a Microsoft Certified Architect (MCA)... Feel free to ask me about the program... Server2005-03-10T13:44:00ZINETA Chalk Talk on Thursday at 9:00 am<font size="2"><span style="font-family: Tahoma;">If you work in the Atlanta area, feel free to stop by the INETA Chalk Talks going on the morning of the MSDN Event. From 9am in the morning until ??? Myself, </span><a style="font-family: Tahoma;" href="">Mark Dunn</a><span style="font-family: Tahoma;">, and </span><a style="font-family: Tahoma;" href="">Shawn Wildermuth</a><span style="font-family: Tahoma;"> will be answering questions on topics that you pick.</span><br style="font-family: Tahoma;" /><br style="font-family: Tahoma;" /><span style="font-family: Tahoma;">Feel free to stop by. I'm really looking forward to getting out and meeting everyone in the community.</span><br style="font-family: Tahoma;" /><br style="font-family: Tahoma;" /><span style="font-family: Tahoma;">The talks are this Thursday (Aug 25) at the </span><a style="font-family: Tahoma;" href="">Regal Cinemas Chamblee theater</a><span style="font-family: Tahoma;">.</span><br style="font-family: Tahoma;" /><br style="font-family: Tahoma;" /><span style="font-family: Tahoma;">Hope to see you there!</span></font><br /><br /><img src="" width="1" height="1">dbrowning, This is Don's blog speaking...<p><span style="FONT-SIZE: 12pt; FONT-FAMILY: 'Times New Roman'; mso-fareast-font-family: 'Times New Roman'; mso-ansi-language: EN-US; mso-fareast-language: EN-US; mso-bidi-language: AR-SA"><font face="Tahoma" size="2">If you see him, will you let him know that I'm staler than that bag of pretzels sitting on his desk and he needs to get back to blogging. Especially now that reinforcements have arrived...</font></span></p> <p> </p><img src="" width="1" height="1">dbrowning Waiting is Over<p>I got "Connected" today. Looking foward to testing not only Vista, but also Longhorn Server (who knows, I may even put FireFox down for a few minutes to give IE7 a run)</p> <p> </p><img src="" width="1" height="1">dbrowning... Waiting... Waiting... of you who know what Microsoft Connect is will understand my plight. I keep checking that page every hour to see if:<br /><ol><li>My status goes from Pending to Active</li><li>If there are any bits up there for download (though I hear that will be 8/3)</li></ol>I'm dying here... I can't wait for this drop. If it's even halfway solid it'll be installed on every machine I have...<br /><br /><br /><img src="" width="1" height="1">dbrowning Smith shows how everyone should be using VPC Differencing Disks<a href="">Don's</a> post on <a href="">how to leverage differencing disks in Virtual PC</a> is truly enlightening. I am the worst about creating ad hoc VPC images. I have one base image with WinXP SP2 and Office on it; anytime I need a new "machine" I copy that base image, run <a href="">NewSid</a>, and start installing software. I know this is a waste of drive space (and effort), but I never really thought to much about it.<br /> <br /> Well, I have now. Don does a great job of laying out how he uses his base images combined with differencing disks to support multiple envionments. This is kick ass stuff. I'm linking his VPC Model here, but <a href="">go read the entire thing to get the details</a>.<br /> <br /><img src="" /><p></p><img src="" width="1" height="1">dbrowning whitepaper on Table Partitioning in SQL 005<font face="Tahoma" size="2">I understand this whitepaper has been out for about a year now, but I'm just now catching up when it comes to new Yukon features. Kim Tripp has <a href="">published an excellent paper</a> that describes when/how/why to use <a href="">Partitioned Tables</a> in Yukon. Excellent article! I've been wondering how the mechanics of this worked since hearing about Kim's rocking demo of Partitioned Tables using a series of USB keys. Great stuff, you can check out that .NET rocks episode <a href="">here</a>.<br /> <br /> <font color="#000080">[Watching: John Stewart Show]</font></font><br /> <br /> <img src="" width="1" height="1">dbrowning Best Session at TechEd<p><font face="Tahoma" size="2">Aside from finally getting to meet a few of my fellow </font><a href=""><font face="Tahoma" size="2">MCA </font></a><font face="Tahoma" size="2">friends, the best part about </font><a href=""><font face="Tahoma" size="2">TechEd</font></a><font face="Tahoma" size="2"> </font><a href=""><font face="Tahoma" size="2">Kirk </font></a><font face="Tahoma" size="2">actually caught it on film. That morning </font><a href=""><font face="Tahoma" size="2">Julie</font></a><font face="Tahoma" size="2">, </font><a href=""><font face="Tahoma" size="2">Rich</font></a><font face="Tahoma" size="2">, </font><a href=""><font face="Tahoma" size="2">Don</font></a><font face="Tahoma" size="2">, </font><a href=""><font face="Tahoma" size="2">Clemens</font></a><font face="Tahoma" size="2">, </font><A href=""><font face="Tahoma" size="2">Christian</font></a><font face="Tahoma" size="2">, and I kicked around topics ranging from </font><a href=""><font face="Tahoma" size="2">System.Transactions </font></a><font face="Tahoma" size="2">to </font><a href=""><font face="Tahoma" size="2">Borland development tools </font></a><font face="Tahoma" size="2">of yesteryear (we even talked about Visual Basic right Rich <font face="Courier New">:P</font> ).</font></p> <p><font face="Tahoma" size="2">This years TechEd kicked ass. I have three pages of notes, and about 20 follow up items that I need get started on before I forget. To everyone I talked to, thanks for giving me the time to knock ideas off you (or for the good ones you gave me).</font></p><img src="" /><img src="" width="1" height="1">dbrowning to TechEd? Looking for a Job? Email me...<p><font face="Tahoma">I work for </font><a href=""><font face="Tahoma">Turner Broadcasting </font></a><font face="Tahoma">(you know </font><a href=""><font face="Tahoma">TNT</font></a><font face="Tahoma">, </font><a href=""><font face="Tahoma">TBS</font></a><font face="Tahoma">, </font><a href=""><font face="Tahoma">TCM</font></a><font face="Tahoma">, </font><a href=""><font face="Tahoma">CTN</font></a><font face="Tahoma">, </font><a href=""><font face="Tahoma">TSO </font></a><font face="Tahoma" </font><a href=""><font face="Tahoma">VSTS</font></a><font face="Tahoma">, </font><a href=""><font face="Tahoma">Yukon </font></a><font face="Tahoma">and </font><a href=""><font face="Tahoma">Indigo</font></a><font face="Tahoma">) email me. I have my </font><a href=""><font face="Tahoma">BlackBerry </font></a><font face="Tahoma">so I will get the email immediately. We can set up a time to get together...</font></p> <p><font face="Tahoma">You can e-mail me though this blog, or directly at <font size="3"><a href="mailto:Mojavecoders@turner.com">Mojavecoders@turner.com</a></font></font></p> <p><font face="Tahoma"><font size="3"><font color="#ff0000"><strong>[Update: So it turns out my spam filter is being a bit overactive with people's email, so if I don't respond in a timely manner, try e-mailing me again.]</strong></font></p></font></font> <p><font color="#000080" size="2"></font> </p> <p> </p><img src="" width="1" height="1">dbrowning Service Contracts with Angle Brackets... Yes/No?<p><font face="Tahoma" size="2").</font></p> <p><font face="Tahoma" size="2">Anyone out there have any opinions? I'll post his response later because I want some honest feedback on this one. I'm about to define the architecture for a pretty large app, and the decision I make now I will have to live with for a <em><strong>l o n g t i m e</strong></em>...</font></p> <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> <p><font face="Tahoma" size="2"><emailSnip></font></p> <p><font face="Tahoma" size="2">So let me run this by you. I'm always open to opinions/critique... </font></p> <p><font face="Tahoma" size="2">-- If there's one thing I learned from doing VB.COM back in the day it's that using a programming language to define a interoperable, *language agnostic* interface can get you in trouble. The tendency to define concepts that are not understood by all consumers of the interface is to great. I understand that I am (very) loosely comparing Indigo to VB.COM, but theoretically the same issue exists. I understand there is a perfectly valid counter argument saying that you can define concepts in XSD that are not supported by all languages, but these are edge cases that are rarely an issue (substitution groups come to mind).</font></p> <p><font face="Tahoma" size="2">-- I don't have to see pointy brackets anymore. Tools like XMLSpy are getting better and better at abstracting away the pointy brackets. In fact, XMLSpy is getting close to being a DSL for defining the aforementioned interoperable interface. :)</font></p> <p><font face="Tahoma" size="2">-- The idea behind this whole contract-first hoo ha is that we define contracts, that represent documents that are passed from consumer --> service --> and (maybe) back. IMHO, there is no better (or more natural) method for creating a document definition than XSD. </font></p> <p><font face="Tahoma" size="2">-- I also love the fact that I can expose a XSD on a server-side endpoint and a consumer is free to use it in order to gain access to cursory business logic that would otherwise cost them a round-trip to the server.</font></p> <p><font face="Tahoma" size="2">Example: <br /> int withdrawalAmount; // This could be 450,000 <br />Vs. <br /> <element name="withdrawalAmount"> <br /> <simpleType> <br /> <restriction base="positiveInteger"> <br /> <maxExclusive value="400"/> <br /> </restriction> <br /> </simpleType> <br /> </element> <br />My ATM limit per transaction is 400 bucks. In the XSD world, I can push that constraint down to the client for validation prior to sending me the entire document. Additionally, to validate it on the server, I don't have to write trivial code such as:</font></p> <p><font face="Tahoma" size="2">If(withdrawalAmount > 400) return false; <br />That stuff is handled for me by the schema validator. </font></p> <p><font face="Tahoma" size="2"></emailSnip></font></p></blockquote> <p><font face="Tahoma" size="2">Thoughts?</font></p> <p><font face="Tahoma" color="#000080" size="2">[Listening to: The Bravery - Unconditional]</font></p> <p><font face="Tahoma" size="2"></font> </p><img src="" width="1" height="1">dbrowning Mono on Fedora Core 3 is a great (and brief) summary of how to install Mono on Fedora Core 2 (but also works on 3) here <a href="">on Equivocal Rambling</a>. The only thing you need to do differently is set up the yum channel in yum.conf like this:<br /> [mono]<br /> name=Mono 1.0<br /> baseurl=<br /> gpgcheck=0<br /> <br /> <img src="" width="1" height="1">dbrowning the Visual Studio 2005 Slip (and the infoworld article)<p>Be on the lookout for some news today. It ain't going to be good my friend... <font face="Courier New">:S</font></p> <p><font color="#000080">[Listening to: Pixies - Trompe le Monde]</font></p> <p> </p><img src="" width="1" height="1">dbrowning Outlet Etiquette<p class="MsoNormal" style="MARGIN: 0in 0in 0pt">Ok, so here’s the deal.<span style="mso-spacerun: yes"> </span>When you are hanging out at an airport waiting for your plane to depart please don’t sit in a seat next to a power outlet if you’re not going to use it!<span style="mso-spacerun: yes"> </span>This drives me freaking nuts.<span style="mso-spacerun: yes"> </span>I got out to the airport pretty early today.<span style="mso-spacerun: yes"> </span>My flight didn’t leave until 4:30 (supposedly) and I was standing at the gate at 3:00.<span style="mso-spacerun: yes"> </span>Once I saw the flight was delayed about 45 minutes I decided to get a bit of work done.<span style="mso-spacerun: yes"> </span>One problem… There were no free seats next to the power outlets in <?xml:namespace prefix = st1<st1:place w:<st1:PlaceName w:Atlanta</st1:PlaceName> <st1:PlaceType w:Airport</st1:PlaceType></st1:place>.<span style="mso-spacerun: yes"> </span>This drives me nuts.<span style="mso-spacerun: yes"> </span>There should be an abundance of power outlets in the airport.<span style="mso-spacerun: yes"> </span>Everything is rechargeable now, so come on guys.<span style="mso-spacerun: yes"> </span>Anyway, I had to search around for an available outlet.<span style="mso-spacerun: yes"> </span>I finally found one outside the women’s restroom.<span style="mso-spacerun: yes"> </span>Not a bad place to sit b/c there was a lot of traffic <font face="Courier New">;)</font>, but kind of uncomfortable after about 30 minutes…</p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><?xml:namespace prefix = o<o:p> </o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt">Anyway, the moral of this story is…<span style="mso-spacerun: yes"> </span>If you don’t need a power outlet, don’t sit in a seat that is within 20 feet of one.<span style="mso-spacerun: yes"> </span>I’ll guarantee you there is a person with a laptop sitting next to the bathroom somewhere who would like to have your seat.</p><img src="" width="1" height="1">dbrowning Looks Like the InfoWorld Article is Wrong've got word (from a <b>very</b> reliable source) that the <a href="">InfoWorld</a> article (see my post <A href="">here</a>) is incorrect and Beta 2 is still on target for the first quarter. As for the long-term date, that one looks solid as well... <b>Shew...</b> <p><font color="#000080" size="2">[Listening To: Ministry - Impossible]</font> </p> <p></p><img src="" width="1" height="1">dbrowning 2005 Beta 2 Slips into April can read all about it at <a href="">InfoWorld</a>. This is a serious bummer. We are currently in planning on projects that are using VS 2005 and VSTS, and this delay didn't fit into our schedules. <font face="'Courier New',Courier,monospace">:S</font> <p>I can only hope these slips will stop the closer we get to September... (MS still swears they will ship in September)</p> <p><font color="#000080"><font size="2">[Listening to: Nitzer Ebb - Without Belief]</font> </font> <p></p><img src="" width="1" height="1">dbrowning on the VB6 insanity...<a href="">Geoff Appleby</a> does an <a href="">excellent job summarizing my thoughts</a> on this whole VB6 debacle. I was <a href="">to busy being a smart ass</a> about it to write a well thought out post... (mostly because I really can't believe these guys are serious) <p> <font color="#000080" size="2">[Listening to: The Bravery - Out of Line]</font> <img src="" width="1" height="1">dbrowning | http://weblogs.asp.net/dbrowning/atom.aspx | crawl-002 | refinedweb | 2,581 | 52.29 |
Jason Bock
strongJason Bock/strong is a senior consultant for Magenic Technologies (a). He has worked on a number of business applications using a diverse set of substrates and languages such as C#, .NET, and Java. He is the author of iCIL Programming: Under the Hood of .NET/i and i.NET Security/i, both published by Apress, as well as iVisual Basic 6 Win32 API Tutorial/i. He has also written numerous articles on technical development issues associated with both Visual Basic and Java. Jason holds both a bachelor's and a master's degree in electrical engineering from Marquette University. You can find out more about him at a.
Jason Bock's Books
.NET Development Using the Compiler API
- Publication Date: July 2, 2016
- ISBN13: 978-1-484221-10-5
- Price: $17.99
This is the first book to describe the recent significant changes to the .NET compilation process and demonstrate how .NET developers can use the new Compiler API to create compelling applications. Learn More …
Applied .NET Attributes
- Publication Date: October 7, 2003
- ISBN13: 978-1-59059-136-9
- Price: $24.99
Attributes are used to modify the runtime behavior of code in the .NET Framework. This insightful guide explores the application of .NET attributes and how developers can write custom attributes that provide the maximum level of code reuse and flexibility. Learn More …
.NET Security
- Publication Date: July 8, 2002
- ISBN13: 978-1-59059-053-9
- Price: $31.99
.NET Security shows you what you need to know by covering the different aspects of the .NET security model through detailed discussions about the key namespaces. Learn More …
CIL Programming
- Publication Date: June 18, 2002
- ISBN13: 978-1-59059-041-6
- Price: $34.99
The Common Intermediate Language (CIL) is the core language of .NET. In this book, author Jason Bock offers an in-depth tutorial on programming in CIL. Learn More … | http://www.apress.com/author/author/view/id/2427 | CC-MAIN-2016-44 | refinedweb | 315 | 59.9 |
Configuring JSX in DenoConfiguring JSX in Deno
Deno has built-in support for JSX in both
.jsx files and
.tsx files. JSX in
Deno can be handy for server-side rendering or generating code for consumption
in a browser.
Default configurationDefault configuration
The Deno CLI has a default configuration for JSX that is different than the
defaults for
tsc. Effectively Deno uses the following
TypeScript compiler
options by default:
{ "compilerOptions": { "jsx": "react", "jsxFactory": "React.createElement", "jsxFragmentFactory": "React.Fragment" } }
JSX import sourceJSX import source
In React 17, the React team added what they called the new JSX transforms. This enhanced and modernized the API for JSX transforms as well as provided a mechanism to automatically import a JSX library into a module, instead of having to explicitly import it or make it part of the global scope. Generally this makes it easier to use JSX in your application.
As of Deno 1.16, initial support for these transforms was added. Deno supports both the JSX import source pragma as well as configuring a JSX import source in a configuration file.
JSX runtimeJSX runtime
When using the automatic transforms, Deno will try to import a JSX runtime
module that is expected to conform to the new JSX API and is located at either
jsx-runtime or
jsx-dev-runtime. For example if a JSX import source is
configured to
react, then the emitted code will add this to the emitted file:
import { jsx as jsx_ } from "react/jsx-runtime";
Deno generally works off explicit specifiers, which means it will not try any
other specifier at runtime than the one that is emitted. Which means to
successfully load the JSX runtime,
"react/jsx-runtime" would need to resolve
to a module. Saying that, Deno supports remote modules, and most CDNs resolve
the specifier easily.
For example, if you wanted to use Preact from the
esm.sh CDN, you would use
as the JSX
import source, and esm.sh will resolve
as a
module, including providing a header in the response that tells Deno where to
find the type definitions for Preact.
Using the JSX import source pragmaUsing the JSX import source pragma
Whether you have a JSX import source configured for your project, or if you are
using the default "legacy" configuration, you can add the JSX import source
pragma to a
.jsx or
.tsx module, and Deno will respect it.
The
@jsxImportSource pragma needs to be in the leading comments of the module.
For example to use Preact from esm.sh, you would do something like this:
/** @jsxImportSource */ export function App() { return ( <div> <h1>Hello, world!</h1> </div> ); }
Using JSX import source in a configuration fileUsing JSX import source in a configuration file
If you want to configure a JSX import source for a whole project, so you don't
need to insert the pragma on each module, you can use the
"compilerOptions" in
a configuration file to specify
this. For example if you were using Preact as your JSX library from esm.sh, you
would configure the following, in the configuration file:
{ "compilerOptions": { "jsx": "react-jsx", "jsxImportSource": " } }
Using an import mapUsing an import map
In situations where the import source plus
/jsx-runtime or
/jsx-dev-runtime
is not resolvable to the correct module, an import map can be used to instruct
Deno where to find the module. An import map can also be used to make the import
source "cleaner". For example, if you wanted to use Preact from skypack.dev and
have skypack.dev include all the type information, you could setup an import map
like this:
{ "imports": { "preact/jsx-runtime": " "preact/jsx-dev-runtime": " } }
And then you could use the following pragma:
/** @jsxImportSource preact */
Or you could configure it in the compiler options:
{ "compilerOptions": { "jsx": "react-jsx", "jsxImportSource": "preact" } }
You would then need to pass the
--import-map option on the command line (along
with the
--config option is using a config file) or set the
deno.importMap
option (and
deno.config option) in your IDE.
Current limitationsCurrent limitations
There are two current limitations of the support of the JSX import source:
- A JSX module that does not have any imports or exports is not transpiled properly when type checking (see: microsoft/TypeScript#46723). Errors will be seen at runtime about
_jsxnot being defined. To work around the issue, add
export {}to the file or use the
--no-checkflag which will cause the module to be emitted properly.
- Using
"jsx-reactdev"compiler option is not supported with
--no-emit/bundling/compiling (see: swc-project/swc#2656). Various runtime errors will occur about not being able to load
jsx-runtimemodules. To work around the issue, use the
"jsx-react"compiler option instead, or don't use
--no-emit, bundling or compiling. | https://deno.land/manual@v1.16.1/jsx_dom/jsx | CC-MAIN-2022-21 | refinedweb | 792 | 52.19 |
- Type:
Bug
- Status: Patch Available
- Priority:
Critical
- Resolution: Unresolved
- Affects Version/s: 1.2.0, 1.3.0, 1.4.0, 1.5.0
- Fix Version/s: None
- Component/s: proc-v2, Region Assignment
- Labels:None
Problem:
During Master initialization we
- restore existing procedures that still need to run from prior active Master instances
- look for signs that Region Servers have died and need to be recovered while we were out and schedule a ServerCrashProcedure (SCP) for each them
- turn on the assignment manager
The normal turn of events for a ServerCrashProcedure will attempt to use a bulk assignment to maintain the set of regions on a RS if possible. However, we wait around and retry a bit later if the assignment manager isn’t ready yet.
Note that currently #2 has no notion of wether or not a previous active Master instances has already done a check. This means we might schedule an SCP for a ServerName (host, port, start code) that already has an SCP scheduled. Ideally, such a duplicate should be a no-op.
However, before step #2 schedules the SCP it first marks the region server as dead and not yet processed, with the expectation that the SCP it just created will look if there is log splitting work and then mark the server as easy for region assignment. At the same time, any restored SCPs that are past the step of log splitting will be waiting for the AssignmentManager still. As a part of restoring themselves, they do not update with the current master instance to show that they are past the point of WAL processing.
Once the AssignmentManager starts in #3 the restored SCP continues; it will eventually get to the assignment phase and find that its server is marked as dead and in need of wal processing. Such assignments are skipped with a log message. Thus as we iterate over the regions to assign we’ll skip all of them. This non-intuitively shifts the “no-op” status from the newer SCP we scheduled at #2 to the older SCP that was restored in #1.
Bulk assignment works by sending the assign calls via a pool to allow more parallelism. Once we’ve set up the pool we just wait to see if the region state updates to online. Unfortunately, since all of the assigns got skipped, we’ll never change the state for any of these regions. That means the bulk assign, and the older SCP that started it, will wait until it hits a timeout.
By default the timeout for a bulk assignment is the smaller of (# Regions in the plan * 10s) or (# Regions in the most loaded RS in the plan * 1s + 60s + # of RegionServers in the cluster * 30s). For even modest clusters with several hundreds of regions per region server, this means the “no-op” SCP will end up waiting ~tens-of-minutes (e.g. ~50 minutes for an average region density of 300 regions per region server on a 100 node cluster. ~11 minutes for 300 regions per region server on a 10 node cluster). During this time, the SCP will hold one of the available procedure execution slots for both the overall pool and for the specific server queue.
As previously mentioned, restored SCPs will retry their submission if the assignment manager has not yet been activated (done in #3), this can cause them to be scheduled after the newer SCPs (created in #2). Thus the order of execution of no-op and usable SCPs can vary from run-to-run of master initialization.
This means that unless you get lucky with SCP ordering, impacted regions will remain as RIT for an extended period of time. If you get particularly unlucky and a critical system table is included in the regions that are being recovered, then master initialization itself will end up blocked on this sequence of SCP timeouts. If there are enough of them to exceed the master initialization timeouts, then the situation can be self-sustaining as additional master fails over cause even more duplicative SCPs to be scheduled.
Indicators:
- Master appears to hang; failing to assign regions to available region servers.
- Master appears to hang during initialization; shows waiting for the meta or namespace regions.
- Repeated master restarts allow some progress to be made on assignments for a limited period of time.
- Master UI shows a large number of Server Crash Procedures in RUNNABLE state and the number increases by roughly the number of Region Servers on master restart.
- Master log shows a large number of messages that assignment of a region has failed because it was last seen on a region server that has not yet been processed. These messages come from the AssignmentManager logger. This message should normally only occur when a Region Server dies just before some assignment is about to happen. When this combination of issues happens the message will happen repeatedly; every time a new defunct SCP is processed it’ll happen for each region.
Example of aforementioned message:
2019-04-15 11:19:04,610 INFO org.apache.hadoop.hbase.master.AssignmentManager: Skip assigning test6,5f89022e,1555251200249.946ff80e7602e66853d33899819983a1., it is on a dead but not processed yet server: regionserver-8.hbase.example.com,22101,1555349031179
Reproduction:
The procedure we currently have to reproduce this issue requires specific timings that can be hard to get correctly, so this might require multiple tries.
Before starting, the test cluster should have the following properties:
- Only 1 master. Not a strict requirement but it helps in the following steps.
- Hundreds of regions per region server. If you need more, fire up your preferred data generation tool and tell it to create a large enough table. Those regions can be empty, no need to fill them with actual data.
- At least ten times more region servers than available CPUs on the master. If the number of CPUs is too high, set hbase.master.procedure.threads in the safety valves to number-of-region-servers divided by 10. For example, if you have 24 cores and a 10 nodes cluster, set the configuration to 2 or 3.
Set your environment the following way:
- Access to cluster-wide shutdown/startup via whatever control plane you use
- Access to master-role specific restart via whatever control plane you use
- A shell on the master node that tails the master log (use tail -F with a capital F to ride over log rolls).
- A shell on the master node that’s ready to grab the HMaster’s pid and kill -9 it.
The procedure is:
- Start with a cluster as described above.
- Restart the entire HBase cluster.
- When the master log shows “Clean cluster startup. Assigning user regions” and it starts assigning regions, kill -9 the master.
- Stop the entire HBase cluster
- Restart the entire HBase cluster
- Once the master shows “Found regions out on cluster or in RIT”, kill -9 the master again.
- Restart only the Master.
- (Optionally repeat the above two steps)
You know that you hit the bug when the given indicators show up.
If it seems like it’s able to assign a lot of regions again, try kill -9 again and restart the Master role.
Workaround:
Identified and tested workaround to mitigate this problem involves configuration tunings to speed up regions assignment on Region Servers and also reduce the time spent by BulkAssignment threads on the Master side. We do not recommend setting this configurations for normal operations.
- hbase.master.procedure.threads - This property defines the general procedure pool size, and in the context of this issue, is the pool for executing SCPs. Increasing this pool would allow more SCPs for different Region Servers to run in parallel, allowing for more regions assignments to be processed. However, on the core of this problem is the fact that Master may have multiple SCPs for same Region Server. These are not run in parallel, therefore tuning this parameter will not be sufficient. We recommend setting this parameter to the number of Region Servers in the cluster, so that under normal scenarios where there is one SCP for each Region Server, all those can run in parallel.
- hbase.bulk.assignment.perregion.open.time - This property determines how long a bulk assignment thread on Master's BulkAssigner should wait for all its regions to get assigned. Setting to a value as low as 100 (milliseconds) will allow the no-op SCPs to complete faster. Which will open up execution spots for SCPs that can do actual assignment work.
- hbase.master.namespace.init.timeout - The master has a time limit for how long it takes to assign the namespace table. Given that we want to limit master restarts, this is better adjusted upwards.
Since this issue is especially pronounced on clusters with a large number of regions-per-region-server the following additional config can also help:
- hbase.regionserver.executor.openregion.threads - This relates to the number of threads on each Region Server responsible for handling the region assignment requests. Provided individual Region Servers are not overloaded already tuning this value higher than the default (3) should help expedite region assignment.
Acknowledgements:
Thanks to the ton of folks who helped diagnose, chase down, and document this issue, its reproduction, and the workaround. Especially Jean-Daniel Cryans, Wellington Chevreuil, Ankit Singhal, AMIT VIRMANI, Shamik Dave, Esteban Gutierrez, and Josh Elser.
Attachments
- Blocked
HBASE-22626 Master assigns the region successfully, but updates the state of region failed, and then keeping the state of the region is OPENNING in zookeeper, If master restarted, those OPENNING regions will not be assign forever.
- Open
- relates to
HBASE-16488 Starting namespace and quota services in master startup asynchronously
- Patch Available
- links to
- | https://issues.apache.org/jira/browse/HBASE-22263?attachmentOrder=desc | CC-MAIN-2020-34 | refinedweb | 1,619 | 59.74 |
I'm wondering if a kind soul, has the source-code to this tutorial.?
I'm getting a few errors, (something about C99 standards), when compiling in CodeBlocks, when i copied the text from the page. (In the ForLoop mainly)
I'm fairly much a newbie and have limited C and OGL skills. I do however understand most of what is happening, but i was hoping to compile this example and simplify.... I'm looking for C Source and not C++ which is all i seem to find when looking for similar examples.
Technically unlike that Tutorial, i just want to render a 32x32 map of noise. No smoothing or octaves or mipmapping is needed. I just need to see something, before i go back and read up on every line of the code.
The example tutorial is actually a little more advanced than what I'm trying to do. Eventually i would like to do my own terrain displacement, but for now the following would suffice.
I'm trying to simply:
1)Make a texture array,
2)Generate procedural noise
3)Fill Map array with a random procedural noise,
4)Display the array via OGL.
Any help on getting this a step further would be appreciated:
Basically below i believe i have done steps 1 & 2.
by declaring a MAP32, and generating a Noise function.
I would like to now i guess fill the array with the data from noise function. But the pseudo-code gave me compiler errors, so anyone help me add this function....
PS No I'm not a student! (They generally have a much better grasp of what they are doing!)PS No I'm not a student! (They generally have a much better grasp of what they are doing!)Code:#include <GL/gl.h> //include the gl header file #include <GL/glut.h> //include the glut header file float map32[32 * 32]; //Texture Array float Noise(int x, int y, int random) // Generate Noise { int n = x + y * 57 + random * 131; n = (n<<13) ^ n; return (1.0f - ( (n * (n * n * 15731 + 789221) + 1376312589)&0x7fffffff)* 0.000000000931322574615478515625f); } void int main (int argc, char **argv) { glutInit (&argc, argv); //initialize the program. glutInitDisplayMode (GLUT_SINGLE); //set up a basic display buffer (only singular for now) glutInitWindowSize (512, 512); //set whe width and height of the window glutInitWindowPosition (100, 100); //set the position of the window glutCreateWindow ("A basic OpenGL Window"); //set the caption for the window glutDisplayFunc (Noise); //call the display function to draw our world glutMainLoop (); //initialize the OpenGL loop cycle return 0; } | http://cboard.cprogramming.com/c-programming/101901-procedural-noise-opengl-help.html | CC-MAIN-2014-52 | refinedweb | 427 | 62.98 |
Like. In general, the result of an expression in an XML query may consist of a heterogeneous sequence of elements, attributes, and primitive values, all of mixed type. This set of objects might then serve as an intermediate result used in the processing of a higher-level expression. The heterogeneous nature of XML data conflicts with the SQL assumption that every expression inside a query returns an array of rows and columns. It also requires a query language to provide constructors that are capable of creating complex nested structures on the fly -- a facility that is not needed in a relational language.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com-.
--Sean McGrath
Read the rest in ITworld.com - XML IN PRACTICE - APIs Considered Harmful
the benefits of binary XML over text XML are only significant in edge cases, and in most of those a custom, non-Infoset-based serialization would almost certainly be better. Having a completely different XML serialization adopted widely could significantly screw up the benefits a common format offers.
--Danny Ayers on the atom-syntax mailing list, Monday, 29 Dec 2003
The idea of escaping markup goes against the fundamental grain of XML. If this hack spreads to other vocabularies, we'll very quickly find ourselves mired in the same bugward-compatible tag soup from which we have struggled so hard to escape.
--Norm Walsh
Read the rest in XML.com: Escaped Markup Considered Harmful [Aug. 20, 2003].
--Don Box
Read the rest in Microsoft's Box Riffs on Life Inside The Empire
Frankly, all these hundreds of patents whose only *new* feature is that they use XML or HTML or XHTML or whatever to do things, should be considered junk. Just as you can't patent the mere substitution of plastic for wood in some application, you shouldn't be able to patent the mere use of XML or HTML in some well-known application. The intent of those who defined XML was to support an entire class of applications. Only if one can show that the specific use of XML is not anticipated in that class of applications should one be able to get a claim on a specific use. The current practice is simply a land-grab. It has nothing to do with originality or furthering any of the goals specified for patents in the US constitution.
--Bob Wyman on the xml-dev mailing list, Thursday, 18 Dec 2003
Putting QNames into your XML content is like using TCP packets as delimiters in an application protocol.
--Mark Nottingham
Read the rest in mnot’s Web log: QNames are Evil
Last time I checked (maybe 6 months ago), there were over 300 patents or patent applications with the word XML in the title. Think about that.... I personally have seen very, very little, that I think is new since 1994 or so, so almost every one of those 300 will have significant prior art. Prior art alone is *not* sufficient for have a patent deemed invalid though, and that is at least part of the problem.
--Gavin Thomas Nicol on the xml-dev mailing list, Wed, 17 Dec 2003
xs:duration is broken and should never have made it into the W3C XML Schema REC in the first place. Simple question; Is an xs:duration representing 3 months equivalent to an xs:duration representing 90 days?
--Dare Obasanjo on the xml-dev mailing list, Saturday, 28 Sep 2002
There are no security loopholes in XML, only in the software that you use to process it.
--Michael Kay on the xml-dev mailing list, Tuesday, 18 Nov 2003.
--Anders Hejlsberg
Read the rest in Innapropriate Abstractions-hoc compression on parts of the file, because the tool can look across all the data and exploit all repetition in the information.
--Eric S. Raymond
Read the rest in Data File Metaformats
XML is more like contractors who take care to ensure that the foundation and building are up to code and will last a long time for their homeowners, while Macromedia is a company that builds feature-filled homes quick without much concern for the building code and rents them out to whoever needs a house in a hurry...
--Simon St. Laurent
Read the rest in oreilly.com - - From the Editors List
Indeed the XML brand grows worryingly, but it is terribly important for the integrity of that brand that when someone says "this data is available as XML", that they provide unicode characters with angle brackets on demand.
--Tim Bray on the xml-dev mailing list, Thursday, 05 Dec 2002
The whole point of XML based interop is that you can send the data on the wire independent from the API. By defining interop on the API level, you are missing the point of using data instead of code to achieve information exchange and interoperability. Using an XML API over relational databases or ASN.1 is fine, but it only gives you the wrapper part of a mediator architecture that allows you to integrate information from different sources (which is basically just another way at looking at the loosely-coupled aspect).
--Michael Rys on the xml-dev mailing list, Tuesday, 18 Nov 2003
XML is just text. It is not a level lower than text, consisting of the bytes, or in another context the glyphs, which are rendered from particular encoding as the characters of that text. Nor is XML an abstract precursor to text, nor any abstraction of the syntax in which text is manifested. XML is text; a body of XML syntax is fundamentally a text (whatever else it might be agreed, or even misunderstood, to be).
--W. E. Perry on the XML DEV mailing list, Tuesday, 25 Nov 2003
Now you are dealing with so-called XML _datatypes_, which only exist in terms of applications layered on XML such as XML Schema or RDF. What I am saying is that _for XML_ any so-called datatype is just another piece of XML.
--Jonathan Borden on the xml-dev mailing list, Sunday, 23 Nov 2003?
--James Turner
Read the rest in *LINUXWORLD SPECIAL* Is Linux Desktop-Ready Yet...or Not? (LinuxWorld)
one major value of XML to me (employee of XML DBMS vendor) is to avoid the necessity of separating the "text" from the "structured info." While XML DB's don't have the advanced fuzzy/baysean capabilities of high-end text DBs (yet!), they do have the ability to query for text matches IN THE CONTEXT OF the structure. Given a certain amount of predictability about the tagging of a resume, one could look for people with actual EXPERIENCE with some technology combination (Java on Linux, for example) rather than just "Java" and "Linux" mentioned somewhere near each other or whatever.
--Mike Champion on the xml-dev mailing list, Monday, 28 Oct 2002
the XML:DB API got a triple wammy of problems. It was intended to be database agnostic, language agnostic and also Java like. The DOM set the precedent, we definitely should have known better. :-)
Just removing the language agnostic bit would have helped a ton. At least it would have done away with the need to be expressible in IDL. I will definitely avoid ever trying to define another language agnostic API. It's just a bad idea that leads to a crappy result for all languages.
--Kimbro Staken
Read the rest in Inspirational Technology: The pain of over abstraction
Relational data is "flat" -- that is, organized in the form of a two-dimensional array of rows and columns. In contrast, XML data is "nested", and its depth of nesting can be irregular and unpredictable. Relational databases can represent nested data structures by using structured types or tables with foreign keys but it is difficult to search these structures for objects at an unknown depth of nesting. In XML, on the other hand, it is very natural to search for objects whose position in a document hierarchy is unknown. An example of such a query might be "Find all the red things", represented in the XPath language by the expression
//*[@color = "Red"]. This query would be much more difficult to represent in a relational query language.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com-
I have no time for Windows, no patience for rebooting and I lack that ability to forget the last 10 times it crashed on me. Most Windows people forget. Marc Fleury is an example of this, regaling me with "buy a real computer" (how Windows/Intel qualifies for any bravado as a real computer is beyond me). Marc says his Windows laptop never crashes; however, every time I've been at a JBoss related conference, his Windows laptop undergoes some reboot or crash.
My Mac is overall very stable. I only reboot it when I do something that I couldn't do on Linux and doesn't work very well on Windows... That is, rapidly plug in and disconnect strange projector devices. Windows would give me a sure Purple Screen of Death (they removed the BSoD by making it purple). Linux, well I'd spend days reconfiguring X.
--Andrew C. Oliver
Read the rest in Hacking Log 4.0: Phase II
we need to be very cautious about assuming that the world needs both a binary and text encoding -- people gripe about parse speed of XML as if it will be faster when it's binary, and I think this is incorrect from two perspectives -- first is that we have shown XML-oriented protocols to be faster than binary in many cases, and second because there is still tons of room for improvement in text parsing speeds (the fact that gen 1 of XML parsers is slow simply proves that they are gen 1 parsers, not that text is inherently slower than binary).
--Joshua Allen on the xml-dev mailing list, Tuesday, 18 Nov 2003
[X]HTML + [CSS] + [Javascript] rule the portable web app interface game today, as evidenced by, well, just about anybody's homepage. But we're still not happy with it. Why? One big reason for me: Rich interfaces are possible, but anything sufficiently advanced requires massive amounts of DHTML-fu. I've been there. Three weeks and a dozen hacks later, I've still got an application to write.
--Chris Wilper on the xml-dev mailing list, Tuesday, 4 Nov 2003
--Robert Scoble
Read the rest in The Scobleizer Weblog shaper of the class remaining unchanged between encoding and decoding. The XMLEncoder takes a completely different approach here: instead of storing a bitwise representation of the field values that make up an object's state, the XMLEncoder stores the steps necessary to create the object through its public API.
--Joe Winchester and Philip Milne, Java Developers Journal, June 2003, p. 28
PowerPoint is for sissies. All right, not for sissies, exactly, but it's being done to death. PowerPoint Makes Everything Really Important in a Telegraphic Way. That's Fine in Some Cases, But It Gets Tiring When It Happens Too Much. Besides, PowerPoint is the triumph of the quick "fact" over the art of argumentation. And a lecture is, or should be, a kind of argument. It's more, too -- a chance to observe a voice, a body, a brain, and a personality engaging an audience with similar interests. If you put your bulleted ideas up on slides, your audience will look at the slides, not at you. You'll also be teaching them that What You Have to Say Can Be Summarized in a Few Words. Can it?
--William Germano
Read the rest in The Chronicle: 11/28/2003: The Scholarly Lecture: How to Stand and Deliver
RSS, by design, is difficult to consume safely. The RSS specification allows for description elements to contain arbitrary entity-encoded HTML. While this is great for RSS publishers (who can just “throw stuff together” and make an RSS feed), it makes writing a safe and effective RSS consumer application exceedingly difficult. And now that RSS is moving into the mainstream, the design decisions that got it there are becoming more and more of a problem.
HTML is nasty. Arbitrary HTML can carry nasty payloads: scripts, ActiveX objects, remote image “web bugs”, and arbitrary CSS styles that (as you saw with my platypus prank) can take over the entire screen.
--Mark Pilgrim
Read the rest in New York Times: NYT HomePage.
--Don Chamberlin
Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com-
In dusting off my presentation for XML 2003 I have a slide that explains why we needed XML and the problems with SGML. This slide is very old (at least 6 years):
SGML Problems
- High initial investment
- Complexity
- Too many options/features
- Vendors supported a subset of features
- Applications weren't portable because of various feature sets
- Lack of intuitive end-user software
- Fear of "pointy brackets" (<>)
As I read this list and after my experiences with XML Schema - I ask myself where are we? About the only thing I think we can take off of the list above is the "Fear of 'pointy brackets'" and that is a result of HTML, not XML.
--Betty Harvey on the xml-dev mailing list, Tuesday, 25 Nov 2003.
--W. E. Perry on the XML DEV mailing list, Monday, 24 Nov 2003
XML is based on free and open standards, so every time I see that phrase MSXML, I get a little nervous.
--Charles Goldfarb, October 23, 2001 (XML Journal 2-12, p. 34)
Is the message that is transmitted *what is transmitted* or rather *something else* that is encoded in the message. If you are primarily looking from the vantage point of an application which communicates with another application *as if via RPC*, then the interface is primary and the bits on the wire are secondary. On the other hand if you are sending a document from one place to the other, then the document is primary. XML was not designed to be the "perfect" RPC protocol. XML remains a great way to "encode" documents -- saying this seems like a tautology -- because to a large extent the XML *is* the document.
--Jonathan Borden on the xml-dev mailing list, Sunday, 23 Nov 2003
most people develop binary formats because they're simpler than XML, and they'd rather be getting on with writing their application than bothering with DTDs and SAX and DOM and stuff, in my experience. The only reason to be convered with XML goo is if you have an overriding concern regarding 'being able to view and edit the files in a text editor'!
--Alaric B Snell on the xml-dev mailing list, Saturday, 22 Nov 2003.
--Charles Petzold
Read the rest in Code Name Avalon: Create Real Apps Using New Code and Markup Model -- MSDN Magazine, January 2004.
--Michael Rys on the xml-dev mailing list, Tuesday, 18 Nov 2003
for any new protocols, XML should be considered to be the default encoding. The only exceptions, and there are always exceptions, would be applications like sending telemetry from Mars where every bit counts... (But, even in such cases, it would still be useful to have the XML support if only to assist in debugging during development...)
--Bob Wyman on the xml-dev mailing list, Wed, 19 Nov 2003
XML has succeeded in large part because it is text and because it is perceived as "the obvious choice" to many people. The world was a lot different before XML came around, when people had to choose between a dizzying array of binary and text syntaxes (including ASN.1). Anyone who tries to complicate and fragment this serendipitous development is, IMO, insane.
--Joshua Allen on the xml-dev mailing list, Tuesday, 18 Nov 2003
I have long held out hope that the big lessons of HTTP would be
(a) keep it readable (design time)
(b) maximise statelessness (design time)
then
(c) scale it horizontally (deploy time)
I'm on the verge of concluding that this message is a stillborn. A real tragedy given the amazing scaleability of the Web. If ever there way an example of how to "optimize" an IT system the Web is it :-)
So why doesn't the message get through?
Perhaps because it is not obvious where the $$$ are in a "keep it simple, keep it readable, scale it horizontally on the cheap." view
--Sean McGrath on the xml-dev mailing list, Monday, 22 Sep 2003
Unless the world goes to semantics free, invent your own elements, XML, HTML and maybe XHTML, will be the main users of CSS for a long time to come. Both semantics free XML and presentational HTML are bad things, in my view.
--David Woolley on the www-style mailing list, Monday, 20 Oct 2003
The evolution of the web interface for the humans away from the browser and into other parts of the operating system has been predicted on this list and elsewhere from some time now. Browsers are a kind of vestigial organ from the early days of hypertext that reemerged for the WWW. They aren't required, but they made a great HTML handler and a nice sandbox for newcomers to hypermedia to learn and develop the technologies over the Internet. They aren't necessary; just cheap and convenient. Like XML.
--Claude L Bullard on the xml-dev mailing list, Tuesday, 28 Oct 2003
It is amusing to me that at the time of its inception the impetus for the creation of "SGML for the Web" aka XML was to throw off the shackles of HTML and the notion of one-size-fits-all markup vocabularies..
--Dare Obasanjo on the xml-dev mailing list, Monday, 3 Nov 2003
Overall, there's a real dilemma for the Powers that Be: On one hand, "design by committee" is not a great strategy, at least <subtleDig target="XQuery"> if you want a Recommendation within 5 years of the Workshop. </subtleDig> :-) Coming up with a solid proposal by some back channel communications can improve both time to complete and quality. On the other hand, they just can't help themselves from getting wrapped up in Big Power Politics whenever some obvious need (like SOAP extensions for reliable messaging across diverse networks) comes along.
--Mike Champion on the xml-dev mailing list, Tuesday, 7 Oct 2003
The systems that have succeeded at scale have made simple implementation the core virTuesday,.
--Clay Shirky on the nec mailing list, Friday, 7 Nov 2003.
--James Gosling
Read the rest in Visualizing Complexity
I think there are a lot of developers out there doing silly things with XML, especially in web services. But, I do know that the most of developers I've come across in my work with XML much prefer working with Plain Old XML mapped onto Plain Old Objects. Anything else is treated as gorp that somebody or some tool is imposing on them, supposedly for their own good.
--Bill de hóra on the xml-dev mailing list, Sunday, 09 Nov 2003
the new features in XML 1.1 and Namespaces 1.1 are of very marginal value compared to the costs created by the discontinuity, for vendors and users alike.
--Michael Kay on the xml-dev mailing list, Monday, 21 Oct 2002.
--Simon St.Laurent on the xml-dev mailing list, Wed, 22 Oct 2003.
--Greg Papadopoulos, Chief Technology Officer, Sun Microsystems
Read the rest in On the hot seat at Sun |CNET.com.
--Tim Bray
Read the rest in Taking XML's measure |CNET.com
Companies interested in doing the WS-Thing would do well to limit the investment to HTTP, XML, WSDL, and SOAP (in that order), steer clear of /WS-.*/ for a while and concentrate on what matters: the *services*.
Take web data-mining. Web scraping is alive and brittle in 2003. Not because of the adoption of or completion of UDDI, WS BPEL, WS-CAF, WSDM, WSDM, WSIA, WSRP, WSRM, WS-Security (or lack thereof). But for the simple reason that web-based data providers are still thinking only of one user-agent. That's where the change needs to happen.
--Chris Wilper on the xml-dev mailing list, Tuesday, 7 Oct 2003
If we could go back in time and I were appointed Infallible Grand Dictator of XML, I would not have allowed entities in the first place (though I might allow named character references of some kind). Realistically, though, some people do like entities quite a bit, so it's unlikely that they would have been dropped even if we had known then what we know now.
--David Megginson on the xml-dev mailing list, Tuesday, 21 Oct 2003
In general it's a bad idea to have user-visible content in attributes, because you can't put markup inside attributes. Consider (for example) needing something like
Calculate d<sup>2</sup>:
or a possible need to mix right-to-left and left-to-right scripts with markup to differentiate source languages.
--Liam Quin on the xml-dev mailing list, Tuesday, 28 Oct 2003
Something that has been only slightly alluded to in this (and many other) "why use XML" discussions is the leverage the tool set brings you. If the there is one blessing that the adoption of XML has brought to the IT industry it is the institutionalization of best practices for parsing, tree building and traversal and other common CS algorithms. OO reuse is much easier when you have syntactic reuse.
--Peter Hunsberger on the xml-dev mailing list, Wed, 29 Oct 2003
Microsoft wants to take over the internet by simply turning Windows Longhorn itself into a browser that use its own next-gen markup languages that go way beyond the W3C standards.
--Gerald Bauer on the xml-dev mailing list, Tuesday, 28 Oct 2003®-based!
--Charles Petzold
Read the rest in Code Name Avalon: Create Real Apps Using New Code and Markup Model -- MSDN Magazine, January 2004
it would be incorrect to say that data model is more important than syntax (for example, it is wrong to say that binary XML is equivalent to text XML "because they both just serve as serialization for the data model, which is primal"). In other words, syntax *is* primal, but that doesn't mean data model is unimportant -- data model is subordinate to syntax, but it is very nice gravy.
--Joshua Allen on the www-tag, Friday, 24 Oct 2003
XInclude is a wrong-size-fits-nothing kind of thing. Other specs that need XInclude-like functionality often need something with _slightly_ different semantics, and end up specifying it themselves instead of reusing XInclude (<xsl:import>, for instance).
--Joe English on the xml-dev mailing list, Friday, 09 May 2003
The TAG wars were a feature, not a bug. They contributed to an incredible evolution in html and browser technology
--David Orchard on the www-tag mailing list, Wed, 22 Oct 2003
I think Web Architecture has a preference for error behaviour that encourages folks to Do The Right Thing... a 404 error message is useful because it can say "contact the author of the _referring page_" and such. XML halt-and-catch-fire was useful/necessary to make broken XML docs sufficiently painful to consumers that they'd contact the producer and get it fixed rather than just tolerating it.
--Dan Connolly on the xml-dev mailing list, Tuesday, 21 Oct 2003.
--David Woolley on the www-style mailing list, Monday, 20 Oct 2003
Microsoft's biggest enemy is themselves. They do things that make people very upset and engenders a lot of resentment.
--Mike Silver, Gartner Group
Read the rest in Linux took on Microsoft, and won big in Munich Victory could be a huge step in climb by up-and-comer
Debating the pros and cons of XML namespaces is like arguing about whether the internal combustion engine is a good idea. The world has moved on.
--Dare Obasanjo on the xml-dev mailing list, Tuesday, 22 Jul 2003.
--Simon St.Laurent on the xml-dev mailing list, Sunday, 17 Nov 2002
I learned from watching the DOM WG debate whether HTML in a browsers is "really" a tree or "really" lists embedded in lists that it is more or less impossible to persuade anyone who strongly believes in the opposite point of view. That's why the W3C DOM has both tree-traversal methods and list indexing methods; it would be logically complete with one or the other, but to choose one or the other would create winners and losers, which is bad politics in the standards world.
--Mike Champion on the XML Dev mailing list, Saturday, 01 Mar 2003
Any Windows user who goes on using Explorer after he or she learns that Mozilla is available is a masochist who should seek immediate psychiatric help
--Robin 'Roblimo' Miller
Read the rest in NewsForge: The Online Newspaper of Record for Linux and Open Source
Apple's experiment is a hopeful sign, but it will only be effective if independent labels are allowed compete directly with the major labels. When some are able to offer songs at 15 cents a copy to compete with songs offered at 99 cents a copy, then we will have the kind of competition that could explode the Internet as a medium for distribution.
--Lawrence Lessig
Read the rest in Online NewsHour: Forum -- Copyright Conundrum
We have requests to support things such as the Japanese calendar or complex numbers in DSDL and I think we'd better find a way to let people define what they need rather than attempting to be "universal" and define all the possible datatypes.
--Eric van der Vlist on the xml-dev mailing list, Friday, 10 Oct 2003.
--Noah Mendelsohn on the xml-dist-app mailing list, Thursday, 9 Oct 2003
On a lot of machines, "text" and "binary" files are the exact same thing, only the text files hold all their data in a form which a simple interactive editor like "vi" can manipulate. "Binary" usually just means that the data is packed in nice and tight, and nobody would even consider messing with it by hand. They'd use a library which has some functions built into it to manipulate the data in semantically self-consistent ways. For instance, many "binary" files have checksums in them. If you change a bit anywhere, you need to update the checksum. Many formats have ways of mangling them completely: if your file is block-coded with block headers describing the content and length of each block, one insert within a block will kill the file. Unless you update the block length, the file has become meaningless. If a file describes matrix-like data, the matrix dimensions must be consistent. You insert or delete an entry, and the matrix rows are skewed. The file is invalid. Many files have complex, built-in self consistency which must be respected by any tool which manipulates them.
--Graydon Hoare
Read the rest in grieve with me, blue master chickenz
in the WWW world, conformance to SGML has never been much more than lip service. Nothing that the WWW related specifications say about SGML should be taken at face value. It suffices to say that none of the common browsers ever supported HTML as defined as an SGML application, e.g. supporting tag minimization.
--Jukka K. Korpela on the www-style mailing list, Thursday, 9 Oct 2003
Garbage in is easily detected. One the more intimidating things about working with XML is that everybody can read it and you can't waffle your way out of problems half as easily :) I've had business owners point out problems with data in XML because they were standing behind me and noticed something. I have never seen a business owner do that with binary formats even with said (not so little) viewing tools, and that includes rdbmses. I have a business user who is happy (nay, wants) to have the raw XML emailed, until will build out reporting during a future iteration. XML is a 'visually tactile' technology, which I think is important.
--Bill de hòra on the xml-dev mailing list, Saturday, 20 Sep 2003
Google and Amazon did some cool stuff. XMethods has nice listing of (mostly "toy") public web services. But in large part, the useful "services" on the web are still HTML-only.
--Chris Wilper on the xml-dev mailing list, Tuesday, 7 Oct 2003.
--Bill Venners
Read the rest in The Human Side of XML.
--Tim Bray
Read the rest in Taking XML's measure |CNET.com
XML operates in an abstract universe substantively divorced from the thought patterns of the vast majority of human beings. For many users of computer systems XML is, quite bluntly, bloody difficult. To many, XML is incomprehensible.
--Andrew Watt on the xml-dev mailing list, Wed, 1 Oct 2003
When people ask me to list the advantages of markup languages like XML, this is usually one of the first things that comes up -- XML is extremely Sneakernet compatible, so that you can burn all of your data onto a CD or floppy, carry it 200 meters, and then load it into the secure computer with no loss. Try that with CORBA or DCOM.
--David Megginson on the xml-dev mailing list, Friday, 3 Oct 2003
almost all users are now using a browser that is not going to be upgraded, short of replacing their operating system.
--John Cowan on the Unicode mailing list, Friday, 26 Sep 2003
Beware programmers trying to treat XML like a set of database records. Many such programmers blame XML when then cannot load an *entire* record set into memory. The same programmers would never contemplate loading an entire database into memory. Approaching XML processing the wrong way (its just a database, right?) can lead to non-sequiters like binary XML.
--Sean McGrath on the xml-dev mailing list, Thursday, 25 Sep 2003
It's no secret that the advantages of upgrading operating systems or application software has diminished quite significantly over the last few years. If you look back over history, there were great advantages from one release to another. You just don't get that anymore. You just don't get the bang for your buck switching from 2000 to XP.
--Toni Duboise
Read the rest in Microsoft feeling Office heat? | CNET News.com
except for trivial, academic cases RDF Schema and OWL do not have the robustness to capture the dynamically changing nature of real-world semantics. To do so, we must go beyond these ontology languages.
--Roger L. Costello on the xml-dev mailing list, Sunday, 28 Sep 2003
XHTML, although it is technically an XML application, is essentially HTML with a tighter, more consistent set of rules. If you know HTML, you can learn XHTML in fifteen minutes -- and check your work for free via the W3C or WDG online validation services. Using these services helps you remember the rules and avoid many browser problems. Some popular browsers don't support the proper XHTML MIME type and for that reason some standards geeks won't use XHTML yet. But the benefits of automated workflow and seamless interaction with XML-based publishing tools, syndication and description formats and Web services to my mind far outweigh that issue.
--Jeffrey Zeldman
Read the rest in Meet The Makers - Creative people in a technical world.
One of the most important characteristics of XML, as compared with many, many competing formats for the storage and/or transmission of data is that it is textual (in the sense of being conceptually a sequence of characters, and represented on the wire -- at least so far -- as such). Since much of the XML which I care about is also a digital representation of texts (in the sense of being natural-language utterances with a certain degree of intra-document linguistic and thematic cohesion), it troubles me to think that labeling XML as text buys us nothing of value.
--C. M. Sperberg-McQueen on the www-tag mailing list, 17 Sep 2003
Good browsers support GIF, JPEG and PNG without any problem: if the infrastructure allows plurality, then having a few different mainstream alternatives with different tradeoffs gives richness. This is where, most notably, W3C XML Schemas fails: it does not provide a mechanism to allow parts of it that fail to be readily improved or swapped out. People are stuck with the whole thing.
--Rick Jelliffe on the XML Dev mailing list, Thursday, 25 Sep 2003
People seem eager to forget how the world was before XML 1.0. Too-clever people can argue all day that "XML 1.0 is qualitatively not much better than CSV". But this misses the point. XML 1.0 has been able to achieve a degree of ubiquity and platform support that makes it "the obvious choice" for people who previously had to choose between various CSV, ASN.1, etc. The impact of that contribution is hard to overestimate. Why people are so hasty to go back to a world of multiple, incompatible encoding techniques is beyond me. For God's sake lets be happy that we have XML 1.0 and progress to the new millennium where we get to argue about incompatible schemas instead.
--Joshua Allen on the xml-dev mailing list, Wed, 24 Sep 2003
The number of complaints W3C gets about spam just because it's written in HTML with a <!DOCTYPE...> pointing to us... well, it doesn't make me feel like more end users should get to see the textual codes underneath the hood.
--Dan Connolly on the www-tag mailing list, Thursday, 18 Sep 2003
The sad fact of the matter is that the majority of the software development community suffer from premature optimization tendencies. Its like an illness we fight all our lives. I'm still fighting it but at least now, after twenty years, I realize I'm ill :-)
Vendors know all about this predisposition to myopic optimization, so they give the market what the market wants (as opposed to what the market actually needs) which is stuff that feeds the urge to optimize. Phrases like "tight integration" or "industrial strength" are used to deflect attention from what is really going on - more often than not - dubious engineering. It amounts to the same thing - feed the craving for optimization - however ill conceived it might be.
--Sean McGrath on the xml-dev mailing list, Monday, 22 Sep 2003
In my experience, actual XML parsing generally takes so little time in a real-world project that optimizing it to zero would still bring no noticeable gain. The only exception would be something dealing with a constant, fat stream of XML data in real time (news feeds are too low-volume to qualify).
--David Megginson on the XML Dev mailing list, Sunday, 21 Sep 2003". Yes I do draw a direct correlation with text wire formats over internet protocols with cost-effective high relevancy integrations shipped earlier. Call me biased, but I honestly don't know how anything got done at all in middleware before XML and HTTP came along.
--Bill de hòra on the xml-dev mailing list, Saturday, 20 Sep 2003
XML (well, SGML) was invented to be a convenient way for humans to "mark up" text, interactively, so it could be later searched, formatted, indexed, referenced, and hyperlinked by a computer. this is a good goal, because plain text is conventionally quite difficult to process by a computer. computers don't have any of the brains that we do, and can't extract "meaning" out of text. it's a hard problem. decades of research, little real progress. so for plain text, SGML was a good idea (badly implemented tho). XML fixed the bad implementation aspect, but otherwise its goal was mostly the same.
The XML standardization process, however, took place during this period of very rapid growth of the internet (still underway), and as a result it really had the web as a backdrop to all the discussion about text. And the web really hilighted something people had sort of known for a long time but not had the guts to mention, which is that proprietary storage and transmission formats really suck. You can't interoperate with someone serving up a proprietary standard unless you buy their product, which is the whole idea, but it sort of feels like blackmail when you know there are free standards kicking around. It feels like someone trying to put a tax on your communications, which is really crappy.
--Graydon Hoare
Read the rest in grieve with me, blue master chickenz
It is a sign of the maturing and vitality of XML applications and the expertise of its users that books are starting to appear about advanced extensions to XML, or about applications built atop it. So, for example, some have written about XLink and XQuery. But those are very specialised extensions. By contrast, Harold has put together an advanced overview of ALL XML.
--Dr. Wes Boudville
Read the rest in Barnes & Noble.com - Effective XML: 50 Specific Ways to Improve Your XML
If I own a URI for my car and I assert my car is Blue, that doesn't make it true. And if eleven other people assert that it's Green, the fact that they're other people doesn't make their assertions false.
--Norm Walsh
Read the rest in IRC log of tagmem on 2003-08-18
Is it true to say that the size of "" is 743x1155 pixels, or is it true to say that it is 77x53 cm? It can't be both.
--John Cowan on xml-dev mailing list, Thursday, 24 Jul 2003
XML is about syntax and nothing else. People who think otherwise have been misled by the XML hype of yesteryear.
--Dare Obasanjo on the xml-dev mailing list, Tuesday, 12 Aug 2003
BitPass' predecessors failed for a variety of reasons and.
--Scott McCloud
Read the rest in Misunderstanding Micropayments - Scott McCl.
--Clay Shirky
Read the rest in Shirky: Fame vs Fortune: Micropayments and Free Content
<>
--Bill Venners
Read the rest in The Human Side of XML
Choice, to most users, is the ability to choose any program they wish, and have it install and run seamlessly, without affecting any other application already installed; without requiring them to know which GUI they're running (or even that they're running a GUI); without altering path statements; without editing configuration files; without facing a command prompt; and without having to compile any source code; create any makefiles, or any other programming task that only developers are fond of..
So far, the open source community has been highly sensitive to the needs of power users, hobbyists, and centralized IT departments, but highly insensitive to the needs of average, technically (and sometimes literally) illiterate users. Many people will argue that the public should be educated to value software choice and to see Microsoft's impositions and removal of choice for what it is. But it is a grave mistake to stake Linux' future on the hope that millions of people will be inspired to software activism, that they will take the ideological high road when all they want is to buy a piece of software that works with a piece of electronics.
--A. Russell Jones
Read the rest in Linux vs. Windows: Choice vs. Usability
In my experience it's a serious source of interoperability problems when people try to pretend that XML is a data model, not a syntax; they are (to quote you) "trying to mask the existence of XML altogether". Your application and mine probably have profoundly different needs in the data-modeling department, XML lets us interoperate anyhow. I've always thought that's where the big win was.
--Tim Bray on the xml-dev mailing list, Monday, 10 Feb 2003
When XML 1.0 began, it used some fairly simple criteria to create a markup technology that was (relatively) easy for programmers to implement and use. I think XML failed the Desperate Perl Hacker test, but it succeeded to the point where lots of parsers became available and tools around those parsers became available.
--Simon St.Laurent on the xml-dev mailing list, Friday, 02 Aug 2002
A lot of the business in the Internet marketplace, for reasons that I cannot quite fathom, had what I call "shoestring operations". One example of "shoestringing" would be to not mirror critical drives. Another would be to skip backups, or to only do them infrequently. Another would be to hire a "technical staff" that couldn't code/support/maintain/whatever their way out of a paper klein bottle.
Basically, it is "operating on the cheap". As with lacking insurance, there's a gamble involved. If nothing goes wrong, you win. If something goes wrong, you lose.
--Andrew Gideon on the wwwac mailing list, Thursday, 28 Aug 2003
DSDL is necessary because other XML schema languages (primarily W3C XML Schema) do not meet the needs of "document heads", and document validation is too complex to be done using a single language. Our goal is to propose a set of specifications which will include a framework, several schema languages (including Relax NG and Schematron), a datatype system, and other pieces needed for document validation.
--Eric van der Vlist
Read the rest in XML.com: DSDL Interoperability Framework [Apr. 30, 2003]?
--Mike Champion on the xml-dev mailing list, Friday, 11 Apr 2003
I've never been one for pussyfooting around when it comes to liberating what some corporation or mogul calls "private property." I don't really give a shit about capitalism. I think it's a scam. Rich guys who own everything trade stocks, and the rest of us, who own the vast majority of nothing, watch welfare wither away. If we make something beautiful and try to make a living by selling it, we can't own it. My beautiful thing will be the property of some company that has slapped a cover on it.
I'll leave it to Lawrence Lessig to explain how copyright limitations can nourish free trade and moneymaking. I'll let Declan McCullagh explain why there is no contradiction between capitalism and civil liberties for all. I don't care if my file-sharing cripples the economy. I want to rebel against the property holders, the people who took away our beautiful things and called them commodities. Until culture belongs to all of us equally, I will continue to infringe.
--Annalee Newitz
Read the rest in AlterNet: TECHSPLOITATION: Why I Infringe.
Between viruses and spammers and just plain old bad code, the net is now subject to a heavy, and increasing level of background packet radiation. And the net has very long memory - I still get DNS queries sent to IP addresses that haven't hosted a DNS server - or even an active computer - in nearly a decade. Search engines still come around sniffing for web sites that disappeared (along with the computer that hosted them, and the IP address on which that computer was found) long ago.
Sure, most of this stuff never makes it past the filters in my demarcation routers, much less past my inner firewalls. But it does burn a lot of resources. Not only do those useless packets burn bits on my access links, but they also waste bits, routing cycles, and buffers on every hop that those useless packets traverse.
--Karl Auerbach
Read the rest in Boston.com / News / Nation / Saboteurs hit spam's blockers on the xml-dev mailing list, Tuesday, 26 Aug 2003.
--Dave Thomas
Read the rest in Plain Text and XML
All font tags do is double the bandwidth that's needed to view your site, while undercutting underlying document semantics that might help your page travel beyond the traditional desktop browser. They hurt you, they hurt your users and they offer no compelling benefit to offset the damage they do.
Close to 100% of all desktop browsers now in use can at the very least understand font styling created in CSS1. Anyone using a browser that can't do this -- and we're talking Internet Explorer 2 and Netscape 3 -- doesn't care whether your site is set in Verdana or Georgia or Times. They just want the information. There's no need to feed them 20K of font tags per page and there's certainly no need to force the rest of your users to download that junk with every new page they load. One little CSS file can handle all your site's font styling and much more and once downloaded, it stays in the visitor's cache, saving bandwidth on every page of your site.
Traditionally, font tags have been used not only to control presentation, but also to replace rudimentary HTML or XHTML structural markup, such as the paragraph tag or headline tags like h1 and h2. Even in a non-CSS environment, and there are very few of those out there, at least in the browser space, if you use good simple structural elements like the paragraph tag and the h3 headline, the meaning comes through whether people see your chosen font or not. Using structured markup instead of font tags and other 1997 junk makes your site as friendly to a Newton handheld or a text-only browser as it is to a modern desktop browser or a Web-enabled cell phone and at a fraction of the bandwidth and development cost.
--Jeffrey Zeldman
Read the rest in Meet The Makers - Creative people in a technical world.
Postel's dictum is in fact the problem here. If implementation A is liberal in what it accepts and implementation B is liberal but in different places, then finding conformance bugs in A and B can be difficult. It can also lead to interop problems when implementation C is brought into the picture, and has to "work around" the two non-overlapping liberalities.
--Rich Salz on the xml-dev mailing list, Sunday, 24 Aug 2003
There comes a point in the life cycle of any system when adding one more patch is the wrong solution to every problem. Eventually, it's time to rethink, refactor, and rewrite. For DocBook, I think that time has come.
--Norm Walsh
Read the rest in More Ruminations on DocBook
Since the idea of client-side XSLT is pretty much dead in the water, a good case can be made that it doesn't need a single standard either. We might be better off with multiple competing implementations, each with its own unique feature set. Stylesheets might not be portable amongst different processors, but you only use one processor at a time anyway!
Freeing XSLT's evolution from the W3C process would allow more room for experimentation and innovation. It would also provide an opportunity to see which ideas are worth keeping and which can be discarded, and then throwing the latter away!
--Joe English on the xml-dev mailing list, Thursday, 19 Jun 2003.
--Rick Jelliffe on the www-tag mailing list, Monday, 21 Apr 2003
the whole issue with the Web is that it has been designed for people but that because of an unexpected huge success people have always wanted to use it for programs.
That was one of the goal of XML (SGML on the web so that programs can use the web to exchange usable documents) and this is what Web Services and the Semantic Web are trying to leverage on the success of a system designed for people to exchange between applications.
--Eric van der Vlist on the xml-dev mailing list, 24 Jul 2003
It is unlikely that a spec will be successful unless specialists in the field complain that it is far too simple for real-world use ("I've been working with markup since 1988, and I know that in industrial-strength projects we need to ..."). Pre-W3C HTML is the classic example, but remember that networking types once looked down their noses at TCP/IP as well ("It's fine for academic research, but ..."). Sometimes the specialists get their revenge in v.2 by joining the process and smothering the spec to death with new features.
--David Megginson on the xml-dev mailing list, Sunday, 27 Oct 2002
Stop trying to pin down arbitrary concepts using a unique URI. It is not necessary for there to be a canonical URI to identify "the Porsche 911". It is sufficient to be able to say "the car in *this picture*" or the car "described in *this advertisement*". Question: does tel:555-1234 identify a company, a department, a telephone handset, the person who answers it, the employee role of the person who answers it, the person who is *supposed* to answer it, or none of the above? Answer: it doesn't matter! Because any reasoning agent can say "the company whose number is..." or "the employee who answers when you call..." or any other necessary clarification. If telephone numbers do not identify a unique concept, neither do URIs.
--Michael Day on the www-tag mailing list, Thursday, 24 Jul 2003.
--Tim Bray on the xml-dev mailing list, Thursday, 05 Dec 2002
in the general case it is not possible for any XSLT implementation to define the appropriate collation rules for all possible uses of sort--the variance even within a single language is too great, as evidenced by, for example, the discussion of back-of-the-book index sorting in the _Chicago Manual of Style_. In addition, the Unicode standard is very clear that the ordering of characters in the Unicode character set does not define the collation sequence for any language or writing system. While most alphabetic languages have a natural or default collation order, sylabic and ideographic languages mostly do not.
--W. Eliot Kimber on the xsl-list mailing list, Saturday, 09 Aug 2003
The.
--Jakob Nielsen
Read the rest in Information Pollution (Alertbox)
The first thing to remember is XML is syntax not semantics. Repeat it to yourself often. It is the biggest mistake made by many people who work with XML, even the supposedly experienced old hands. This mistaken assumption leads to statements like "XML is a self describing format" when in truth it is no such thing since self describing implies that the semantics of an XML document are self contained and inherent in the document which is a bogus claim that is obvious to anyone who's spent 5 minutes working with XML.
--Dare Obasanjo
Read the rest in kuro5hin.org || Another One For Jon Udell
My attitude with reuse is that reuse is something you evolve. You build an application to solve the problems of the application. If you build another similar application, then you begin to factor out some common pieces. If you build a third similar application, you factor out more common pieces. Then you'll begin to have something like a reusable framework. My definite recommendation is don't attempt to define a reusable framework first and then build applications on top of it. Rather, evolve the framework while building the applications.
--Martin Fowler
Read the rest in Flexibility and Complexity.
--Jim Gray
Read the rest in ACM Queue - Content
RSS should be a poster child for XML namespaces, because everyone and his dog wants to extend it but keep the core syntax / semantics. Instead, namespaces are (as Danny Ayers points out elsewhere, can't remember where offhand) one of the principal cleavage points in the RSS world. Are those resisting namespaces just being stupid/stubborn, or are they the "canaries in the coalmines" dropping over from the toxic namespace fumes? I don't know ... but I hear the sound of people voting with their feet. Maybe the XML Supreme Court should steal the election and suspend civil rights until this non-orthodoxy is corrected :-) But seriously, this challenges the XML world to show that the namespace spec really adds more benefit than cost in the real world, or to clean it up until it does.
--Mike Champion on the xml-dev mailing list, Monday, 21 Jul 2003.
--Simon St.Laurent on the xml-dev mailing list, Thursday, 31 Jul 2003
If we want to assign a URI to everything including digital documents, then we have the choice that a URI representing something other than a digital document must return an error when GETted, or that some digital documents cannot be given RDF properties.
TBL has said that "" is the URI for the W3C consortium. But he has not said what the URI is for the hyperdocument which begins "The World Wide Web Consortium was created in October 1994". It cannot be the same URI on pain of contradiction.
--John Cowan on the xml-dev mailing list, Thursday, 24 Jul 2003
I could use PUT and DELETE in an XForm. Only GET and POST are allowed in HTML
--Bill De Hòra on the xml-dev mailing list, Wed, 23 Jul 2003
Xerces, like the rest of Apache, is open source software. If you have complaints about code quality, you are more than welcome to get involved in helping to improve it. Or you can go with a purchased copy rather than a free copy; generally, the largest advantage of doing so is that you get an actual customer support team.
--Joseph Kesselman on the xerces-j-user mailing list, Thursday, 24 Jul 2003
In case anyone doesn't know, there is an xml-technology specific newsgroup: comp.text.xml. It seems of real use to people who are, e.g., having trouble using XML Schema or DTDs, and the "Is XML a bagel or a database?" kind of question usually draws a constructive response.
--Bob Foster on the xml-dev mailing list, Wed, 23 Jul 2003
If one accepts the assertion that XML itself is only syntax, then namespaces work reasonably well. If one attempts to use them with any other semantic, results vary.
--Claude L Bullard on the xml-dev mailing list, Tuesday, 22 Jul 2003
RSS is much like HTML -- it evolved because the need for it was there, but is now reaching a point where it is becoming too integral to the overall structure of the web not to be pushed into a formal process. Personally, I would love to see it be submitted as a note into the W3C, which would in turn pretty much force it into at least standards compliance with the rest of the XML technology base.
--Kurt Cagle on the xsl-list mailing list, Monday, 21 Jul 2003.
--Jim Fuller on the xsl-list mailing list, Monday, 21 Jul 2003
Standardizing XML's syntax - and whatever semantics were needed for that, as I acknowedged - had clear benefits. The waves of supposedly semantic standardization that have followed have a much more muddled record, to put it politely.
XML's creators had the substantial advantage of three decades of work to reduce to the smallest possible set of tools, and I hope that after another decade we'll be able to boil those down further. In the meantime, however, I don't feel at all committed to standardizing and standardizing and standardizing.
--Simon St.Laurent on the xml-dev mailing list, Friday, 18 Apr 2003.
--W. E. Perry on the XML DEV mailing list, Friday, 28 Mar 2003
Today most. In the future, the image could be reused by reference and scaled to the right size, even if it had been included as inline markup in the Google main page. more complex, shape-based layouts like slate.com or cnn.com. SVG could noticeably improve the performance of the whole Web. But better performance is just the beginning. Things get really exciting when you consider the much richer Web that SVG enables.
--Paul Prescod
Read the rest in XML.com: SVG: A Sure Bet [Jul. 16, 2003].
--Dennis Sosnoski on the xml-dev mailing list, Thursday, 19 Jun 2003.
--Jakob Nielsen
Read the rest in PDF: Unfit for Human Consumption (Alertbox)
--Mitch Kapor
Read the rest in Mozilla Foundation Announcement.
--Benjamin Franz on the xml-dev mailing list, Monday, 14 Jul 2003
In my experience users of XML Schema language, particularly W3C XML Schema, fall into two broad classes. Those who want validation of XML documents (i.e. a message contract) and those who want to create Type Augmented Infosets (i.e. strongly typed XML). RELAX NG does the former but not the latter. W3C XML Schema does both, in my experience it doesn't do enough of the former and too much of the latter.
--Dare Obasanjo on the xml-dev mailing list, Thursday, 10 Jul
saying "thou shalt always use an XML API, or thou will perish" is probably a bit much. As another poster said, the experience of the developer is a key factor in the decision. Of course I've seen enough carelessness out there that advising people to use an API has become my default position. It's pert self-preservation. Let me tell you, it really bites to have to follow workflow back and forth in a complex project to see who is spitting out the broken XML.
--Uche Ogbuji on the xml-dev mailing list, Thursday, 19 Jun 2003
Un schéma ne suffit pas à documenter un vocabulaire XML et il est regrettable que les expressions "publier un schéma" ou "répertoire de schémas" soient rentrées dans le langage courant laissant à penser qu'un schéma suffit à ce qu'un vocabulaire soit immédiatement utilisable par tout informaticien.
--Eric van der Vlist
Read the rest in Quel langage de schéma XML choisir pour chaque usage ?
The test of whether a company understands the value of standards is when it recognizes that conformance is good even when the standard is bad.
--Michael Kay on the xml-dev mailing list, Wed, 9 Jul 2003
I am really tired of reading press releases about W3C accomplishments when none of it gets enabled on the Web until someone at Apache makes it happen.
--Roy T. Fielding on the www-tag mailing list, Wed, 9 Jul 2003.
--Amelia A. Lewis
Read the rest in XML.com: Not My Type: Sizing Up W3C XML Schema Primitives [Jul. 31, 2002].
--Dave Thomas
Read the rest in Plain Text and XML.
--David Megginson on the xml-dev mailing list, Saturday, 19 Apr 2003
the name "default namespace" was poorly chosen because it implies it acts as a default in the absence of something. It also implies there's just one. Simply think of a default namespace as any namespace that happens to have an empty string prefix.
--Jason Hunter on the jdom-interest mailing list, Thursday, 03 Jul 2003 Bray on the xml-dev mailing list, Thursday, 05 Dec 2002
Premature standardization with little-to-no implementation experience has led to some really awful standards, and a lot of the things being made into Official W3C Recommendations don't really need to be standardized in the first place, or at least not yet! XQuery is IMO a good example of this.
--Joe English on the xml-dev mailing list, Thursday, 19 Jun 2003
The IETF has an explicit contract with Unicode: "We'll use your normalization algorithm if you promise NEVER, NEVER to change the normalization status of a single character." Unicode has already broken that promise four times, so its credibility is shaky.
--John Cowan on the xml-dev mailing list, Friday, 27 Jun 2003
In a parallel universe where CSS layout and structured markup had always been the norm, it would be hard to wrap your brain around table layouts and spacer pixels. You'd be like, "But I can do this with the CSS margin. Why do I have to use this stupid transparent pixel GIF image?" In that same parallel universe where all browsers had always supported the same standard scripting language and Document Object Model, if you suddenly had to code for six incompatible browsers, you'd say, This is impossible. I'm sorry, doesn't this medium have any standards any more? Six code forks just to verify that address text entered in a form field is not malformed? You've got to be kidding.
--Jeffrey Zeldman
Read the rest in Meet The Makers - Creative people in a technical.
--Guido van Rossum
Read the rest in Programming at Python Speed
Python uses whitespace to *force* you to lay out your code so that the control structure is *obvious*. That leaves your mind free to concentrate on what the program really does.
Its sort of like separating content from presentation (an XML mantra) but applied to software.
--Sean McGrath on the xml-dev mailing list, Wed, 23 Oct 2002.
--Frank Sommers
Read the rest in Why Use SOAP?.
--Sam Ruby
Read the rest in Web services visionary.
--Martin Fowler
Read the rest in Tuning Performance and Process
XML output is almost always more complex than you think at any given time. I'm sure that even having written a lot of XML output code, I haven't exhausted the realm of potential gotchas. Using an XML API keeps the specialization where it belongs.
--Uche Ogbuji on the xml-dev mailing list, Thursday, 19 Jun 2003
The other sore spot is Microsoft's FrontPage, which many developers at public sector institutions are stuck with. FrontPage is not a Web editor, it's an Internet Explorer production tool. There is a difference. FrontPage is designed to make sites that only work in Microsoft's browser. This is not just my opinion, it's what Bill Gates said in testimony during the trial. Few commercial designers and developers use FrontPage, but as I said, too many folks in the public sector are told they must use it because there's no money in the budget for an additional Web editor like Dreamweaver.
--Jeffrey Zeldman
Read the rest in Meet The Makers - Creative people in a technical world.
One of my regrets about XML is that we didn't get rid of attribute normalization and I think we should either have required > to be escaped everywhere or should have allowed ]]> outside CDATA sections.
--James Clark on the xml-dev mailing list, Friday, 20 Jun 2003 on the xml-dev mailing list, Thursday, 19 Jun 2003
Dataheads think of an element as a container for its content, and if the container is removed, the content goes to Tumbolia with it. Docheads think of elements as basically annotations of ranges, and if the annotation is removed, the underlying content remains.
--John Cowan on the xom-interest mailing list, Wed, 18 Jun 2003
You can get an idea of how long browser support will take by looking at Cascading Style Sheets -- CSS 1 became a recommendation in 1996, CSS 2 in 1998. Years later we're still waiting for the major browsers to give us full compliance. Given the radical nature of XHTML2, I wouldn't expect any browser maker to jump in ahead of the game, making implementations before the recommendation is final.
--Owen Leonard on the XHTML-L mailing list, Tuesday, 25 Mar 2003
The Web *is* a triumph of keep it simple, 80/20, evolutionary design principles. Up-front design to eliminate the corner cases we have to deal with today would not have led to a better Web, it would have led to no Web; we'd be faced today with far worse problems of interoperability between more highly designed but incompatible systems. If HTTP and HTML hadn't been dirt simple in their first generation, they wouldn't have spread like wildfire. Think HTML is kludgy and under-designed? How would you like to be using Blackbird on Windows, PDF on Macs, Frame on Unix, and trying to build anything resembling what we have today out of all these highly-designed pieces?
--Mike Champion on the xml-dev mailing list, Thursday, 24 Jan 2002).
--Joel Spolsky
Read the rest in Joel on Software - Good Software Takes Ten Years. Get Used To it.
I've developed a number of vocabularies that could have used XInclude, but ended up using a simpler ad-hoc solution instead. The main reasons I didn't use XInclude are: XPointer was unnecessary for my needs (relative URIs were sufficient, I only needed whole-document transclusion); XPointer was unimplementable (at least by me); and in a number of cases leaving XInclude out meant I could leave out XML namespaces as well (since nothing else in the vocabulary called for them).
--Joe English on the xml-dev mailing list, Friday, 09 May 2003
Back in the days when I had time to hang out on the xslt list I found myself giving a use case where strong typing would help us. Now-a-days, I've worked around it so much I no longer want it. Essentially, we can annotate a node from the back end with a type attribute and be done with it once and for all; pretty much everything we ever needed to do with types is now possible.
--Peter Hunsberger on the xml-dev mailing list, Tuesday, 10 Jun 2003
OOP has made a virtue of binding data and behavior. (OOP's XML integration problems strike me as eerily familiar to its RDBMS integration problems, where a similar separation is popular.)
Trying to explain that mixing data and processing is fine while you're processing the data and awful when you're transferring the information between dissimilar systems doesn't always seem to go over well. Programmers seem to put a lot of effort into abolishing "dissimilar" instead of taking advantage of the separation. At least that's the lesson I've seen in most of this Web Services stuff...
--Simon St. Laurent on the xml-dev mailing list, Monday, 13 Jan 2003.
How this arises is clear: standards are increasingly being viewed as competitive weapons rather than as technological stabilizers. Companies use standards as a way to inhibit their competition from developing advantageous technology. As soon as technical activity is observed by political/economic forces, their interest rises dramatically because they see a possible threat that must be countered before it gains strength.
The result of this is a tremendous disservice to both users and consumers of technology. Users get poor quality technology, and because of the standards process, they're stuck with it.
--James Gosling
Read the rest in Phase relationships in the standardization process!
--Rick Jelliffe
Read the rest in XML.com: XML at Five [Feb. 12, 2003]
The big issue is not whether you use GIF or PNG. The big issue is whether you let a patent holder become a censor for your communications.
--Don Marti
Read the rest in Bell tolling for PNG graphics format? | CNET News.com
the rude shock is that XPath 2.0 has kept the name XPath giving the false impression that XPath/XSLT 1.0 users would need to migrate from one to the other!
To me they are different languages and there is no special reason to move to XPath 2.0 (unless of course you really need its features).
--Eric van der Vlist on the xml-dev mailing list, Friday, 06 Jun 2003
SAX is a terribly difficult way for programmers to deal with XML, especially if they are already struggling just to get their minds around XML itself..
--David Megginson on the xml-dev mailing list, Saturday, 19 Apr 2003
"Pull" APIs sometimes make the consumer's life considerably easier at the cost of sometimes making the producer's life considerably more difficult. It's easier to report good performance numbers in your component when you're in control of the processing loop. As a result, every step in the processing chain really wants a pull API on its input side and a push API on its output side...
--Joseph Kesselman on the xerces-j-user mailing list, Monday, 12 May 2003
Read the rest in Plain Text and XML
Truth is that designing a data format -- in XML or plaintext -- is hard work, and most people do a very poor job of it. (Raw) XML simplifies many of these issues. With XML, all of a sudden people can interchange data files without having to think about nasty grammar issues. Either an XML file you generate/receive is syntactically correct, or it isn't.
There's a *huge* benefit there, as Tim Bray points out every month or so. If I'm writing a program to parse your XML data, and my program barfs due to a well-formedness error, I know you are the guilty party. If your data parses xmlwf, then I know the problem is with my buggy code.
--Adam Turoff on the xml-dev mailing list, Tuesday, 6 May 2003.
--Ken Arnold
Read the rest in Perfection and Simplicity.
--Paul Kocher
Read the rest in Slashdot | Security Expert Paul Kocher Answers, In Detail
XML today certainly isn't what I anticipated by the phrase "SGML on the Web".
--Andrew Watt on the xml-dev mailing list, Thursday, 13 Feb 2003.
--Patrick Stickler on the www-tag mailing list, Wed, 22 Jan 2003
Simplicity has real value on its own that makes the system more usable. It's the difference between reading a 100-page manual and reading a 500-page manual. It is more than five times the size.
--Ken Arnold
Read the rest in Taste and Aesthetics
createAttribute creates an Attr which is not fully compatable with namespace-aware processing. It is *NOT* equivalent to calling createAttributeNS() with the namespace either blank or null.
DO NOT use createAttribute(), setAttribute(), setAttributeNode() or createElement() if you have any choice whatsoever. Nodes produced using DOM2 calls will be compatable with DOM1... but the reverse is *not* true.
--Joseph Kesselman on the xerces-j-user mailing list, Wed, 23 Apr 2003
It is true that there is some useless cruft in XML that was included only for political reasons: public identifiers, notations and external entities serve no function that MIME types (or URIs -- sorry, Simon) and URLs couldn't serve, but we had to keep them in XML as part of an unwritten ceasefire agreement with the SGML old guard (*), which was still powerful at the time and could have seriously hindered acceptance of XML both inside and outside the W3C; the other part of that ceasefire was to pretend that XML and SGML would coexist, with XML for lightweight Web and SGML for so-called "serious enterprise applications" (the vendors put paid to that idea by abandoning SGML so fast that we couldn't keep up with the press releases).
--David Megginson on the xml-dev mailing list, Saturday, 15 Feb 2003
A standard syntax enables communication between separate components. A standard object model is useless for getting your bytes from "here" to "there."
--Rich Salz on the xml-dev mailing list, Friday, 23 May 2003.
--Joel Spolsky
Read the rest in User Interface Design for Programmers?
--Ralph Nader
Read the rest in hot seat Ralph Nader, Patent Buster
The notion that you have to have an XML Schema or a DTD to work with this is one of the more unfortunate myths about XML out there. What you really need are basic relationships between senders and receivers. It's not like I need to create a formal contract every time I have a conversation - only some conversations require those.
--Simon St.Laurent on the cbp mailing list, Monday, 14 Apr 2003
My grandma used to say "a standard is, as a standard does". Participating in several list-servers for emerging standards as a potential user, I can add my observation that one reason for low success-rates from standards bodies is that they tend to be naïve about management of marketing requirements. Software companies work hard to support product marketing staff to force R&D to consider what's actually required by intended users. Many standards groups are like R&D departments enjoying a vacation from pesky market requirements (present company excepted!)
--Allen Razdow on the xml-dev mailing list, Monday, 19 May 2003.
--Brian Goetz
Read the rest in Tweak your IO performance for faster runtime
For some piece of data to be marked up, some one has to first define, document, publish, and publicize the schema. After that the fun part starts: dealing with competing schemas and standardization process. After standardization, millions of users have to learn how to use the new schema. Whole lot of work and wait, for all parties involved, just to see a glimpse of XML Heaven.
What if we can skip all that? What if people markuped content using their own names and structures, not those dictated by the central committee? Will the resulting chaos be unsurmountable? I believe not. I believe that constraints and mechanisms inherent within human languages and social structures lead to what I call Emergent Markup Languages, common tags and structures that emerge from natural behaviors of individuals following simple rules like "call it what it is" and interacting with their immediate surroundings and neighbors.
--Don Park
Read the rest in Mozilla
--John Battelle
Read the rest in Business 2.0 - Magazine Article - Is TiVo NeXT?
The notion that every single XML vocabulary needs to be blessed by six high priests in some standards org in order to be a viable format for data interchange is so 1980's.
--Don Box
Read the rest in Don Box's Spoutlet.)
--Jakob Nielsen
Read the rest in Will Plain-Text Ads Continue to Rule? (Alertbox April 2003)
That seems to illustrate a problem fundamental to the committee design process. In committee, members hash and rehash arguments seemingly endlessly over every point in the specification, but by a political process that depends on the personalities on the committee, their intransigence or lack thereof, their personal and company agendas, and their knowledge or lack thereof of the problem domain, eventually arrive at a compromise. New members on the committee might tip the balance in another direction at any level of detail, resulting in the discard or rethinking of the entire body of work. It is difficult for committees to see such an event as progress. As a consequence, committees develop a hard shell that repels challenges to basic assumptions as a turtle's shell repels predators. In the end, nothing is so precious to a committee as its hard-won consensus, as, in truth, committee members often cannot even recall the chain of reasoning, politics and personality that led them to a particular conclusion.
A reviewer of committee work, on the other hand, is often in the dark about the consequences of committee decisions until the entire process has unfolded. A "Holy shit!" comment at that stage cannot penetrate the committee's shell, and the committee work product marches on of its own momentum.
--Bob Foster on the xml-dev mailing list, Thursday, 8 May 2003
The kindest description I've heard of the W3C XML Schema datatypes collection is "baroque". The most accurate is perhaps "baroquen".
--Amelia A Lewis on the xml-dev mailing list, Monday, 12 May 2003
Virtually any program that's going to operate on text of some sort can operate on plain text as the lowest common denominator. Very often you get into a state where you want to work with some program, but it's.
--Andy Hunt
Read the rest in Plain Text and XML
Back in the Good Old Days, you could choose the base features that your problem needed for interoperability (or the code you were willing to write in the name of interoperability). DTD-Validity? Optional. Namespaces? Good idea, but optional. RDF? Take it, leave it, incorporate it later when people actually understand it. But at the end of the day, your data is either well-formed or it's not XML, and that's a surprisingly solid foundation for doing a lot more than we ever could with icky HTML, binary files or a plethora of plaintext formats.
What do we have today? A standards body that's dismissive of its hacker roots, acting more like an arbiter between vendors who are trying to build solutions to problems we're not trying to solve, and standardize products that we do not want to buy.
--Adam Turoff on the xml-dev mailing list, Saturday, 10 May 2003.
--Bill Venners
Read the rest in Silicon Valley?"
--Norman Walsh on the xml-dev mailing list, Friday, 24 Jan 2003
What many people really fear vis a vis Microsoft and XML is something like "don't worry you're pretty little heads about all that complexity, just use our GUI and wizards ... all will be fine ... trust us, trust us." So sorry, but trusting y'all up there in Redmond to ensure interoperability has not been a winning proposition in the past. Sticking with the simple subset of XML technology that can be understood by ordinary mortals and can be proven to interoperate with other Web Services tools is simple prudence, not paranoia about evil.
--Mike Champion on the xml-dev mailing list.
--Dave Thomas
Read the rest in Plain Text and XML.
--Kurt Cagle on the xsl-list mailing list, Wed, 19 Feb 2003.
--Michael Sperberg-McQueen
Read the rest in XML.com: XML at Five [Feb. 12, 2003].
--Guido van Rossum
Read the rest in Designing with the Python Community
There are web sites that hold very strange data for me, because they insist that my phone number has to be 10 digits and my address has to have a two-letter "state" with a limited range of values.
Yesterday I used a web site that demanded I enter two phone numbers, that had to be different. So I invented one of them.
I've always been sceptical about validation. It encourages people to enter incorrect data in order to get it past incorrect rules. It's one of the things that gives computers a reputation for being inflexible.
--Michael Kay on the xml-dev mailing list, Friday, 21 Feb 2003
XML is like sex, even when it's bad it's still pretty good.
--Tim Bray
Read the rest in ongoing iTunes Music Store and the WWW
King Canute has always been a role model at the W3C.
--Arjun Ray on the xml-dev mailing list, Wed, 30 Apr 2003
I'm not crazy about the concept of universal devices. I think the way you serve people is by optimizing the functionality of whatever it is you're building. If you try to make something universal, it does not do any of those things very well. The carriers have gone just a little bit too far in trying to consolidate all these things into one gadget--telephones, PDAs, cameras, MP3 players. They've become so difficult to use and thus compromise the features.
--Martin Cooper
Read the rest in Tech News - CNET.com
As a budding animator I've had tons of offers from everyone to display my animations on their web sites, set up agreements where people would host or mirror my animations. Some of these people just want to co-opt my work for their own gain. These media dinosaurs don't seem to understand that I can own AND distribute my own content by myself. Why should I attempt to strengthen someone else's brand with my own work?
Although the recording industry needs a wake up call, artists can benefit themselves from not making bad agreements. The simplicity and stupidity of the Internet can keep artists from having to make more complex and limiting agreements with greedy third parties. Then the idea of a recording industry as we know it would be just as outdated as the media dinosaurs themselves.
--Chris Hill, Ubergeek
Read the rest in SuperVillain: He Has More Friends Now
It has become abundantly obvious that the Internet does not only consist of the browser. Now people are very actively using IM, music jukeboxes, video players, online games, alternative interfaces. While the browser is as important as ever, it's not the be-all end-all.
-- Kim Polese
Read the rest in Future: Is there life after the browser? | CNET News.com
When finally the imperialist capitalist states implode and we get in socialist lalaland, when people will no longer steal, or take a sick day to protest, when that great day has come, people will also create honest and accurate meta-data.
Until then, don't hold your breath.
--Berend de Boer on the xml-dev mailing list, Thursday, 24 Apr 2003
one of the things that appeals to me about things like XHTML and particularly XML is the fact that it gives you a great excuse to block people running Netscape 4 from even being allowed onto your site, whilst still proclaiming your holier-than-thou-W3C-standards-compliant credibility.
--Steve Sweeney-Turner on the XHTML-L mailing list, Thursday, 27 Mar 2003.
--Jakob Neilsen
Read the rest in Low-End Media for User Empowerment (Alertbox April 2003)
there seems to be a persistent and growing misalignment between the dogmas of OOP and the realities of XML development. The WHOLE POINT of XML is to *break* the encapsulation of data behind interface methods, thus allowing interoperability at the data level rather than the object or API level. Of course, specific applications can and probably should build abstractions of the data so that their programmers can think of the XML as "invoice" or "catalog entry" objects, but it's not at all clear that the implementation of these objects would be well-served by being abstracted away from the "under the hood" details of XML (and HTTP, etc.).
--Mike Champion on the xml-dev mailing list, Monday, 16 Sep 2002
Does anybody really believe that XML is just UnicodeWithAngleBrackets, or is that hyperbole? For instance, does anyone really think it is an error for us to discuss an XML document using terms from the XML 1.0 specification such as elements, attributes, children, etc? These really do seem to strongly imply an interpretation of those characters into structures that can form a hierarchy.
Here's a view that I might call the "Only Structure" view: XML, like any language, has a surface syntax and associated semantics. "Unicode With Angle Brackets" describes the surface syntax, and the BNF in the XML Recommendation tells how to parse this syntax and what structures it expresses. The labelled structures expressed by an XML document are the semantics of the document.
--Jonathan Robie on the xml-dev mailing list, Thursday, 27 Feb 2003
For a one-shot application (ie. startup; process one document instance; terminate) the classloading overhead for the DOM API can be quite significant even with the most recent VMs. In some work I've done lately I've seen DOM classloading account for as much as 10% of wall clock time with the Sun 1.4.1_02 VM on Linux, Solaris and the other one. SAX, being a much smaller API, is at an advantage here.
--Miles Sabin on the xml-dev mailing list, Saturday, 19 Apr 2003
It's beginning to look like people have finally figured out the browser ought to be a browser, and if you need other tools, you can build other tools. Instead of trying to make the browser a Swiss Army knife, why not get a screwdriver and a wrench and other tools designed for the job?
--Barry Parr
Read the rest in Future: Is there life after the browser? | CNET News.com.
--Tim Bray on the xml-dev mailing list, Thursday, 05 Dec 2002.
--Sean McGrath on the xml-dev mailing list, Saturday, 18 Jan 2003
XML is human readable. I can open the iTunes Music Library.xml file, and INSTANTLY I know the format of that file. Using NSDictionary, I can write a program to display that information in like 3 hours (and I'm fairly novice). Compare that to before, without XML. How often do you see old reverse engineered file specs start off with "This first int has to be 9202 or the file won't work. Don't ask us why, it is just that way."
--Steven Canfield
Read the rest in NSLog(); - Deatherage Drivel on X's "Open Formats"
What if we had kept the Web open rather than having a zillion proprietary extensions from companies like Microsoft? Wouldn't a standards-compliant Web be a better place?. We have a sick market--an unhealthy market--because most of the Web is browsed with a single vendor's browser. That's not a free market. A free market would have genuine competition. A free market would never have allowed a single vendor to become so dominant.
--Bruce Perens
Read the rest in Upstarts: Evolution creates second wave | CNET News.com)
--Michael Sperberg-McQueen
Read the rest in XML.com: XML at Five [Feb. 12, 2003].
--Paul Graham
Read the rest in The Hundred-Year Language
We've never believed that Microsoft would truly make their XML format interoperable. Standard operating procedure with standards seems to be embrace, extend and exterminate. Despite the hype from their public relations department, I've seen no reason to believe that they would act any differently with XML.
--Gregg Nicholas
Read the rest in Microsoft limits XML in Office 2003 | CNET News.com
From the beginning, there was a question whether Microsoft was going to buy in completely to XML. Microsoft is often trying to spin their message, and they want to appear as if they buy into (open) standards. But they always put in the proprietary hooks somewhere in the final release of the product.
--Bob Sutherland
Read the rest in Microsoft limits XML in Office 2003 | CNET News.com
I always thought the <?xml-stylesheet?> processing instruction was a bad idea anyway, data should not define its own presentation rules even indirectly.
--Michael Kay on the xml-dev mailing list, Thursday, 2 May 2002.
--Guido van Rossum
Read the rest in Designing with the Python Community
In the long run, I think it's easier to make a URNs retrievable than it is to make HTTP URLs permanent, and that the W3C should stop trying to make an anti-URN policy.
--Larry Masinter on the www-tag mailing list, Tuesday, 8 Apr 2003."
--Rick Jelliffe on the xml-dev mailing list, Thursday, 6 Mar 2003
One reason XSLT seems especially mysterious, I think, is because people think of a stylesheet as a "program" that gets "executed". It's not: it's just a *specification* for a transformation, and rightly should contain the minimum possible programming logic. Mike Kay (or the guys at IBM, or MS, or your other friendly XSLT engine developer) wrote the program; I'm just running their code with my inputs.
--Wendell Piez on the xsl-list mailing list, Friday, 14 Feb 2003.
--Jean Paoli, Microsoft
Read the rest in XML makes its mark - Tech News - CNET.com
SGML was designed to make life easier for users, while XML was designed to make life easier for developers. Fortunately, it turned out that when developers are happy, they write lots of software, and that ends up making users happy as well.
--David Megginson on the xml-dev mailing list, Sunday, 16 Feb 2003.
--Simon St.Laurent on the XHTML-L mailing list, Tuesday, 19 Nov 2002.
--Michael Sperberg-McQueen
Read the rest in XML.com: XML at Five [Feb. 12, 2003]
The "Desperate Perl Hacker" argument was a bogus claim for XML 1.0 because of the existence of entities and CDATA sections but is quite farcical now with the existence of the Namespaces in XML recommendation (and it's bastard spawn "QNames in content").
--Dare Obasanjo on the xml-dev mailing list, Friday, 28 Mar 2003 Bray
Read the rest in ongoing--XML Is Too Hard For Programmers.
--Alaric B. Snell on the xml-dev mailing list, Thursday, 20 Mar 2003
when you nest one XML document inside another, you should nest it directly, without using CDATA. XML is specifically designed to allow such nesting. CDATA is saying "the angle brackets in here may look like markup, but they aren't". So if they are markup, why put them in CDATA?
--Michael Kay on the xsl-list mailing list, Friday, 7 Mar 2003
XLink is too complicated for the truly simple stuff and too [limited? / naive? / attribute-twisted? / URI-blinded?] for the harder stuff. From my perspective, it's a good clean miss of any 80/20 point whatsoever.
--Simon St.Laurent on the xml-dev mailing list, Monday, 6 Jan 2003
The presentation of a table is typically 2-dimensional. And paper and CRT screens work well for 2-dimensional presentation. And markup languages let you express data in a 2-dimensional way. And quite often this is really useful.
But I suspect that the power and convenience of the available tools for entering and presenting 2-dimensional data lead us to use this model even when a higher-dimensional model would be more suitable. The use of cells spanning rows or columns is a sign that the data is bursting the seams of the 2-dimensional model.
--Terrence Enger on the docbook mailing list, Thursday, 20 Mar 2003.
--Rick Jelliffe
Read the rest in XML.com: XML at Five [Feb. 12, 2003]
Read the rest in The Free Network Project
the slithering vermin who extort licenses of bogus patents have just heard that this is the Year of XML and are going after XML vendors rather than .coms or telcos or whoever they tried to leech off last year. Dealing with parasites is simply the price one pays for success. I guess it's better than having to deal with vultures, which is the price of failure.
--Mike Champion on the xml-dev mailing list.
--Don Park on the xml-dev mailing list, Wed, 12 Mar 2003
Any HTTP client can retrieve data from Amazon, eBay, or any other Web site, whereas a Web service client can only retrieve data from the subset of all Web services which expose the interface(s) it recognizes.
--Mark Baker on the xml-dev mailing list, Tuesday, 11 Mar 2003.)
--Rich Salz on the xml-dist-app mailing list, Friday, 07 Mar 2003
document creators who assume that consumers of their documents will be using all of the same ancillary technologies as they themselves (e.g. WXS) create informationally impoverished documents. If our goal is to insure the widest variety of the most useful documents, best practice demands that the syntactic text be the whole of it. If because you as author can delegate to a schema significant semantics which would otherwise be expressed in attributes or other inherent document content, you limit, if not preclude, some unexpected but valuable uses which I might make of your documents downstream. Every additional ancillary technology which you use effectively transforms your documents from what I understand as XML to something which is not 'my' XML, at least until I join in the proliferation and match your latest escalation.
--W. E. Perry on the XML DEV mailing list, Thursday, 27 Feb 2003.
--David Mertz
Read the rest in developerWorks: XML zone : XML Matters: Kicking back with RELAX NG, Part 1
Code first, then specify. Anticipatory specs for problems people haven't tried to solve yet are just wild, random shots in the dark; at best, they waste everyone's time, and at worst, they cause confusion and hostility. Most existing XML-related specs should not have been written yet: we don't need a spec to cover X until many, many people have been trying to implement X for a while and have discovered where a common spec might be beneficial. A new field of development shouldn't *start* with a spec; it should *end* with one.
--David Megginson on the xml-dev mailing list, Sunday, 27 Oct 2002
The very nature of XSLT development is vastly different than, say, Java development. While some shops may use XSLT extensively, I doubt that you'll ever see very many XSLT equivalents of the canonical 100,000 line Java enterprise system.
(The DocBook XSLT stylesheets come close at 69KLOC, which is quite impressive. To be fair, though, there's a lot of repetition there, and most XML vocabularies I've styled with XSLT aren't nearly as large, nor do they need to target as many outputs.)
--Adam Turoff on the xsl-list mailing list, Thursday, 6 Mar 2003
Life in the markup world used to be a lot woolier. We lived in caves carved out of ancient brick and mortar, ate consultants half cooked over a fire of burning DoD contracts, carried spears made of left-over dedicated word processors and dipped in DSR poison, wore the skins of our marketing staffs, and met once a year in Boston to celebrate the summer solstice with the local SGML Wiccans. We collected coffee cups from CALS contractors who traded them for the arcane incantations to invoke the Internet gods to command telnet demons to fetch an RFP from a bulletin board. We abused WYSIWYGers and sat in council huts debating the uselessness of modeling document structures in relational tables, all to the beat of the drums of The SGML Way (whatever happened to that band? one hit and pooof.. gone from the charts).
Life was short and brutal. Standards were long and brutal. Trips to DC were just brutal.
Things are better now. Where are my anti-psychotics?
--Claude L (Len) Bullard on the xml-dev mailing list, Wed, 5 Mar 2003
My.
--Don Box, Microsoft, on the xml-dev mailing list, Sunday, 9 Feb 2003
I wonder how many people that try to shoot down XML have ever had to hand write a parser for complex data transfers that use arbitrary data models. One parser bug can ruin your whole project.
--Doug Royer on the xml-dev mailing list, Monday, 03 Mar 2003.
--Nicholas Petreley
Read the rest in KDE 3.1 vs. GNOME 2.2: How GNOME became LAME - Feb 28, 2003
In most projects, accessibility has fairly low priority because project managers underestimate the number of people who are impacted by design problems. They think that they are just losing a handful of customers, whereas in fact they may be turning away millions of customers, especially among senior citizens, who constitute a big and rich group that's getting more and more active online.
--Jakob Nielsen
Read the rest in Reality Check for Web Design.
--Cory Doctorow
Read the rest in O'Reilly Network: Cory Doctorow's Bitchun' World: P2P Gone Wild [Feb. 27, 2003].
--Norbert Mikula
Read the rest in XML.com: XML at Five [Feb. 12, 2003]
XSD is being driven principally by Microsoft, because it is a critical first stage to Post Schema Validated Infosets (PSVI), which in turn is a way to deal with XML as binary objects. X# is on the horizon, and it will likely be the next logical stage of that - loading an XML entity will create a data-aware class that can be treated as a first class citizen in a binary format (and which can also conveniently get away from all of those pesky openness requirements that XML in its current form exposes).
--Kurt Cagle on the xsl-list mailing list, Wed, 19 Feb 2003
Interoperability is the key to XML's success -- let's keep it that way: say 'no' to subsets, profiles and alternative formats.
--Henry S. Thompson
Read the rest in XML.com: XML at Five [Feb. 12, 2003] and SMIL, since these open specifications have allowed for greater choice and more competition.
--Eve Maler
Read the rest in XML makes its mark - Tech News - CNET.com
The primary benefits of XML are its widespread, CONSISTENT usage which allows for the availability of several off-the-shelf tools and reduces vendor lockin. Encouraging consumers of XML to support ill-formed XML reduces the power of XML and induces fragmentation. If we arbitrarily pick bits and pieces of a standard to support then we cheapen the technology and reduce it to worthlessness. I'd hate to see XML on the 'web reduced to HTML during the browser wars with people simply checking if "it works well with Mark Pilgrim's program" or creating ill-formed markup simply to satisfy broken tools.
--Dare Obasanjo
Read the rest in XML.com: This Article is Quite Depressing."
--Michael Sperberg-McQueen
Read the rest in XML.com: XML at Five [Feb. 12, 2003]
The bare minimum requirements for a non-validating XML parser are unambiguous and hardly onerous. I'm tired of people making gross generalizations and using them as excuses for nonconformance, as if the complexity of the XML rec approached that of WXS.
--Evan Lenz on the xml-dev mailing list, Friday, 21 Feb 2003
SOAP is using XML because of the lingering hype wave from XML, not because XML itself is particularly well-suited to the tasks that SOAP performs. I'd say SOAP used HTTP early on for similar reasons, and I hope that just as SOAP appears to be leaving HTTP behind as it becomes clearer that SOAP and HTTP have little to do with each other, that SOAP may yet leave XML behind.
--Simon St.Laurent on the xml-dev mailing list, Thursday, 04 Jul 2002
Keep in mind, the same guy that provided Reagan his "plausible deniability" and the "creative solutions for Iran/Contra" that got him convicted of perjury now runs the Total Information Awareness program for DARPA. And he loves XML. Whether we like it or not, we are all making his job easier.
--Claude L (Len) Bullard on the XML-DEV mailing list, Wed, 19 Feb 2003.
--Tom Maglieri, Corel
Read the rest in XML makes its mark - Tech News - CNET.com.
--Jakob Nielsen
Read the rest in Intranet Usability: The Trillion - Dollar Question (Alertbox Nov. 2002).
--Marc Andreesen
Read the rest in Wired News: Conversation With Marc Andreessen.
--James Clark on the www-tag mailing list, Monday, 17 Jun 2002
HTTP was designed for a class of applications. Other applications which have very similar requirements for latency, transaction semantics, security, error recovery, communication of options, client and server capabilities, etc., might well fit as "reasonable" to layer on top of HTTP.
Unfortunately, most of the applications that layer on top of HTTP are, unfortunately, very much unlike the application that HTTP was designed for, and have very different requirements for security, transaction semantics, error recovery, etc. etc.
--Larry Masinter on the www-tag mailing list, Friday, 7 Feb 2003.
--Liam Quinn, W3C
Read the rest in XML makes its mark - Tech News - CNET.com users from the big-vendor hegemony that has ruled the computer industry for the last 50 years. The ability of user communities to develop their own data formats is a powerful force for freedom from vendor control.
--Jon Bosak, Sun Microsystems
Read the rest in XML makes its mark - Tech News - CNET.com.
--Ron Schmelzer
Read the rest in XML makes its mark - Tech News - CNET.com.
--Dave Hollander
Read the rest in XML makes its mark - Tech News - CNET.com
the issue is not whether validation (against XML Schema or any other schema language) is useful, but whether it's useful for that validation to change/augment the information that you get about the XML document within the transformation.
--Jeni Tennison on the xsl-list mailing list, Wed, 5 Feb 2003
SGML allowed extensive syntactic abbreviation, and SGML failed; XML forbade it, and XML succeeded. That's not the only reason that SGML failed, of course, but it was a contributing factor (SGML tools were just too hard to write, and markup errors were often too hard to locate and fix).
--David Megginson on the xml-dev mailing list, Friday, 17 Jan 2003.
--Uche Ogbuji on the xml-dev mailing list, Thursday, 23 Jan 2003
I used to wallow in conceptual filth and complexity in the false belief that hard work could lead to redemption. The horror of dealing with namespaces in DOM caused me to pledge my soul to Occam, and accept Pareto as my personal savior :~)
--Mike Champion on the xml-dev mailing list, Saturday, 20 Jul 2002
The strength and flexibility of REST comes from the pervasive use of URIs. This point cannot be over-emphasized. When the Web was invented it had three components: HTML, which was about the worst markup language of its day (other than being simple); HTTP, which was the most primitive protocol of its day (other than being simple), and URIs (then called URLs), which were the only generalized, universal naming and addressing mechanism in use on the Internet. Why did the Web succeed? Not because of HTML and not because of HTTP. Those standards were merely shells for URIs.
--Paul Prescod
Read the rest in XML.com: REST and the Real World [Feb. 20, 2002].
--Joe English on the xml-dev mailing list, Thursday, 23 Jan 2003.
--Tim Bray on the xml-dev mailing list, Friday, 15 Nov 2002'.
--W. E. Perry on the XML DEV mailing list, Friday, 31 Jan 2003.
--Bill de hÓra on the www-tag mailing list. The SOAP/WSDL/RPC paradigm works pretty well (at least compared to the easily available alternatives) for application integration on secure, highspeed LANs, and the people actually doing that stuff resist being lectured to by RESTifarians.
Some interesting dialogues are going to take place when people start working in the fuzzy middle where services must be loosely connected to each other yet tightly integrated with exisiting back-end systems. IMHO, neither the pure "design everything as a resource and access representations over the Web" nor the pure "design everything as an object and pretend the Web isn't there" approaches will suffice, so they'll have to cross-pollinate each other.
--Mike Champion on the xml-dev mailing list, Thursday, 30 Jan 2003
Between the last essay and now, the Supreme Court also decided the Eldred case, saying that Congress has unlimited power to extend copyright, thus making the limit of the "limited duration" unlimited.
This is Mancur Olson territory, where the effort required by the many to police the predations of the few is so high that special interests carry the day. For the average Congressperson, the argument is simple: copyright is a palatable tax that transfers wealth from the many to the few, and the few are better donors than the many. When the primary advantage of repealing that tax is something as unpredictable as cultural innovation, its not hard to see where to vote.
The Eldred decision costs us a shortcut. This will now be a protracted fight.
--Clay Shirky on the nec mailing list, Tuesday, 21 Jan 2003
The axis against Microsoft, certainly rooted in the carriers but with the Japanese CE companies starting to make overtures there, is looking more toward open-source solutions, so Opera is between the rock of Microsoft and the hard space of open source.
--Ross Rubin, eMarketer
Read the rest in Was Mac Opera gored on Safari? - Tech News - CNET.com
News flash: XML not invented as a serialization syntax for binary objects. Details at 11.
--Norman Walsh on the xml-dev mailing list, Sunday, 19 Jan 2003
Before the DOM, we had Netscape-flavored JavaScript and MSIE-flavored JavaScript. *That* was the problem that the DOM tries to solve.
There were and are plenty of language-specific APIs for SGML/XML processing, and none of them need to interoperate with each other. This isn't something that the W3C needs to standardize IMO.
--Joe English on the xml-dev mailing list, Friday, 20 Sep 2002
I get nervous when programming language experts set out to fix what's wrong with XML. Markup is supposed to save us from people like that. :-)
--Claude L (Len) Bullard on the xml-dev mailing list, Monday, 6 Jan 2003
Mixed content is the very meat of natural languages, and for that matter of poetry, whether the admixture is of verbal time, aspect, or mood. Perhaps the most fascinating characteristic of the variety in human language is the myriad ways in which they deal idiosyncratically--but deal beyond a doubt--with the mixed content of human syntax, semantics, and ultimately communication.
--W. E. Perry on the XML DEV mailing list, Thursday, 16 Jan 2003
"Markup" is information that you add to a text document in order to delimit, contain, or define the borders of certain content. If you want to indicate that some text within an outline or in an article should be treated as a heading, wrap it with <h1> or one of the other heading tags. The same goes for quotes, addresses, lists and list items, and more. The markup isn't part of the content, it just sits alongside, marking the content up into specific chunks. You can think of it as an overlay, like a transparent sheet showing political boundaries lying atop a picture of the Earth from orbit. The artificial boundaries change everything, and yet they don't really exist in the physical world.
--Steven Champeon
Read the rest in The Secret Life of Markup.
--Jason Brooks
Read the rest in Scoping Out Apple's Safari.
--B. Tommie Usdin on the xml-dev mailing list, Monday, 13 Jan 2003.
--Simon St.Laurent on the xml-dev mailing list, Monday, 30 Sep 2002?
--Rick Jelliffe on the xml-dev mailing list, Sunday, 12 Jan 2003
Scripts tend to do useful things, but in a way that's not obvious for machines to understand..
--Ian Jacobs
Read the rest in W3C releases scripting standard, caveat - Tech News - CNET.com
Use the DOM the least you can. If you have an alternative technique to do exactly what you want without using the DOM, then use it. This technology will be more accessible, and it will be easier for someone else to read it.
--Philippe Le Hégaret, W3C DOM activity lead
Read the rest in W3C releases scripting standard, caveat - Tech News - CNET.com
Forget the idea that namespace names are URIs. There's a lot of wishful thinking in the spec that says using URIs for namespace names is a good idea, and indeed a lot of people do use namespace names that look like URIs, but the bottom line is that they are actually random character strings and they don't mean anything.
--Michael Kay on the xml-dev mailing list, Saturday, 14 Dec 2002
Where RPC protocols try as hard as possible to make the network look as if it is not there, REST requires you to design a network interface in terms of URIs and resources (increasingly XML resources). REST says: "network programming is different than desktop programming -- deal with it!" Whereas RPC interfaces encourage you to view incoming messages as method parameters to be passed directly and automatically to programs, REST requires a certain disconnect between the interface (which is REST-oriented) and the implementation (which is usually object-oriented).
--Paul Prescod
Read the rest in XML.com: REST and the Real World [Feb. 20, 2002]
Linking is such a fundemental component to information processing that it's surprising there isn't one universal method to link two pieces of information out there.
It's also rather bothersome that nearly every W3C WG needs to invent a form of hypertext to handle simple linking. Look at <xsl:include/> vs. <xsl:import/>; perhaps there's a need for both types of inclusion in XSLT, but why did WXS need to reinvent nearly the same thing with <xsd:include/> and <xsl:import/>? (And let's not get started on XHTML and HLink.) And let's not forget the bandage XInclude provides....
--Adam Turoff on the xml-dev mailing list, Monday, 6 Jan 2003
Applications that depend on a PSVI now require a very complex, heavy-weight schema validation process, rather than a relatively simple parsing process.
--James Clark on the www-tag mailing list, Monday, 17 Jun 2002
It sounds so easy. First, get a bunch of people together who share a common need to interchange some type of data - say, invoices. Explain XML to them. Explain the significant technical benefits of having an industry standard schema for invoices.
Get technically minded individuals into a room with plenty of whiteboards and caffeine. Sometime later they'll emerge with a consensus model of what it is to be an "invoice" enshrined in some schema language (UML/XML Schema/DTD/RelaxNG/whatever).
Thereafter, all interested parties use the schema for data interchange and all is sweetness and light.
This makes 100% technical sense but it often doesn't work in the real world. The reasons it doesn't work have nothing to do with flavors of schema language or indeed flavors of markup language. It often doesn't work because of soft issues concerning people.
--Sean McGrath
Read the rest in XML - Journal - Soft Issues Surrounding Industry Standard Schemas
For years vendors have been telling me "don't bother your pretty little head over the bits on the wire, just put the data in here and it will come out there." Except when it doesn't. Except when I have the audacity to use a programming language or operating system that doesn't have the library. Except when I don't have the budget to acquire the library. If there's a library there I'll cheerfully use it, but I want guaranteed access to my own data in an open format as a basic condition of being willing to play. I think I'm pretty well in line with the CIOs of the Global 2000 in my feelings on this; these guys are all covered in scars from the API wars.
--Tim Bray on the xml-dev mailing list, Friday, 06 Dec 2002
We've spent a lot of time trying to undo .NET vs. Web services. Web services are a technology. We implement in Web services. They implement in .NET.
It is confusing. .NET seems to be every bit of software Microsoft has created. When it started, it was focused, but it's widened to such an extent that it's hard to get a handle on it. When you deal with Microsoft and .NET from a development perspective, you're talking about building on a Windows platform. The world is not restricted to that. The importance of Web services is that you have to communicate. If Microsoft has standards and can't interoperate with COBOL, the company has failed. The whole spirit of Web services is the communications aspect and interoperability. You don't have to ask Web services what they're running.
--Robert S. Sutor, Director of e-business Standards Strategy, IBM
Read the rest in Fawcette.com - IBM, Java, and the Future of Web Services.
--Jakob Nielsen
Read the rest in In the Future, We'll All Be Harry Potter (Alertbox Dec. 2002).
--David Megginson on the xml-dev mailing list, Wed, 11 Dec 2002
Quotes in 2002 Quotes in 2001 Quotes in 2000 Quotes in 1999 | http://www.ibiblio.org/xml/quotes2003.html | CC-MAIN-2016-40 | refinedweb | 18,551 | 59.43 |
67150/attributeerror-module-numpy-has-no-attribute-version
Hi Guys,
I have NumPy in my system. But I am not able to import the version function from the NumPy module. I got this bellow error.
from numpy import version
AttributeError: module 'numpy' has no attribute '__version__'
How can I solve this error?
Hi@akhtar,
To avoid this error you can use the commands one by one.
Uninstall previous version of numpy and setuptools
$ pip uninstall -y numpy
$ pip uninstall -y setuptools
Reinstall latest version of numpy and setuptools
$ pip install setuptools
$ pip install numpy
Hi,
I am using the latest version of Setuptools 47.3.1 and the latest version of Numpy 1.19.0. It is totally working fine in my system. If you are using Anaconda, then you can create a new environment and install the software.
>>> import scipy
>>> scipy.misc
Traceback (most recent call ...READ MORE
Hey @Nagya, replace
wn.mainlopp()
with
turtle.mainloop()
...READ MORE
Hey @Hannh, I had a similar issue. ...READ MORE
Hi@akhtar,
You have used unit8 in your code. ...READ MORE
You can use the rename function in ...READ MORE
The major difference is "size" includes NaN values, ...READ MORE
key error. I love python READ MORE
You'll have to install the pyaudio module ...READ MORE
Hi@akhtar,
I think numpy version is not compatible ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/67150/attributeerror-module-numpy-has-no-attribute-version | CC-MAIN-2021-21 | refinedweb | 233 | 69.58 |
25 April 2012 18:02 [Source: ICIS news]
LONDON (ICIS)--Most German drivers still avoid 10% bioethanol-blended gasoline (E10), more than a year after the ‘green’ fuel was first offered at the nation’s petrol pumps, industry officials said on Wednesday.
Karin Retzlaff, spokeswoman for Hamburg-based refiners’ trade group MWV, said E10’s market share was just 10 to 13%.
“That’s certainly far below what we had been hoping for,” Retzlaff said.
“We thought when E10 was introduced that it would become the standard fuel for gasoline-driven passenger cars in ?xml:namespace>
But many German drivers fear long-term use of E10 could damage car engines.
However, Bjorn Dosch, a spokesman for the German motorist lobbying group ADAC, said those fears were unfounded.
So far, there had not been a single reported case of engine damage in vehicles that carmakers certified as E10-compatible, Dosch said.
As such, ADAC expects market demand for E10 to grow, he added.
German drivers and environmental groups have also criticised E10 because of its potential impact on agricultural and food supplies.
But
Meanwhile, in | http://www.icis.com/Articles/2012/04/25/9553737/germanys-drivers-still-wary-of-e10-even-one-year-after.html | CC-MAIN-2015-22 | refinedweb | 184 | 62.38 |
I have some state values using in the
App.js as below
constructor(){ super() this.state = { text: "hello world" } } handleClick() { this.setState({text: "good morning}) } render() { return ( <BrowserRouter> <div> <Header /> <Switch> <Route exact path="/" component={() => <Home handleClick={this.handleClick()} {...this.state}/>} /> <Route path="/login" component={Login} {...this.state}/> /> </Switch> </div> </BrowserRouter> ); }
Inside my
Home component I have a button click function and on clicking the
handleClick function will trigger and the state value will change. I want to retain the new state value and pass it to
login component as well as
Home component after the login process is completed.
Right now when I goes to
/login and go back to
/ the default state value is getting inside the
Home component.
The state value is changing to new value on
onClick function and there is no issue with that. I have the binding methods added in the
App.js for the functions.
How can I retain the new state value to use inside
Home component after some other router is called?
Setting state in handleClick should be: | https://techqa.club/v/q/how-to-retain-and-pass-the-new-state-values-to-another-router-component-in-reactjs-c3RhY2tvdmVyZmxvd3w1NjEwOTYwMQ== | CC-MAIN-2021-39 | refinedweb | 177 | 66.94 |
basic_istream::readsome
Visual Studio 2005
Read from buffer only.
This method is potentially unsafe, as it relies on the caller to check that the passed values are correct. Consider using basic_istream::_Readsome_s instead.
Parameters
- _Str
The array in which to read the characters.
- _Count
The number of characters to read.
The unformatted input function extracts up to count elements and stores them in the array beginning at _Str. If good is false, the function calls setstate(failbit). Otherwise, it assigns the value of rdbuf->in_avail to N. If N < 0, the function calls setstate(eofbit). Otherwise, it replaces the value stored in N with the smaller of _Count and N, and then calls read(_Str, N). In any case, the function returns gcount.
// basic_istream_readsome.cpp // compile with: /EHsc /W3 #include <iostream> using namespace std; int main( ) { char c[10]; int count = 5; cout << "Type 'abcdefgh': "; // Can read from buffer or console // Note: cin::read is potentially unsafe, consider // using cin::_Read_s instead. cin.read(&c[0], 2); // Can only read from buffer, not from console // Note: cin::readsome is potentially unsafe, consider // using cin::_Readsome_s instead. cin.readsome(&c[0], count); // C4996 c[count] = 0; cout << c << endl; }
Referencebasic_istream Class
iostream Programming
iostreams Conventions
Other Resourcesbasic_istream Members
Show: | https://msdn.microsoft.com/en-US/library/d5t46152(v=vs.80).aspx | CC-MAIN-2015-48 | refinedweb | 208 | 57.77 |
Compile time Difference between // and /* */
Ramakrishna Nalla
Ranch Hand
Joined: Apr 21, 2005
Posts: 61
posted
Apr 21, 2005 15:53:00
0
Hi...
what is the difference between two type of comment lines between // and /*..*/
Check below code...and give me detail discription...
class Var{ public static void main(String args[]) { char c; //c='\u000a'; 'Line end' character so illegal literal /* and als0 '\u000d' */ //NOTE:comment using... //.. giving compile error,using...comment.. type.. /*..*/, no error c=65; System.out.println(c); } } /*Complile Errors: test.java:6: unclosed character literal //c='\u000a'; 'Line end' character so illegal literal ^ test.java:6: unclosed character literal //c='\u000a'; 'Line end' character so illegal literal ^ test.java:6: unclosed character literal //c='\u000a'; 'Line end' character so illegal literal ^ 3 errors */
No errors If I use /*...*/ type comment....please...help
Steven Bell
Ranch Hand
Joined: Dec 29, 2004
Posts: 1071
posted
Apr 21, 2005 16:39:00
0
It's not the comments that are causing the problem it's the unicode character.
The first thing the compiler does is replace all unicode characters with their value so the line:
//c='\u000a'; 'Line end' character so illegal literal
becomes
//c=' '; 'Line end' character so illegal literal
When you use /*...*/ The line becomes:
/*c=' '; 'Line end' character so illegal literal*/
understand?
[ April 21, 2005: Message edited by: Steven Bell ]
Ramakrishna Nalla
Ranch Hand
Joined: Apr 21, 2005
Posts: 61
posted
Apr 22, 2005 14:48:00
0
Thanks..
...One more doubt
class chara{ public static void main(String args[]) { char c='\145'; //octal representation which equal to 101 in decimal and that one in ASCII=e System.out.println("Char c (octal c=\'\\145\')="+c); //but char c='\400'; //giving Compile error: unclosed character literal System.out.println("Char c (octal c=\'\\400\')="+c); } }
As quoted in comments Compile time errors are comming.. Please explain...
Layne Lund
Ranch Hand
Joined: Dec 06, 2001
Posts: 3061
posted
Apr 22, 2005 15:27:00
0
I have a guess, but I suggest that you comment out that line and see if it compiles and runs. I bet the System.out.println() immediately afterwards will give you a clue to why you are getting this compiler error. Specifically, what character does \400 represent?
Layne
Java API Documentation
The Java Tutorial
Steven Bell
Ranch Hand
Joined: Dec 29, 2004
Posts: 1071
posted
Apr 22, 2005 15:35:00
0
I just don't understand why you would use such confusing syntax? Are you trying to make the code as difficult to read as possible?
Ramakrishna Nalla
Ranch Hand
Joined: Apr 21, 2005
Posts: 61
posted
Apr 23, 2005 13:07:00
0
Ok Bell I simplified the code, and I want to learn
java
through
SCJP
orient(Preparing for SCJP). Ok and my problem is:
class chara{ public static void main(String args[]) { char c='\145'; System.out.println("Char c =" +c); char c='\400'; System.out.println("Char c ="+c); } } /*Compile Errors char.java:7: unclosed character literal char c='\400'; ^ char.java:7: unclosed character literal char c='\400'; ^ 2 errors */
char c='\400'; reprsents octal notation which is equal to 256 decimal notation.(that one is beyond ASCII limit).
char c='\145'; -->Eqaul to 101 in Decimal notation.
As java uses UNICODE character set, why i am UNABLE to access above 255 character..What's my mistake
Edwin Keeton
Ranch Hand
Joined: Jul 10, 2002
Posts: 214
I like...
posted
Apr 23, 2005 16:14:00
0
The only mistake is that you can only escape octal constants up to '\377'. I don't know why really other than that octal escapes were originally provided for compatibility with C.
Unicode escapes (e.g., '\u0400') are the preferred way of expressing Unicode values.
SCJP, SCWCD
Ramakrishna Nalla
Ranch Hand
Joined: Apr 21, 2005
Posts: 61
posted
Apr 23, 2005 16:40:00
0
I tried by this statement : c='\u0400'; As i know that one is Hexadecimal notation, which is equal to 1024 in decimal..My Problem is::::: I just want to print character beyond ASCII range i.e above 255. But on my system all characters above no:255 smiply outputs a (question mark) '?'.
What I have to do
.. to Print any UNICODE character.
Jon Egan
Ranch Hand
Joined: Mar 24, 2004
Posts: 83
posted
Apr 23, 2005 21:57:00
0
I had a hunch and went down this trail... I think this is it (the problem, not the solution).
You are trying to System.out.println() the unicode character that is > 255, right?
First, look up System.out - it's a static member variable of System, of type
PrintStream
.
So then look up PrintStream.println(char x) - it says it behaves like print(char), followed by println().
So then, looked at print(char) - It says "Print a character. The character is
translated into one or more bytes according to the platform's default character encoding
, and these bytes are written in exactly the manner of the write(int) method"
So then I looked at write(int) - and it says it writes the byte "as given", and that if you want it encoded into the platform's default character encoding, use print(char) or println(char)...
But the point is, you
don't
want it encoded - you want it printed
as is
.
So I think what you're looking for is:
System.out.write('\u00FF'); // implicit char-to-int cast System.out.println();
for me, that didn't end up displaying anything (it's supposed to be a french lowercase y, with two dots, or "diaeresis", over it). I tried the following, to try printing all the characters "in the neighborhood" of '\u00FF':
public class DoIt { public static void main(String [] args) { for (char c = '\u00F0'; c <= '\u0114'; c++) { int i = c; System.out.print(i + " = ["); System.out.write(c); System.out.print("]"); System.out.println(); } } }
One seemed to be a bell, another seemed to be a backspace character... Which sounded to me like the beginning of the ASCII sequence, so I tried it again with the loop going from '\u0000' to '\u000F', and it was nearly (interestingly, it was not exactly) the same sequence of gibberish - dark smiley face, light smiley face, heart, diamond, club, spade...
And then I looked back at that method definition for PrintStream.write(int) again:
write(int) - ...writes the
byte
"as given"...
so even though you supply an int, it's going to interpret it (must be an explicit cast) as a byte... which means that after 255, we wrap around to 0. I found in the source for
PrintStream
that it's calling OutputStream.write(int), which is abstract, but the doc says:
"The general contract for write is that one byte is written to the output stream. The byte to be written is the eight low-order bits of the argument b.
The 24 high-order bits of b are ignored
."
Bottom line, I think you're out of luck trying to use System.out with anything bigger than a byte.
I considered looking into whether there was a way to write to a file instead, without the encoding, etc. And then I realized it was late, and I need to quit stalling, and get back to studying for SCJP, since my voucher expires on the 30th. Good luck from here ;-)
-- Jon
Ramakrishna Nalla
Ranch Hand
Joined: Apr 21, 2005
Posts: 61
posted
Apr 24, 2005 01:07:00
0
Thanx For your BIG reply
...As i am New to JAVA, So please Suggest me some good Stuff for SCJP...
Stuart Gray
Ranch Hand
Joined: Apr 21, 2005
Posts: 410
posted
Apr 24, 2005 01:11:00
0
I'm wondering if there isn't anything in the new
java.util.Formatter
class in 1.5 that might help (haven't had time to check these new features out myself yet).
subject: Compile time Difference between // and /* */
Similar Threads
dan exam dought
char declaration
//Comment in java
unicode character
why java compiler compile comments?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/399396/java/java/Compile-time-Difference | CC-MAIN-2013-48 | refinedweb | 1,355 | 64.81 |
CudaText API
Contents
- 1 Intro
- 2 Event plugins
- 3 Callback param
- 4 Global funcs
- 4.1 version
- 4.2 app_path
- 4.3 app_proc
- 4.4 app_log
- 4.5 app_idle
- 4.6 msg_box
- 4.7 msg_status
- 4.8 msg_status_alt
- 4.9 dlg_input
- 4.10 dlg_input_ex
- 4.11 dlg_file
- 4.12 dlg_dir
- 4.13 dlg_menu
- 4.14 dlg_color
- 4.15 dlg_hotkey
- 4.16 dlg_hotkeys
- 4.17 dlg_custom
- 4.18 dlg_proc
- 4.19 dlg_commands
- 4.20 file_open
- 4.21 file_save
- 4.22 ed_handles
- 4.23 ed_group
- 4.24 ini_read/ini_write
- 4.25 lexer_proc
- 4.26 tree_proc
- 4.27 listbox_proc
- 4.28 canvas_proc
- 4.29 timer_proc
- 4.30 menu_proc
- 4.31 toolbar_proc
- 4.32 statusbar_proc
- 4.33 imagelist_proc
- 4.34 image_proc
- 4.35 button_proc
- 4.36 more
- 5 Editor class
- 5.1 Carets
- 5.2 Text read/write
- 5.3 Selection
- 5.4 Properties
- 5.5 Misc
- 6 Tech info
- 7 FAQ
- 8 History
Intro
This is API for CudaText in Python.
- Main module: cudatext. Has constants, funcs, class Editor, objects of class Editor.
- Additional module: cudatext_cmd, constants for ed.cmd().
- Additional module: cudatext_keys, constants for on_key.
- Additional module: cudax_lib, some high-level functions for plugins.
Event plugins
To make plugin react to events, add method to class Command (like methods of command plugin). E.g. method "on_save" will be called by editor event "on_save". Event methods have param "ed_self": Editor object in which event occured (this object is the same as "ed" in 99% cases, but in some cases event occurs in inactive editor, e.g. "Save all tabs" calls "on_save" for inactive tabs).
General
- on_open(self, ed_self): Called after file is opened from disk.
- on_open_pre(self, ed_self, filename): Called before file opening. Method can return False to disable opening, other value is ignored.
- on_close_pre(self, ed_self): Called before closing tab, before checking if tab modified or not. Method can return False to disable closing.
- on_close(self, ed_self): Called before closing tab, after modified file was saved, and editor is still active.
- on_save_pre(self, ed_self): Called before saving file. Method can return False to disable saving, other value is ignored.
- on_save(self, ed_self): Called after saving file.
- on_start(self, ed_self): Called once on program start (ignore ed_self).
- on_tab_change(self, ed_self): Called after active tab is changed.
- on_tab_move(self, ed_self): Called after closing a tab (another tab is already activated), or moving a tab (by drag-n-drop, or UI command).
- on_exit(self, ed_self): Called on exiting application. This event is lazy, ie it's called only for already loaded plugins.
Editor
- on_change(self, ed_self): Called after editor text is changed.
- on_change_slow(self, ed_self): Called after editor text is changed, and few seconds (option) passed. Used in CudaLint plugin, it needs to react to change after a delay.
- on_caret(self, ed_self): Called after editor caret position and/or selection is changed.
- on_insert(self, ed_self, text): Called before inserting a text. Method can return False to disable insertion, other return value is ignored.
- on_key(self, ed_self, key, state): Called when user presses a key in editor. Param "key" is int key code; values are listed in the module cudatext_keys. Param "state" is string of chars: "a" if Alt pressed, "c" if Ctrl pressed, "s" if Shift pressed, "m" if Meta (Windows-key) pressed. Method can return False to disable key processing, other return value is ignored.
- on_key_up(self, ed_self, key, state): Called when user depresses a key in editor. Params meaning is the same as in "on_key". Currently called only for Ctrl/Shift/Alt keys (to not slow down).
- on_focus(self, ed_self): Called after any editor gets focus.
- on_lexer(self, ed_self): Called after any editor's lexer is changed.
- on_paste(self, ed_self, keep_caret, select_then): Called before some Clipboard Paste command runs. Parameters are options of various paste commands. Method can return False to disable default operation.
- on_scroll(self, ed_self): Called on scrolling in editor.
- on_mouse_stop(self, ed_self, x, y): Called when mouse cursor stops (for a short delay) over editor. Params "x", "y" are mouse editor-related coords.
- on_hotspot(self, ed_self, entered, hotspot_index): Called when mouse cursor moves in/out of hotspot. See ed.hotspots() API.
- on_state(self, ed_self, state): Called after some app state is changed. Param "state" is one of EDSTATE_nnnn or APPSTATE_nnnn constants.
- on_snippet(self, ed_self, snippet_id, snippet_text): Called when user chooses snippet to insert, in ed.complete_alt() call.
Editor clicks
- on_click(self, ed_self, state): Called after mouse click on text area. Param "state": same meaning as in on_key.
- on_click_dbl(self, ed_self, state): Called after mouse double-click on text area. Param "state": same meaning as in on_key. Method can return False to disable default processing.
- on_click_gutter(self, ed_self, state, nline, nband): Called on mouse click on gutter area. Param "state": same as in on_key. Param "nline": index of editor line. Param "nband": index of gutter band. Method can return False to disable default processing.
- on_click_gap(self, ed_self, state, nline, ntag, size_x, size_y, pos_x, pos_y): Called after mouse click on inter-line gap: see Editor.gap(). Param "state": same meaning as in on_key. Other params: properties of clicked gap.
Smart commands
- on_complete(self, ed_self): Called by auto-completion command (default hotkey: Ctrl+Space). Method should call Editor.complete API. Method must return True if it handled command, otherwise None.
- on_func_hint(self, ed_self): Called by function-hint command (default hotkey: Ctrl+Shift+Space). Method must return function-hint string (comma-separated parameters, brackets are optional). Empty str or None means no hint found.
- on_goto_def(self, ed_self): Called by go-to-definition command (in editor context menu). Method must return True if it handled command, otherwise None.
Panels
- on_console(self, ed_self, text): Called on entering text command in Python Console panel. Method can return False to disable internal command processing, other value is ignored. Ignore ed_self.
- on_console_print(self, ed_self, text): Called on adding line to Python Console memo. Also called on "print" commands. Method can return False to disable standard output to the Console panel. Ignore ed_self. Also called when app_log() clears console memo - with param text=None.
- on_console_nav(self, ed_self, text): Called on double-clicking line, or calling context menu item "Navigate", in Python Console panel. Ignore ed_self. Param "text" is line with caret.
- on_output_nav(self, ed_self, text, tag): Called on clicking line, or pressing Enter, in the Output or Validate panel. Ignore ed_self. Param "text" is clicked line, param "tag" is int value associated with line. Event is called only if app cannot parse output line by itself using given regex, or regex is not set.
Macros
- on_macro(self, ed_self, text): Called when command "macros: stop recording" runs. In text the "\n"-separated list of macro items is passed. These items were run after command "macros: start recording" and before command "macros: stop recording".
- if item is "number" then it's simple command.
- if item is "number,string" then it's command with str parameter (usually it's command cCommand_TextInsert).
- if item is "py:string_module,string_method,string_param" then it's call of Python plugin (usually string_param is empty).
Note: To run each on_macro item (with number) later, call ed.cmd(): number is command code, string after comma is command text.
Events priority
By default all events in all plugins have prior=0. So for any event, if 2+ plugins have this event, they're called in order on module names (e.g. cuda_aa, then cuda_dd, then cuda_mmm).
You may want to handle some event first. To change prior for your plugin event, write in install.inf event like this: "on_key+", "on_key++"... with any number of "+" up to 4. Each "+" makes higher prior. So first called "on_key" with maximal number of "+", last called "on_key" w/o "+".
Lazy events
By default all event handlers, except on_exit, are not "lazy". On_exit is always "lazy" - it means that it's called only for already loaded plugins. To make other event handlers lazy, write in install.inf event name with "~":
events=on_focus~,on_open~,on_state~
It's allowed to combine "+" and "~" suffixes, like on_focus~++
Callback param
Callback parameter is supported in several API functions: dlg_proc, timer_proc, menu_proc, button_proc (maybe more later).
Parameter can be in these forms:
- callable, i.e. name of a function
- string "module=mmm;cmd=nnn;" - to call method nnn (in class Command) in plugin mmm, where mmm is usually sub-dir in the "cudatext/py", but can be any module name
- string "module=mmm;cmd=nnn;info=iii;" - the same, and callback will get param "info" with your value
- string "mmm.nnn" - the same, to call method, short form
- string "module=mmm;func=nnn;" - to call function nnn in root of module mmm
- string "module=mmm;func=nnn;info=iii;" - the same, and callback will get param "info" with your value
For example:
- If you need to call function "f" from plugin cuda_my, from file "py/cuda_my/__init__.py", callback must be "module=cuda_my;func=f;"
- If you need to call it from file "py/cuda_my/lib/mod.py", callback must be "module=cuda_my.lib.mod;func=f;".
Value after "info=" can be of any type, e.g. "info=20;" will pass int 20 to callback, "info='my';" will pass string 'my'.
Global funcs
version
app_exe_version()
Gets version of program (string).
app_api_version()
Gets version of API (string, contains 3 numbers, dot-separated).
Example of version check:
from cudatext import * if app_api_version() < '1.0.140': msg_box('Plugin NN needs newer app version', MB_OK+MB_ICONWARNING)
app_path
app_path(id)
Gets some path. Possible values of id:
- APP_DIR_EXE: Dir of program executable.
- APP_DIR_SETTINGS: Dir "settings".
- APP_DIR_DATA: Dir "data".
- APP_DIR_PY: Dir "py" with Python files.
- APP_DIR_INSTALLED_ADDON: Dir of last installed addon (for plugins it is folder in "py", for data-files it is folder in "data", for lexers it is folder "lexlib"). This dir is updated only if addon installed via file_open() or from app (Open dialog or command line).
- APP_FILE_SESSION: Filename of current session. Default filename is "history session.json" without path. Missing path means that folder "settings" will be used.
- APP_FILE_RECENTS: Str: "\n"-separated filenames of recent files.
Note: to get path of dir "settings_default", use base dir of "settings".
app_proc
app_proc(id, text)
Performs application-wide action.
Parameter "text" is usually string, but can be of other types (bool, int, float, tuple/list of simple types).
Misc properties
- PROC_GET_LANG: Gets string id of active translation. E.g. "ru_RU" if translation file is "ru_RU.ini". Or "translation template" if such file is used. Empty string if built-in English translation is used.
- PROC_GET_GROUPING: Gets grouping mode in program, it's one of GROUPS_nnnn int contants.
- PROC_SET_GROUPING: Sets grouping mode in program, pass str(value) where value is one of GROUPS_nnnn int contants.
- PROC_GET_FIND_OPTIONS: Gets options of finder-object as string.
- PROC_SET_FIND_OPTIONS: Sets options of finder-object from string. Note: Find dialog don't apply these opts immediately.
- PROC_GET_FIND_STRINGS: Gets strings of finder-object, 2-tuple, (str_search_for, str_replace_with), or None if finder-object not inited.
- PROC_PROGRESSBAR: Change state of the progress-bar, which is located on the status-bar (hidden by default). If passed int value<0, progressbar hides, else it shows with the given value 0..100.
- PROC_GET_TAB_IMAGELIST: Gets int handle of imagelist, for file tabs. Use imagelist_proc() to work with it. Use PROP_TAB_ICON editor property to get/set icons of file tabs.
- PROC_GET_CODETREE: Gets int handle of Code Tree UI control. Use tree_proc() to work with it.
System
- PROC_ENUM_FONTS: Gets list of font names, currently installed in OS. Note: only some names are common in all OSes (like Arial, Courier, Courier New, Terminal, maybe more).
- PROC_GET_SYSTEM_PPI: Gets int value of screen pixels-per-inch. Usual value is 96. When OS UI is scaled, it's bigger, e.g. for scale 150% it is round(96*1.5).
- PROC_GET_GUI_HEIGHT: Gets height (pixels) of GUI element for dlg_custom(). Possible values of text: 'button', 'label', 'linklabel', 'combo', 'combo_ro', 'edit', 'spinedit', 'check', 'radio', 'checkbutton', 'filter_listbox', 'filter_listview'. Special value 'scrollbar' gets size of OS scrollbar. Gets None for other values.
- PROC_GET_MOUSE_POS: Gets mouse cursor position in screen-related coordinates, as (x, y) tuple.
- PROC_SEND_MESSAGE: For Windows. Sends message to a window by its handle. Param "text" must be 4-tuple of int (window_handle, msg_code, param1, param2).
Clipboard
- PROC_GET_CLIP: Gets clipboard text.
- PROC_SET_CLIP: Copies text to clipboard (usual).
- PROC_SET_CLIP_ALT: Copies text to alternate clipboard on Linux gtk2, called "primary selection" (also "secondary selection" exists, but no API for it here).
- CudaText copy commands put text to usual clipboard + primary selection.
- CudaText paste commands get text from usual clipboard.
- CudaText option "mouse_mid_click_paste" gets text from primary selection.
Plugin calls
- PROC_SET_EVENTS: Subscribe plugin to events. Param text must be 4 values ";"-separated: "module_name;event_list;lexer_list;keycode_list".
- event_list is comma-separated event names. e.g. "on_open,on_save", or empty str to unsubscribe from all.
- lexer_list is comma-separated lexer names.
- keycode_list is comma-separated int key codes, it can be used when on_key event was specified (event will fire only for specified key codes).
- PROC_GET_LAST_PLUGIN: Gets info about last plugin which run from program. It is str "module_name,method_name", values are empty if no plugins run yet.
- PROC_SET_SUBCOMMANDS: Adds command items for plugin (for fixed module/method). Param text must be ";"-separated: "module_name;method_name;param_list". Param_list is "\n"-separated items. Each item is s_caption+"\t"+s_param_value. s_caption is caption for Commands dialog item, s_param_value is parameter for Python method (it can be any primitive type: 20, None, False, 'Test here', or expression).
- PROC_GET_COMMANDS: Gets list of all commands, as list of dict. Dict keys are:
- "type": str: Possible values: "cmd" (usual command), "lexer" (dynamically added command to activate lexer), "plugin" (dynamically added plugin command).
- "cmd": int: Command code, to use in ed.cmd() call.
- "name": str: Pretty name, which is visible in the Commands dialog.
- "key1": str: Hotkey-1.
- "key2": str: Hotkey-2.
- "key1_init": str: Hotkey-1 from built-in config.
- "key2_init": str: Hotkey-2 from built-in config.
- (for plugins) "p_caption": str: Menu-item caption from install.inf.
- (for plugins) "p_module": str: Python module.
- (for plugins) "p_method": str: Python method in Command class.
- (for plugins) "p_method_params": str: Optional params for Python method.
- (for plugins) "p_lexers": str: Comma-separated lexers list.
- (for plugins) "p_from_api": bool: Shows that command was generated from API by some plugin.
- (for plugins) "p_in_menu": str: Value of parameter "menu" from install.inf.
Hotkeys
Notes:
- command_id can be: str(int_command_code) or "module,method" or "module,method,param".
Actions:
- PROC_GET_ESCAPE: Gets Esc-key state (bool). This flag is set when user presses Esc key (each Esc press is counted since app start). Note: to allow app to handle key press in long loop, call msg_status('text', True).
- PROC_SET_ESCAPE: Sets Esc-key flag. Text must be "0"/"1".
- PROC_GET_HOTKEY: Gets hotkeys strings for given command_id. Examples of result: "F1", "Ctrl+B * B", "Ctrl+B * B * C|Ctrl+B * D" (two hotkeys for one command). Gets empty str for unknown command_id.
- PROC_SET_HOTKEY: Sets hotkeys for given command_id from strings. Text must be "command_id|hotkey1|hotkey2", where hotkey1/hotkey2 examples: "F1", "Ctrl+B * B", "Ctrl+B * B * C". Symbol "*" can be without near spaces. Gets bool: command_id exists, changed.
- PROC_GET_KEYSTATE: Gets state of pressed special keys. String has "c" for Ctrl pressed, "a" for Alt, "s" for Shift, "m" for Meta (Windows key).
- PROC_HOTKEY_INT_TO_STR: Converts given int hotkey to string. Gets empty str for unknown code.
- PROC_HOTKEY_STR_TO_INT: Converts given string hotkey to int. Gets 0 for incorrect string. Example: converts "alt+shift+z" to "41050", which is then converted to "Shift+Alt+Z".
Python
- PROC_EXEC_PYTHON: Runs Python string in the context of Console (it is not the same as exec() call). Also gets string: result of command.
- PROC_EXEC_PLUGIN: Runs Python plugin's method. Text must be ","-separated: "module_name,method_name,param_string", where param_string can be missed, this is parameter(s) for method.
Themes
Notes:
- Theme names are short strings, lowercase, e.g. "cobalt", "sub". Empty string means built-in theme.
- Setting default theme (empty str) resets both UI + syntax themes.
- To enumerate themes, you need to list themes folder.
Actions:
- PROC_THEME_UI_GET: Gets name of UI-theme.
- PROC_THEME_UI_SET: Sets name of UI-theme.
- PROC_THEME_SYNTAX_GET: Gets name of syntax-theme.
- PROC_THEME_SYNTAX_SET: Sets name of syntax-theme.
- PROC_THEME_UI_DATA_GET: Gets contents of current UI-theme, as list of dict.
- PROC_THEME_SYNTAX_DATA_GET: Gets contents of current syntax-theme, as list of dict.
Sessions
Session is a set of opened files + untitled tabs, with some properties of editors in these tabs, with group-indexes of tabs, with layout of groups. Session format is JSON.
- PROC_SAVE_SESSION: Saves session to file with given name. Gets bool: session was saved.
- PROC_LOAD_SESSION: Loads session from file with given name (closes all tabs first). Gets bool: tabs closed w/o pressing Cancel, session was loaded.
- PROC_SET_SESSION: Tells to app filename of session. Session with this filename will be saved on exit, loaded on start, shown in app title in {} brackets. Don't set here empty string. Default filename is "history session.json" without path. Missing path means that folder "settings" will be used.
Side panel
- PROC_SIDEPANEL_ADD_DIALOG: Adds tab, with embedded dialog, to sidebar. Gets bool: params ok, tab was added. Text is 3-tuple (tab_caption, id_dlg, icon_filename):
- Caption of tab (must not contain ",")
- Handle of dialog, from dlg_proc()
- Name of icon file (must not contain ",")
- PROC_SIDEPANEL_REMOVE: Removes tab. Text is caption of tab. (Note: actually it hides tab, dialog for this tab is still in memory). Gets bool: tab was found/removed.
- PROC_SIDEPANEL_ACTIVATE: Activates tab in sidebar. Text must be a) tab caption or b) tuple (tab_caption, bool_set_focus). Default for bool_set_focus is False. Gets bool: tab was found/activated.
- PROC_SIDEPANEL_ENUM: Enumerates tabs. Gets str, "\n"-separated captions of tabs.
- PROC_SIDEPANEL_GET_CONTROL: Gets int handle of dialog, inserted into tab. Text is caption of tab. Gets None if cannot find tab.
Notes:
- Name of icon file: name of .png or .bmp file. Icon size should be equal to size of current sidebar theme, it's default is 20x20. If filename is without path, CudaText sub-dir "data/sideicons" is used.
Bottom panel
- PROC_BOTTOMPANEL_ADD_DIALOG: Adds tab. Same as for PROC_SIDEPANEL_ADD_DIALOG.
- PROC_BOTTOMPANEL_REMOVE: Removes tab. Same as for PROC_SIDEPANEL_REMOVE.
- PROC_BOTTOMPANEL_ACTIVATE: Activates tab. Same as for PROC_SIDEPANEL_ACTIVATE.
- PROC_BOTTOMPANEL_ENUM: Enumerates tabs. Same as for PROC_SIDEPANEL_ENUM.
- PROC_BOTTOMPANEL_GET_CONTROL: Gets int handle of control. Same as for PROC_SIDEPANEL_GET_CONTROL.
Splitters
Splitter id:
- SPLITTER_SIDE: splitter near side panel.
- SPLITTER_BOTTOM: splitter above bottom panel.
- SPLITTER_G1, SPLITTER_G2, SPLITTER_G3: splitters between groups.
Actions:
- PROC_SPLITTER_GET: Gets info about splitter, as 4-tuple: (bool_vertical, bool_visible, int_pos, int_parent_panel_size). Param "text" is int splitter id.
- PROC_SPLITTER_SET: Sets splitter pos. Param "text" is 2-tuple (int_splitter_id, int_splitter_pos).
Positions of group splitters (G1, G2, G3) are determined by grouping view, in one view splitter may be horizontal with one parent panel, in other view - vertical with another parent panel. Detailed:
2VERT t G1 t 2HORZ t G1 t 3VERT t G1 t G2 t 3HORZ t G1 t G2 t 1P2VERT t G3 t G2 t 1P2HORZ t G3 t G2 t 4VERT t G1 t G2 t G3 t 4HORZ t G1 t G2 t G3 t 4GRID t G1 t G3 t G2 t 6GRID t G1 t G2 t G3 t G1 t G2 t
Show/Hide/Undock UI elements
Actions can get/set state of UI elements (pass "0"/"1" or False/True):
- PROC_SHOW_STATUSBAR_GET
- PROC_SHOW_STATUSBAR_SET
- PROC_SHOW_TOOLBAR_GET
- PROC_SHOW_TOOLBAR_SET
- PROC_SHOW_SIDEBAR_GET
- PROC_SHOW_SIDEBAR_SET
- PROC_SHOW_SIDEPANEL_GET
- PROC_SHOW_SIDEPANEL_SET
- PROC_SHOW_BOTTOMPANEL_GET
- PROC_SHOW_BOTTOMPANEL_SET
- PROC_SHOW_TABS_GET
- PROC_SHOW_TABS_SET
- PROC_SHOW_FLOATGROUP1_GET
- PROC_SHOW_FLOATGROUP1_SET
- PROC_SHOW_FLOATGROUP2_GET
- PROC_SHOW_FLOATGROUP2_SET
- PROC_SHOW_FLOATGROUP3_GET
- PROC_SHOW_FLOATGROUP3_SET
- PROC_FLOAT_SIDE_GET
- PROC_FLOAT_SIDE_SET
- PROC_FLOAT_BOTTOM_GET
- PROC_FLOAT_BOTTOM_SET
Screen coordinates
Notes:
- When getting or setting coords, you get/set 4-tuple of int: (left, top, right, bottom).
Actions:
- PROC_COORD_WINDOW_GET: Gets coords of app window.
- PROC_COORD_WINDOW_SET: Sets coords of app window.
- PROC_COORD_DESKTOP: Gets coords of virtual desktop, which includes all monitors.
- PROC_COORD_MONITOR: Gets coords of monitor with app window.
- PROC_COORD_MONITOR0: Gets coords of 1st monitor.
- PROC_COORD_MONITOR1: Gets coords of 2nd monitor, or None if no such monitor.
- PROC_COORD_MONITOR2: Gets coords of 3rd monitor, or None if no such monitor.
- PROC_COORD_MONITOR3: Gets coords of 4th monitor, or None if no such monitor.
app_log
app_log(id, text, tag=0)
Controls standard panels in the bottom panel: Console, Output, Validate.
Possible values of id:
- LOG_CLEAR: Clears active log panel, text is ignored.
- LOG_ADD: Adds line to active log panel. Param "tag" is used here: int value associated with line, it's passed in on_output_nav.
- LOG_SET_PANEL: Sets active log panel. Text must be name of panel: LOG_PANEL_OUTPUT or LOG_PANEL_VALIDATE. Incorrect text will stop next operations with panels, until correct value is set.
- LOG_SET_REGEX: Sets parsing regex for active log panel. Regex must have some groups in round brackets, indexes of these groups must be passed in separate API calls. All lines in log panel, which can be parsed by this regex, will allow navigation to source code by click or double-click.
- LOG_SET_LINE_ID: Sets index of regex group for line-number. Param "text" is one-char string from "1" to "8", and "0" means "not used".
- LOG_SET_COL_ID: Sets index of regex group for column-number. Param "text".
- LOG_SET_NAME_ID: Sets index of regex group for file-name. Param "text".
- LOG_SET_FILENAME: Sets default file name, which will be used when regex cannot find file name in a string. Param "text".
- LOG_SET_ZEROBASE: Sets flag: line and column numbers are 0-based, not 1-based. Param "text" is one-char string "0" or "1".
- LOG_GET_LINES_LIST: Gets items in panel's listbox, as list of 2-tuples (line, tag).
- LOG_GET_LINEINDEX: Gets index of selected line in panel's listbox.
- LOG_SET_LINEINDEX: Sets index of selected line in panel's listbox.
For "Python Console" panel:
- LOG_CONSOLE_CLEAR: Clears UI controls in console.
- If text empty or has "m" then memo-control (above) is cleared.
- If text empty or has "e" then edit-control (below) is cleared.
- If text empty or has "h" then combobox history list is cleared.
- LOG_CONSOLE_ADD: Adds line to console (its combobox and memo).
- LOG_CONSOLE_GET_COMBO_LINES: Gets list of lines in combobox-control.
- LOG_CONSOLE_GET_MEMO_LINES: Gets list of lines in memo-control.
Example
For line "line 10 column 20: text message here" the following regex and indexes can be used:
- regex "\w+ (\d+) \w+ (\d+): .+"
- line-number index "1"
- column-number index "2"
- file-name index "0" (not used)
- zero-base flag "0" (off)
app_idle
app_idle(wait=False)
Performs application's message-processing. If wait=True, also waits for new UI event.
msg_box
msg_box(text, flags)
Shows modal message-box with given text.
Param flags is sum of button-value (OK, OK/Cancel, Yes/No etc) and icon-value (Info, Warning, Error, Question):
- MB_OK
- MB_OKCANCEL
- MB_ABORTRETRYIGNORE
- MB_YESNOCANCEL
- MB_YESNO
- MB_RETRYCANCEL
- MB_ICONERROR
- MB_ICONQUESTION
- MB_ICONWARNING
- MB_ICONINFO
Gets int code of button pressed:
- ID_OK
- ID_CANCEL
- ID_ABORT
- ID_RETRY
- ID_IGNORE
- ID_YES
- ID_NO
msg_status
msg_status(text, process_messages=False)
Shows given text in statusbar.
Param process_messages: if True, function also does UI messages processing. It takes some time, it is needed to refresh status of Esc-key pressed. After call msg_status(..., True) you can get state of Esc-key pressed via PROC_GET_ESCAPE, else plugin gets old state, until UI messages are processed.
msg_status_alt
msg_status_alt(text, seconds)
Shows given text in alternative statusbar; it has yellowish color and shows on the bottom of current editor.
Time in seconds: 1..30; seconds<=0 hides statusbar (if shown before).
dlg_input
dlg_input(label, defvalue)
Shows modal dialog to input one string.
Gets entered string or None of cancelled.
dlg_input_ex
dlg_input_ex(number, caption, label1 , text1="", label2="", text2="", label3="", text3="", label4="", text4="", label5="", text5="", label6="", text6="", label7="", text7="", label8="", text8="", label9="", text9="", label10="", text10="")
Shows modal dialog to enter 1 to 10 strings. Param number is count of strings.
Gets list of entered strings or None if cancelled.
dlg_file
dlg_file(is_open, init_filename, init_dir, filters)
Shows file-open or file-save-as modal dialog.
Gets filename (str) or None if cancelled. Params:
- is_open: True for open dialog, False for save-as dialog.
- init_filename: Initial filename for save-as dialog. Can be empty.
- init_dir: Initial dir for dialog. Can be empty.
- filters: Sets file filters for dialog. Can be empty. Example, 2 filters: "Texts|*.pas;*.txt|Include|*.inc"
To allow multi-select in open dialog, pass init_filename="*". If single filename selected, result is str. If several filenames selected, result is list of str.
To disable check "filename exists" in open dialog, start init_filename with "!".
dlg_dir
dlg_dir(init_dir)
Shows dialog to select folder. Gets folder path, or None if cancelled.
dlg_menu(id, items, focused=0, caption="")
Shows menu-like dialog. Gets index of selected item (0-based), or None if cancelled.
Possible values of "id":
- MENU_LIST: Dialog with listbox and filter.
- MENU_LIST_ALT: Dialog with listbox and filter, but each item has double height, and instead of right-aligning, 2nd part of an item shows below.
You can add to "id" values:
- MENU_NO_FUZZY: disable fuzzy search in the filter field.
- MENU_NO_FULLFILTER: disable filtering by 2nd part of items (part after "\t"), filter only by 1st part.
Parameters:
- Param "items": list of str, tuple of str, or string from joined items "\n".join(str_items). Each str item can be simple str or str1+"\t"+str2 (str2 shows right-aligned or below).
- Param "focused": index of initially selected item.
- Param "caption": if not empty str, caption will be shown above input box.
dlg_color
dlg_color(value)
Shows select-color dialog with given initial color (int).
Gets int color, or None if cancelled.
dlg_hotkey
dlg_hotkey(title="")
Shows dialog to press single hotkey.
Gets str of hotkey (e.g. "F1", "Ctrl+Alt+B") or None if cancelled.
dlg_hotkeys
dlg_hotkeys(command, lexer="")
Shows dialog to configure hotkeys of internal command or plugin. Gets bool: OK pressed and hotkeys saved (to keys.json).
Param command can be:
- str(int_command): for internal command codes (module cudatext_cmd).
- "module_name,method_name" or "module_name,method_name,method_param": for command plugin.
Param lexer is optional lexer name. If not empty, dialog enables checkbox "For current lexer" and (if checkbox checked) can save hotkey to "keys lexer NNNN.json".
dlg_custom
dlg_custom(title, size_x, size_y, text, focused=-1, get_dict=False)
Shows dialog with controls of many types.
Types
Types of UI controls:
- "button": simple button
- "label": simple read-only text
- "check": checkbox, checked/unchecked/grayed
- "radio": radio-button, only one of radio-buttons can be checked
- "edit": single-line input
- "edit_pwd": single-line input for password, text is masked
- "combo": combobox, editable + drop-down, value is text
- "combo_ro": combobox, drop-down only, value is index of drop-down
- "listbox": list of items with one item selected
- "checkbutton": looks like button, but don't close dialog, checked/unchecked
- "memo": multi-line input
- "checkgroup": group of check-boxes
- "radiogroup": group of radio-buttons
- "checklistbox": listbox with checkboxes
- "spinedit": input for numbers, has min-value, max-value, increment
- "listview": list with columns, with column headers, value is index
- "checklistview": listview with checkboxes, value is check-flags
- "linklabel": label which activates URL on click
- "panel": rectangle with centered caption only (client area is entire rect)
- "group": rectangle with OS-themed border and caption on border (client area is decreased by this border)
- "colorpanel": panel, with N-pixels colored border, with colored background
- "image": picture, which shows picture-file
- "trackbar": horiz/vert bar with handler, has position
- "progressbar": horiz/vert bar, only shows position
- "progressbar_ex": like progressbar, with new styles
- "filter_listbox": input, which filters content of another "listbox" control
- "filter_listview": input, which filter content of another "listview" control
- "bevel": control which shows only border (at one side, or all 4 sides), w/o value/caption
- "tabs": TabControl: set of tabs, w/o pages attached to them
- "pages": PageControl: set of tabs, with pages attached to them (only one of pages is visible)
- "splitter": divider bar, which can be dragged by mouse. It finds "linked control", ie control with the same "parent" and "align", nearest to splitter. On mouse drag, splitter resizes this linked control. Recommended to place another control with "align": ALIGN_CLIENT, so splitter will resize two controls at once.
Notes:
- Control property "name" is required for filter_ controls: must set name of listbox/listview, and name of its filter - to the same name with prefix "f_" (e.g. "name=mylistbox" with "name=f_mylistbox").
- Control "button_ex": to change advanced props, you must get handle via DLG_CTL_HANDLE, and pass it to button_proc().
- Control "treeview" don't have "items"/"value": to work with it, you must get handle of control via DLG_CTL_HANDLE, and pass it to tree_proc().
- Control "listbox_ex" don't have "items"/"value": to work with it, you must get handle of control via DLG_CTL_HANDLE, and pass it to listbox_proc().
- Control "toolbar" don't have "items"/"value": to work with it, you must get handle via DLG_CTL_HANDLE, and pass it to toolbar_proc().
- Control "statusbar" don't have "items"/"value": to work with it, you must get handle via DLG_CTL_HANDLE, and pass it to statusbar_proc().
- Control "editor" don't have "items"/"value": to work with it, you must get handle via DLG_CTL_HANDLE, and pass it to Editor() to make editor object.
- Control "paintbox" is empty area, plugin can paint on it. Get canvas_id via DLG_CTL_HANDLE, and use it in canvas_proc().
Properties
Parameter "text" is "\n"-separated items, one item per control. Each item is chr(1)-separated props in the form "key=value". Possible props:
- "type": type of control; must be specified first
- "cap": caption
- "x", "y": position, left/top
- "w", "h": size, width/height
- "w_min", "w_max": Constraints for width, min/max value.
- "h_min", "h_max": Constraints for height, min/max value.
- "pos": position, str in the form "left,top,right,bottom". Some one-line controls ignore bottom and do auto size. If specified "x/y/w/h" together with "pos", then last mentioned prop has effect.
- "en": enabled state, bool
- "vis": visible state, bool
- "hint": hint string for mouse-over. Can be multiline, "\r"-separated.
- "color": background color
- "autosize": control is auto-sized (by width and/or height, it's control-specific)
- "font_name": font name
- "font_size": font size
- "font_color": font color
- "font_style": font styles. String, each char can be: "b": bold, "i": italic, "u": underline, "s": strikeout.
- "name": optional name of control. It may be not unique for all controls.
- "act": active state, bool. For many controls (edit, check, radio, combo_ro, checkbutton, listbox'es, listview's, tabs), it means that control's value change fires events (for dlg_proc) or closes form (for dlg_custom).
- "ex0"..."ex9": advanced control-specific props. Described below.
- "props": deprecated; advanced control-specific props. Described below.
- "val": value of control. Described below.
- "items": list of items. Described below.
Prop "items"
Possible values of "items":
- combo, combo_ro, listbox, checkgroup, radiogroup, checklistbox, tabs: "\t"-separated lines
- listview, checklistview: "\t"-separated items.
- first item is column headers: title1+"="+size1 + "\r" + title2+"="+size2 + "\r" +...
- size1...sizeN can be with lead char to specify alignment of column: L (default), R, C
- other items are data: cell1+"\r"+cell2+"\r"+... (count of cells may be less than count of columns)
- image: full path of picture file (png/gif/jpg/bmp)
Action DLG_CTL_PROP_GET also gets key "items" (in the same format) for these controls: "listbox", "checklistbox", "listview", "checklistview".
Prop "columns"
- For "listview", "checklistview": "\t"-separated items, each item is "\r"-separated props of a column:
- caption (must be without "\t", "\r", "\n")
- width, as str
- (optional) minimal width, as str. 0 means "not used". Not supported on Windows.
- (optional) maximal width, as str. 0 means "not used". Not supported on Windows.
- (optional) alignment - one of "L", "R", "C"
- (optional) autosize - "0", "1"
- (optional) visible - "0", "1"
- For "radiogroup", it is number of vertical columns of radiobuttons.
Action DLG_CTL_PROP_GET also gets key "columns" in the same format.
Action DLG_CTL_PROP_SET, for "listview", allows both "items" (it sets columns) and "columns", and "columns" is applied last.
Prop "val"
- check: "0", "1" or "?" (grayed state)
- radio, checkbutton: "0", "1"
- edit, edit_pwd, spinedit, combo, filter_*: text
- memo: "\t"-separated lines, in lines "\t" must be replaced to chr(2)
- combo_ro, listbox, radiogroup, listview: current index
- checkgroup: ","-separated checked states ("0", "1")
- checklistbox, checklistview: index + ";" + checked_states
- tabs: index of active tab
- trackbar, progressbar: int position
Prop "props"
Deprecated. Tuple of advanced control-specific properties. See description of prop "ex". If some control supports 4 props, "ex0" to "ex3", then "props" must be 4-tuple for it. If some control supports only "ex0", then "props" must be one value, same as "ex0".
- For dlg_custom, "props" must be str with ","-separated items, and "0"/"1" for bool items.
- For dlg_proc, "props" must be simple type or tuple of simple types, e.g. (True, False, 2).
Prop "ex"
Props "ex0"..."ex9" are control-specific. They have different simple type (str, bool, int...).
- button:
- "ex0": bool: default for Enter key
- edit, memo:
- "ex0": bool: read-only
- "ex1": bool: font is monospaced
- "ex2": bool: show border
- spinedit:
- "ex0": int: min value
- "ex1": int: max value
- "ex2": int: increment
- label:
- "ex0": bool: right aligned
- linklabel:
- "ex0": str: URL. Should not have ",". Clicking on http:/mailto: URLs should work, result of clicking on other kinds depends on OS.
- listview:
- "ex0": bool: show grid lines
- radiogroup:
- "ex0": int: 0: arrange items horizontally then vertically (default); 1: opposite
- tabs:
- "ex0": bool: show tabs at bottom
- colorpanel:
- "ex0": int: border width (from 0)
- "ex1": int: color of fill
- "ex2": int: color of font
- "ex3": int: color of border
- filter_listview:
- "ex0": bool: filter works for all columns
- image:
- "ex0": bool: center picture
- "ex1": bool: stretch picture
- "ex2": bool: allow stretch in
- "ex3": bool: allow stretch out
- "ex4": bool: keep origin x, when big picture clipped
- "ex5": bool: keep origin y, when big picture clipped
- trackbar:
- "ex0": int: orientation (0: horz, 1: vert)
- "ex1": int: min value
- "ex2": int: max value
- "ex3": int: line size
- "ex4": int: page size
- "ex5": bool: reversed
- "ex6": int: tick marks position (0: bottom-right, 1: top-left, 2: both)
- "ex7": int: tick style (0: none, 1: auto, 2: manual)
- progressbar:
- "ex0": int: orientation (0: horz, 1: vert, 2: right-to-left, 3: top-down)
- "ex1": int: min value
- "ex2": int: max value
- "ex3": bool: smooth bar
- "ex4": int: step
- "ex5": int: style (0: normal, 1: marquee)
- "ex6": bool: show text (only for some OSes)
- progressbar_ex:
- "ex0": int: style (0: text only, 1: horz bar, 2: vert bar, 3: pie, 4: needle, 5: half-pie)
- "ex1": int: min value
- "ex2": int: max value
- "ex3": bool: show text
- "ex4": int: color of background
- "ex5": int: color of foreground
- "ex6": int: color of border
- bevel:
- "ex0": int: shape (0: sunken panel, 1: 4 separate lines - use it as border for group of controls, 2: top line, 3: bottom line, 4: left line, 5: right line, 6: no lines, empty space)
- splitter:
- "ex0": bool: paint border
- "ex1": bool: on mouse drag, use instant repainting (else splitter paints after mouse released)
- "ex2": bool: auto jump to edge, when size of linked control becomes < min size
- "ex3": int: mininal size of linked control (controlled by splitter)
Result
Dialog is closed by clicking any button or by changing of any control which has "act=1".
- If cancelled, gets None
- If get_dict=True, gets dict: {0: str_value_0, 1: str_value_1, ..., 'clicked': N, 'focused': N}
- If get_dict=False, gets 2-tuple: (clicked_index, state_text)
- clicked_index: index of control which closed dialog (0-based).
- state_text: "\n"-separated values of controls. Same count of items as in text, plus optional additional lines in the form "key=value". Line "focused=N" with index of last focused control (0-based) or -1.
Notes
- Property "type" must be first.
- Property "act" must be set after "val".
- Controls sizes differ on Win/Linux/OSX, picture shows controls (Linux/Win) auto-aligned, not in CudaText, only example app:
- So it's good to use common height for single-line controls, 30 pixels is ok for edit/button/check/radio, 36 for combobox.
- Control "tabs" height cannot auto-size, please make correct height for control (it is usually like big buttons).
dlg_proc
dlg_proc(id_dialog, id_action, prop="", index=-1, index2=-1, name="")
Advanced work with dialogs (forms). More advanced than dlg_custom(), forms can show modal/nonmodal, controls can change value during form showing.
If an action needs control, you can use 2 ways:
- set param "name" to control's name (name is searched if not empty),
- set param "index" to control's index (you should use DLG_CTL_FIND to find it).
Types
Possible types of UI controls are described in dlg_custom, see #Types. Additional types only for dlg_proc:
- "button_ex": button, application-themed, works via button_proc()
- "editor": full featured editor, works via Editor class
- "listbox_ex": listbox, application-themed, works via listbox_proc()
- "paintbox": control which must be painted by plugin via canvas_proc()
- "statusbar": statusbar: panel with one horizontal row of cells, works via statusbar_proc()
- "toolbar": toolbar: panel which holds buttons with icons, works via toolbar_proc()
- "treeview": tree structure with nodes and nested nodes, works via tree_proc()
Form properties
- "cap": str: Caption of form.
- "x", "y": int: Position (screen coordinates), left/top.
- "w", "h": int: Size, width/height.
- "w_min", "w_max": int: Constraints for width, min/max value.
- "h_min", "h_max": int: Constraints for height, min/max value.
- "tag": str: Any string, set by plugin.
- (deprecated) "resize": bool: Sets border of form to "resizable" style.
- "border": int: Sets border of form, DBORDER_nnn constants. Don't specify it together with deprecated "resize".
- "topmost": bool: Makes form stay on top of other forms in CudaText.
- "vis": bool: Visible state.
- "color": int: Background color.
- "autosize": bool: Form resizes to minimal size, which shows all visible controls. Don't use it together with "resize".
- "keypreview": bool: If on, then key press calls on_key_down before passing key to focused control. Should be True if form needs to handle on_key_down.
- "p": int: Parent handle. Set this property to int handle of any windowed UI control or form, this control/form will be parent of form.
Form events
These are also properties of forms.
- "on_resize": Called after form is resized.
- "on_close": Called after form is closed.
- "on_close_query": Called to ask plugin, it is allowed to close form (in any way: x-icon, Alt+F4, Esc etc). If plugin returns False: not allowed, other value: allowed.
- "on_show": Called after form shows.
- "on_hide": Called after form hides.
- "on_act": Called when form is activated.
- "on_deact": Called when form is deactivated.
- "on_mouse_enter": Called when mouse cursor enters form area.
- "on_mouse_exit": Called when mouse cursor leaves form area.
- "on_key_down": Called when key is pressed in form (form should have "keypreview":True). If plugin returns False, key is blocked.
- param "id_ctl": int key code.
- param "data": key-state string: few chars, "c" for Ctrl, "a" for Alt, "s" for Shift, "m" for Meta.
- "on_key_up": Called when key is depressed (after on_key_down). If plugin returns False, depressing of key is blocked.
How to get nice key description in on_key_down, e.g. "Ctrl+Alt+Esc" from id_ctl=27 and data="ca":
str_key =\ ('Meta+' if 'm' in data else '')+\ ('Ctrl+' if 'c' in data else '')+\ ('Alt+' if 'a' in data else '')+\ ('Shift+' if 's' in data else '')+\ app_proc(PROC_HOTKEY_INT_TO_STR, id_ctl)
Control properties
- "name": str: Optional name of control, to find control later by name. May be not unique for controls.
- "cap": str: Caption.
- "x", "y": int: Position (coordinates relative to dialog), left/top.
- "w", "h": int: Size, width/height.
- "en": bool: Enabled state.
- "vis": bool: Visible state.
- "color": int: Color.
- "border": bool: Control has border.
- "font_name": str: Font name.
- "font_size": int: Font size.
- "font_color": int: Font color.
- "hint": str: Hint (tooltip) for mouse-over. Newlines must be "\r".
- "ex0"..."ex9": Advanced control-specific props. Described in dlg_custom.
- "props": Deprecated. Advanced control-specific props. Described in dlg_custom.
- "items": str: Usually tab-separated items. Described in dlg_custom.
- "val": str: Value of control. Described in dlg_custom.
- "tag": str: Any string, set by plugin.
- "focused": bool: Shows if control has focus.
- "tab_stop": bool: Allows tab-key to jump to this control.
- "tab_order": int: Tab-key jumps to controls using tab_orders. First activated is control with tab_order=0, next with =1, etc. If tab_orders not set, controls activated by creation order.
- "sp_l", "sp_r", "sp_t", "sp_b", "sp_a": int: Border spacing, ie padding of control's edge from anchored controls (or parent form). 5 props here: left, right, top, bottom, around. "Around" padding is added to padding of all 4 sides.
- "a_l", "a_r", "a_t", "a_b": 2-tuple (str_control_name, str_side): Anchors of control. See #Anchors. Value is 2-tuple, or None to disable anchor.
- Item-0: name of target control, or empty str to use control's parent (it is form by default).
- Item-1: side of target control, one of 3 values: "[" (left/top), "-" (center), "]" (right/bottom).
- "align": alignment of control:
- ALIGN_NONE: no alignment (props "x", "y", "w", "h" have meaning)
- ALIGN_CLIENT: stretched to entire parent area (props "x", "y", "w", "h" are ignored)
- ALIGN_LEFT: glued to the left side of parent (only prop "w" has meaning)
- ALIGN_RIGHT: glued to the right side of parent (only prop "w" has meaning)
- ALIGN_TOP: glued to the top of parent (only prop "h" has meaning)
- ALIGN_BOTTOM: glued to the bottom of parent (only prop "h" has meaning)
- "p": str: Name of control's parent, or empty to use the form. Coordinates x/y of control are relative to the current parent. If parent's position changes, control's position don't change. To place control on PageControl's page with index N, specify such parent name: pages_name+"."+str(N).
Control events
These are also properties of controls.
- "on_change": Called after "value" of control is changed.
- "on_click": Called after mouse click on control, for non-active controls, which don't change "value" by click. Param "data" is tuple (x, y) with control-related coordinates of click.
- "on_click_dbl": Called after mouse double-click on control. Param "data" is tuple (x, y) with control-related coordinates.
- "on_mouse_down": Called when mouse button pressed. Param "data" is dict: { "btn": int_button_code, "state": str_keyboard_state, "x": int, "y": int }. Button code: 0: left; 1: right; 2: middle. Keyboard state: value like in global event "on_key".
- "on_mouse_up": Called when mouse button depressed (released). Param "data": like in "on_mouse_down".
- "on_mouse_enter": Called when mouse cursor enters control area.
- "on_mouse_exit": Called when mouse cursor leaves control area.
- "on_menu": Called before showing context menu after right click.
Events for control "listview"/"checklistview":
- "on_click_header": Called when user clicks on column header. Param "data": int column index.
- "on_select": Called after selection is changed. Param "data" is tuple: (int_item_index, bool_item_selected).
Events for control "listbox_ex":
- "on_draw_item": Called if listbox is owner-drawn. Param "data" is dict: { "canvas": canvas_id, "index": int_item_index, "rect": item_rectangle_4_tuple }.
Events for control "treeview":
- "on_fold", "on_unfold": Called before treeview node is folded/unfolded. Param "data" is int handle of treeview node.
- "on_select": Called after selection is changed.
Events for control "editor":
- "on_change": Called after text is changed.
- "on_caret": Called after caret position and/or selection is changed.
- "on_scroll": Called after editor is scrolled.
- "on_key_down": Called when user presses a key. Param "data" is tuple (int_key_code, str_key_state). Method can return False to disable default processing.
- "on_key_up": Called when user depresses a key. Param "data" is tuple (int_key_code, str_key_state). Method can return False to disable default processing.
- "on_click_gutter": Called on mouse click on gutter area. Param "data" is dict: {"state": str_key_state, "line": int_line_index, "band": int_gutterband_index}.
- "on_click_gap": Called on mouse click on inter-line gap. Param "data" is dict: {"state": str_key_state, "line": int, "tag": int, "gap_w": int, "gap_h": int, "x": int, "y": int}.
- "on_paste": Called before doing one of Paste commands. Param "data" is dict: {"keep_caret": bool, "sel_then": bool}. Method can return False to disable default operation.
Callbacks
Values of events are callbacks, must be in one of these forms: #Callback_param.
Callbacks for dlg_proc must be declared as:
#function def my(id_dlg, id_ctl, data='', info=''): pass #method class Command: def my(self, id_dlg, id_ctl, data='', info=''): pass
Parameters:
- id_dlg: Int handle of form.
- id_ctl: Int index of control. Used only for control events.
- data: Value, specific to event.
- info: Value from extended form of callback string.
Actions
Param prop: it can be of any simple type (str, int, bool), also tuple/list (of any simple type), also dict (keys: str, values: simple type or tuple/list). Most used is dict. Example: prop={"cap": "...", "x": 10, "y": 10, "w": 600, "en": False}.
Param id_dialog: int, form handle. Ignored only for DLG_CREATE (pass 0).
Possible values of id_action:
- DLG_CREATE: Creates new form, gets form handle.
- DLG_HIDE: Hides form.
- DLG_FREE: Hides and deletes form.
- DLG_SHOW_MODAL: Shows form in modal mode. Waits for form to hide, then returns.
- DLG_SHOW_NONMODAL: Shows form in non-modal mode. Returns immediately.
- DLG_FOCUS: Focuses form (in non-modal mode).
- DLG_SCALE: Scales form, with all controls, for the current OS high-DPI value. E.g. of OS scale is 150%, all will be scaled by 1.5.
- DLG_PROP_GET: Gets form props, as dict. See example plugin, which props are returned.
- DLG_PROP_SET: Sets form props, from dict. Param "prop" is dict. Only props mentioned in "prop" are applied, other props don't change.
- DLG_DOCK: Docks (inserts) form into another form. Param "index" is handle of another form, and 0 means main CudaText form. Param "prop" can be: "L", "R", "T", "B" for sides left/right/top/bottom (default is bottom).
- DLG_UNDOCK: Undocks form from it's current parent form.
- DLG_CTL_COUNT: Gets count of controls on form.
- DLG_CTL_ADD: Adds new control to form, gets its index, or None if cannot add. Param "prop" is type of control. See description in dlg_custom.
- DLG_CTL_PROP_GET: Gets control props, as dict. Control must be specified by name or index.
- DLG_CTL_PROP_SET: Sets control props. Control must be specified by name or index. Param "prop" is dict with props. Only props mentioned in "prop" are applied, other props don't change. To "reset" some props, you must mention them with some value.
- DLG_CTL_FOCUS: Focuses control. Control must be specified by name or index.
- DLG_CTL_DELETE: Deletes control. Control must be specified by name or index. Controls are stored in list, so after a control deleted, indexes of next controls shift by -1. So don't use fixed indexes if you delete some, use DLG_CTL_FIND.
- DLG_CTL_DELETE_ALL: Deletes all controls.
- DLG_CTL_FIND: Gets index of control by name, or -1 if cannot find. Param "prop" is name.
- DLG_CTL_HANDLE: Gets int handle of control. Control must be specified by name or index. This handle is currently useful for types:
- type "button_ex": pass handle to button_proc()
- type "editor": pass handle to Editor() constructor
- type "listbox_ex": pass handle to listbox_proc()
- type "paintbox": pass handle to canvas_proc()
- type "statusbar": pass handle to statusbar_proc()
- type "treeview": pass handle to tree_proc()
- type "toolbar": pass handle to toolbar_proc()
- DLG_COORD_LOCAL_TO_SCREEN: Converts x/y coordinates from form-related, to screen-related. Param "index" is x, "index2" is y. Gets tuple (x,y).
- DLG_COORD_SCREEN_TO_LOCAL: Converts x/y coordinates from screen-related, to form-related. Param "index" is x, "index2" is y. Gets tuple (x,y).
Anchors
Anchor is attaching of control's side to another control, or to the parent form, so control is auto positioned, initially and on form resize. Lazarus IDE has such Anchor Editor dialog:
In this dialog you see, that all 4 sides of control attach to one of 3 sides of another control (or parent form).
- Anchors override absolute positions, e.g. anchor of left side overrides prop "x".
- Anchoring to invisible control is allowed.
- Anchoring circles (A to B to C to A) is not allowed, but should not give errors.
To change anchors of control, set its properties: a_l, a_r, a_t, a_b. Initially left/top anchors are set (to the parent form).
Side value "[" aligns control to left/top side of target:
+--------+ | target | +--------+ +--------------+ | control | +--------------+
Side value "]" aligns control to right/bottom side of target:
+--------+ | target | +--------+ +--------------+ | control | +--------------+
Side value "-" centers control relative to target:
+--------+ | target | +--------+ +--------------+ | control | +--------------+
Example: to attach "colorpanel" to the right side of form, clear left anchor (set to None), and add right/bottom anchors. This also sets spacing-aroung (padding) to 6 pixels.
#attach colorpanel to the right dlg_proc(id_form, DLG_CTL_PROP_SET, index=n, prop= { 'a_l': None, 'a_r': ('', ']'), 'a_b': ('', ']'), 'sp_a': 6 } )
Example
Detailed demo plugin exists, it shows many dlg_proc actions, shows modal/nonmodal forms, uses callbacks, moves control by button click, moves control on form resize. It is in the CudaText repo with name "cuda_testing_dlg_proc".
dlg_commands
dlg_commands(options)
Show commands dialog, which is like customizable version of F1 dialog in CudaText.
Param options is sum of int flags:
- COMMANDS_USUAL - Show usual commands, which have int codes. Function gets "c:"+str(int_command) for them.
- COMMANDS_PLUGINS - Show plugins commands. Function gets "p:"+callback_string for them.
- COMMANDS_LEXERS - Show lexers pseudo-commands. Function gets "l:"+lexer_name for them.
- COMMANDS_CONFIG - Allow to call Configure Hotkeys dialog by F9 key.
Gets string if command selected, or None if cancelled.
file_open
file_open(filename, group=-1, options="")
Opens editor tab with given filename. If filename already opened, activates its tab. Pass empty str to open untitled tab. Gets bool: filename is empty or file successfully opened. For zip files, gets True only if add-on (in zip file) was installed.
- Param "group": index of tab-group (0-based), default means "current group". If you pass index of currently hidden group, group won't show, you need to call editor command to show it, see #cmd.
- Param "options": string:
- If it has "/preview", then file opens in a "temporary preview" tab, with italic caption. Param "group" is ignored then, used 1st group.
- If it has "/nohistory", file's saved history (caret, scroll pos, etc) won't be used.
- If it has "/noevent", then "on_open_pre" event won't fire.
- If it has "/silent", then zipped add-on will install w/o prompt and report.
- If it has "/passive", then tab will not activate, it will be passive tab.
- If it has "/nonear", then app option "ui_tab_new_near_current" will be ignored, tab will append to end.
- If it has "/view-text', then file will open in internal viewer, text (variable width) mode.
- If it has "/view-binary', then file will open in internal viewer, binary (fixed width) mode.
- If it has "/view-hex', then file will open in internal viewer, hex mode.
- If it has "/view-unicode', then file will open in internal viewer, unicode (variable width) mode.
Note: "ed" is always the current editor, after file_open() current editor changes, and "ed" is the new cur editor.
Example opens untitled tab, and writes multi-line text to it:
file_open('') ed.set_text_all(text)
file_save
file_save(filename="")
Saves current tab to disk.
Shows save-as dialog for untitled tab. If param "filename" not empty, it overrides current file name, and untitled tab becomes titled. Gets bool: file was saved.
ed_handles
ed_handles()
Gets range object: it contains int handles of all editor tabs. Pass each handle to Editor() to make editor object from handle.
Example code, which shows filenames of all tabs:
#show all file names in console for h in ed_handles(): e = Editor(h) print(e.get_filename())
ed_group
ed_group(index)
Gets Editor object for active editor in tab-group with given group-index. Group-index: currently 0..5. Gets None for incorrect index, or if no tabs in this group.
ini_read/ini_write
ini_read(filename, section, key, value) ini_write(filename, section, key, value)
Reads or writes single string to ini file. Params:
- filename: Path of ini file. Can be name w/o path, this means that path of "settings" dir is used.
- section: str: Section of ini file.
- key: str: Key in section.
- value: str:
- on read: default value which is returned if no such filename/section/key was found.
- on write: value to write.
On read: gets string value. On write: gets None.
lexer_proc
lexer_proc(id, value)
Perform some lexer-related action.
Possible values of id:
- LEXER_GET_LEXERS: Gets list of lexers. Param "value" must be bool: allow to include also hidden lexers (unchecked in the Lexer Library dialog).
- LEXER_GET_PROP: For given lexer name, gets its properties as dict. For incorrect lexer name, gets None. Keys of dict:
- "en": bool: lexer is visible in the lexers menu.
- "typ": list of str: list of file-types (they detect lexer when file loads; "ext" is simple extension, "ext1.ext2" is double extension, "/fullname" is name w/o path).
- "st": list of str: list of all styles.
- "st_c": list of str: list of styles of syntax comments (e.g. used by Spell Checker).
- "st_s": list of str: list of styles of syntax strings (e.g. used by Spell Checker).
- "sub": list of str: list of sub-lexers (some items can be empty if lexer setup broken).
- "c_line": str or None: line comment (until end-of-line).
- "c_str": 2-tuple or None: stream comment (for any range).
- "c_lined": 2-tuple or None: comment for full lines.
- LEXER_DETECT: Detects lexer name by given file name. Gets None if cannot detect. Function sees file extension, or even filename before extension (e.g. "/path/makefile.gcc" gives "Makefile").
- LEXER_REREAD_LIB: Re-reads lexer library from disk, updates lexer menu. Make sure that plugins' dialogs don't use editor with lexer, which may crash.
tree_proc
tree_proc(id_tree, id_action, id_item=0, index=0, text="", image_index=-1)
Perform action on treeview UI-control.
- Param id_tree is int handle of treeview.
- Param id_item is int handle of tree-item. Can be 0 for invisible root-item:
- can clear entire tree using root-item
- can enumerate root level using root-item.
Possible values of id_action:
- TREE_ITEM_ENUM: Enumerates subitems on given item. Param id_item. Gets list of 2-tuples: (int_handle, str_caption), or None.
- TREE_ITEM_ADD: Adds subitem as item's child. Param id_item. Param index: at which subitem index to insert (0-based), or -1 to append. Param text: caption of item. Param image_index: index in tree's icon list or -1 to not show icon. Gets int handle of subitem.
- TREE_ITEM_DELETE: Deletes item (with all subitems). Param id_item.
- TREE_ITEM_SET_TEXT: Sets item's text. Params: id_item, text.
- TREE_ITEM_SET_ICON: Sets item's icon. Params: id_item, image_index.
- TREE_ITEM_SELECT: Selects item.
- TREE_ITEM_SHOW: Makes item visible, ie scrolls control to this item.
- TREE_ITEM_GET_SELECTED: Gets int handle of selected item (id_item ignored). Gets None if none selected.
- TREE_ITEM_GET_PROPS: Gets props of item, as dict. Dict keys are:
- "text": str: caption of item
- "icon": int: index of icon in tree's imagelist object, or -1 for none
- "level": int: how deep this item is nested (how many parents this item has)
- "parent": int: id of parent item, or 0 if no parent
- "folded": bool: is this item folded (item itself, not parents)
- "selected": bool: is this item selected
- "sub_items": bool: item has sub-items
- "index": int: index of item, relative to its branch
- "index_abs": int: absolute index of item, relative to root
- TREE_ITEM_FOLD: Folds item w/o subitems.
- TREE_ITEM_FOLD_DEEP: Folds item with subitems. Root-item allowed too.
- TREE_ITEM_FOLD_LEVEL: Folds all items (id_item ignored) from level with specified index, 1 or bigger. (This is what CudaText commands "fold level N" do for code-tree).
- TREE_ITEM_UNFOLD: Unfolds item w/o subitems.
- TREE_ITEM_UNFOLD_DEEP: Unfolds item with subitems. Root-item allowed too.
- TREE_GET_IMAGELIST: Gets int handle of image-list object.
- TREE_PROP_SHOW_ROOT: This allows to hide lines for invisible root-item. If hidden, tree (folded) looks like listbox. Param text: "0"/"1" to hide/show.
- TREE_LOCK: Disables repainting of control.
- TREE_UNLOCK: Enables repainting of control.
- TREE_THEME: Applies current color theme to control.
- TREE_ITEM_GET_RANGE: Should be used only for Code Tree. Gets range, stored in tree-item, as 4-tuple (start_x, start_y, end_x, end_y). If range is not set, gets (-1,-1,-1,-1).
- TREE_ITEM_SET_RANGE: Should be used only for Code Tree. Sets range for tree-item. Param "text" must be 4-tuple of int (start_x, start_y, end_x, end_y).
listbox_proc
listbox_proc(id_listbox, id_action, index=0, text="", tag=0)
Perform action on listbox UI-control.
- Param id_listbox: int handle of listbox.
- Param index: index of item (0-base).
Possible values of id_action:
- LISTBOX_GET_COUNT: Gets number if items.
- LISTBOX_ADD: Adds item (str) with associated tag (int). Param index: at which index to add, -1 to append.
- LISTBOX_DELETE: Deletes item with given index.
- LISTBOX_DELETE_ALL: Deletes all items.
- LISTBOX_GET_ITEM: Gets item with given index as 2-tuple (text, tag). Gets None if index incorrect.
- LISTBOX_SET_ITEM: Sets item with given index, to given text and tag.
- LISTBOX_GET_ITEM_H: Gets height of items (pixels).
- LISTBOX_SET_ITEM_H: Sets height of items. Param index: size in pixels.
- LISTBOX_GET_SEL: Gets selected index. -1 for none.
- LISTBOX_SET_SEL: Sets selected index. -1 for none.
- LISTBOX_GET_TOP: Gets index of top visible item.
- LISTBOX_SET_TOP: Sets index of top visible item.
- LISTBOX_GET_DRAWN: Gets bool: listbox is owner-drawn, ie it don't paint itself, but plugin must paint it via event on_draw_item.
- LISTBOX_SET_DRAWN: Sets owner-drawn state. Param index should be 0 or 1 (off/on).
- LISTBOX_THEME: Applies current color theme to control.
canvas_proc
canvas_proc(id_canvas, id_action, text="", color=-1, size=-1, x=-1, y=-1, x2=-1, y2=-1, style=-1, p1=-1, p2=-1)
Performs action on canvas (drawing surface of some GUI control). Id_canvas is handle of canvas of some GUI control. Special value 0 means testing empty panel, it appears at the top of app, when used.
Possible values of id_action:
- CANVAS_SET_FONT: Sets props of font. Params:
- text - font name;
- color;
- size;
- style - 0 for normal, or sum of values FONT_B (bold), FONT_I (italic), FONT_U (underline), FONT_S (strikeout)
- CANVAS_SET_PEN: Sets props of pen. Params:
- color;
- size;
- style - one of PEN_STYLE_nnnn;
- p1 - end caps style - one of PEN_CAPS_nnnn;
- p2 - line joining style - one of PEN_JOIN_nnnn
- CANVAS_SET_BRUSH: Sets props of brush. Params:
- color;
- style - one of BRUSH_nnnn. Usually used: BRUSH_SOLID (filled background), BRUSH_CLEAR (transparent background).
- CANVAS_SET_ANTIALIAS: Sets anti-aliasing mode of canvas. Params: style - ANTIALIAS_NONE, _ON, _OFF.
- CANVAS_GET_TEXT_SIZE: Gets size of text on canvas, as 2-tuple (size_x, size_y). Uses font. Params: text.
- CANVAS_TEXT: Paints text at given coords. Uses font and brush. Params: text, x, y.
- CANVAS_LINE: Paints line at given coords. Uses pen. Params: x, y, x2, y2.
- CANVAS_PIXEL: Paints one pixel at given coords. Params: x, y, color.
- CANVAS_RECT: Paints rectangle. Uses pen and brush. Params: x, y, x2, y2.
- CANVAS_RECT_FRAME: Paints rectangle. Uses only pen. Params: x, y, x2, y2.
- CANVAS_RECT_FILL: Paints rectangle. Uses only brush. Params: x, y, x2, y2.
- CANVAS_RECT_ROUND: Paints rounded rectangle. Uses pen and brush. Params: x, y, x2, y2, style - radius of corners.
- CANVAS_ELLIPSE: Paints ellipse or circle. Uses pen and brush. Params: x, y, x2, y2.
- CANVAS_POLYGON: Paints polygon from any number of points (>2). Uses pen and brush. Params: text - comma separated list of (x,y) coords. Example: "10,10,200,50,10,100" - 3 points.
- CANVAS_SET_TESTPANEL: Sets height of testing panel at the top. Params: size. If it's "too small", panel hides; for big size, size is limited.
timer_proc
timer_proc(id, callback, interval, tag="")
Perform action on timers. Many different timers allowed, they work at the same time, each uniq callback - makes new timer with its own interval. To stop some timer, you must specify same callback as on start.
- callback: Callback string, see below.
- interval: Timer delay in msec, 150 or bigger. Specify it only on starting (ignored on stopping).
- tag: Some string, if not empty, it will be parameter to callback. If it's empty, callback is called without params.
Possible values of id:
- TIMER_START - Create (if needed) and start timer, infinite ticks. If timer already created, then it's restated.
- TIMER_START_ONE - Create (if needed) and start timer, for single tick.
- TIMER_STOP - Stop timer (timer must be created before).
- TIMER_DELETE - Stop timer, and delete it from list of timers. Usually don't use it, use only to save memory if created lot of timers.
Result is True if params ok; False if params not ok (callback str incorrect, interval incorrect, not created callback on stopping); or None (for unknown id).
Callbacks
Callback param must be in one of these forms: #Callback_param.
Callbacks in timer_proc must be declared as:
#function def my(tag='', info=''): pass #method class Command: def my(self, tag='', info=''): pass
menu_proc(id_menu, id_action, command="", caption="", index=-1, hotkey="", tag="")
Perform action on menu items.
Menu id
Value of "id_menu" param can be:
- int_value or str(int_value) - to specify menu item by unique int value
- "top" - to specify top menu
- "top-file", "top-edit", "top-sel", "top-sr", "top-view", "top-op", "top-help" - to specify submenus of the main menu: File, Edit, Selection, Search, View, Options, Help
- "text" - to specify editor context menu
- "side:"+tab_caption - to specify context menu of sidebar panel (e.g. "side:Tree", "side:Project")
- "btm:"+tab_caption - to specify context menu of bottom panel
- "toolmenu:"+name - to specify drop-down sub-menu of toolbar button
Command for new items
Value of "command" parameter for MENU_ADD can be:
- int_command or str(int_command) - int command code, from module cudatext_cmd (pass 0 if item does nothing)
- callback in one of these forms: #Callback_param.
- (deprecated callback form) callback in the form "module,method" or "module,method,param" (param can be of any primitive type).
- to create standard special sub-menus, special values:
- "_recents": Recent-files submenu
- "_enc": Encodings submenu (has subitems "reload as", "convert to")
- "_langs": Translations submenu
- "_lexers": Lexers submenu
- "_plugins": Plugins submenu
- "_themes-ui": UI-themes submenu
- "_themes-syntax": Syntax-themes submenu
- "_oplugins": Settings-plugins submenu
- empty string, if item will be used as submenu (item is a submenu, if any subitems are added to it)
Item props as dict
Some actions get dict for menu items. Dict keys:
- "id": menu_id, int
- "cap": caption, str (for separator items it is "-")
- "cmd": command code (e.g. from module cudatext_cmd), int
- "hint": callback (used if "cmd" value <=0), str
- "hotkey": hotkey, str
- "command": combined command description: int value of "cmd" (if code>0), or str value of "hint" (otherwise)
- "tag": plugin-defined tag, str
- "en": enabled state, bool
- "vis": visible state, bool
- "checked": checked state, bool
- "radio": radio-item state, bool (it means round checkmark in checked state)
Actions
Possible values of "id_action":
- MENU_CLEAR: Removes all sub-items from menu item.
- MENU_ENUM: Enumerates sub-items of the menu item. Gets list of dict, or None (for incorrect menu_id).
- MENU_GET_PROP: Gets props of menu item, as dict.
- MENU_ADD: Adds sub-item to menu item. Gets string, menu_id for newly added sub-item. Params:
- "id_menu": Item in which you add sub-item.
- "caption": Caption of item, or "-" for menu separator.
- "index": Index (0-based) at which to insert sub-item. Default: append item to end.
- "command": Values are described above.
- "hotkey": String of hotkey (e.g. "Ctrl+Shift+A"). Hotkey combos are not allowed. It overrides hotkey, which is auto assigned from command code.
- "tag": Any string stored in menu item.
- MENU_CREATE: Creates new popup-menu, independant from CudaText menus. It can be filled like other menus, then shown via MENU_SHOW.
- MENU_SHOW: Shows given popup-menu. Only menu created with MENU_CREATE should be specified here. Param "command" must be tuple (x,y) or string "x,y" with screen-related coordinates, if empty - menu shows at mouse cursor.
- MENU_SET_CAPTION: Changes caption of menu item. Param "command" must be str value.
- MENU_SET_VISIBLE: Changes visible state of menu item. Param "command" must be bool value.
- MENU_SET_ENABLED: Changes enabled state of menu item. Param "command" must be bool value.
- MENU_SET_HOTKEY: Changes hotkey of menu item. Param "command" must be str value, e.g. "Ctrl+Alt+F1".
- MENU_SET_CHECKED: Changes checked state of menu item. When item is checked, it shows a checkmark. Param "command" must be bool value.
- MENU_SET_RADIOITEM: Changes radio kind of menu item. When item has radio kind, its checkmark is round (in checked state). Param "command" must be bool value.
- MENU_SET_IMAGELIST: Changes image-list object of menu, which contains menu item. Param "command" must be image-list handle.
- MENU_SET_IMAGEINDEX: Changes icon index of menu item (index in image-list). Param "index" must be icon index.
Example
Example adds item "Misc" to the main menu, and sub-items: "About", separator, "Rename file" (from CudaExt plugin):
menuid = menu_proc('top', MENU_ADD, caption='Misc') n = menu_proc(menuid, MENU_ADD, command=2700, caption='About') n = menu_proc(menuid, MENU_ADD, caption='-') n = menu_proc(menuid, MENU_ADD, command='cuda_ext.rename_file', caption='Rename file')
Example creates popup-menu with one item and shows it at (x=100, y=100):
h = menu_proc(0, MENU_CREATE) menu_proc(h, MENU_ADD, command=2700, caption='About...') menu_proc(h, MENU_SHOW, command=(100,100) )
toolbar_proc
toolbar_proc(id_toolbar, id_action, text="", text2="", command=0, index=-1, index2=-1)
Perform action on toolbar UI control.
Param "id_toolbar": int handle of toolbar. Can be "top" for main app toolbar. Function gets None, if id_toolbar not correct.
Param "id_action" possible values:
- TOOLBAR_GET_COUNT: Gets number of buttons in toolbar.
- TOOLBAR_GET_IMAGELIST: Gets int handle of image-list object.
- TOOLBAR_GET_BUTTON_HANDLE: Gets int handle of button to use in button_proc(), or None if index not correct. Param "index": button index.
- TOOLBAR_DELETE_ALL: Deletes all buttons.
- TOOLBAR_DELETE_BUTTON: Deletes one button. Param "index": button index.
- TOOLBAR_ADD_ITEM: Adds one button, usual. Param "index": button index at which to insert, -1 to append. You need to get handle of this button, and customize button via button_proc().
- TOOLBAR_ADD_MENU: Adds one button, with submenu. Param "index": button index at which to insert, -1 to append.
- TOOLBAR_UPDATE: Updates buttons positions and sizes (for current size of image-list, current captions etc).
- TOOLBAR_GET_VERTICAL: Gets vertical state of toolbar.
- TOOLBAR_SET_VERTICAL: Sets vertical state of toolbar. Param "index": bool value.
- TOOLBAR_GET_WRAP: Gets wrappable state of toolbar.
- TOOLBAR_SET_WRAP: Sets wrappable state of toolbar (implemented for horizontal toolbar only). Param "index": bool value.
- TOOLBAR_THEME: Applies current UI theme to toolbar.
Example
Example adds button to show About dialog:
toolbar_proc('top', TOOLBAR_ADD_BUTTON, text='About', text2='about program...', index2=4, command=2700)
Example adds button with dropdown submenu, which has 2 items:
toolbar_proc('top', TOOLBAR_ADD_BUTTON, text='Dropdown', index2=4, command="toolmenu:drop") menu_proc('toolmenu:drop', MENU_CLEAR) menu_proc('toolmenu:drop', MENU_ADD, caption='About...', command=2700) menu_proc('toolmenu:drop', MENU_ADD, caption='Edit plugin...', command="cuda_addonman,do_edit")
statusbar_proc
statusbar_proc(id_statusbar, id_action, index=-1, tag=0, value="")
Perform action on statusbar UI control.
- Param "id_statusbar": handle of control. Can be "main" for main program statusbar.
- Param "index": index of cell (0-based), if action needs it.
- Param "tag": int tag of cell. Tag value, if not 0, overrides "index" param. It's needed to address a cell without knowing its index, only by tag. CudaText addresses standard cells by tag in the range 1..20.
Param "id_action" can be:
- STATUSBAR_GET_COUNT: Gets int number of cells.
- STATUSBAR_DELETE_ALL: Deletes all cells.
- STATUSBAR_DELETE_CELL: Deletes one cell (by "index" or "tag").
- STATUSBAR_ADD_CELL: Adds one cell. Param "index": -1: cell will be appended; >=0: cell will be inserted at given index. Param "tag" has special meaning: it is tag value of a new cell. Gets index of new cell, or None if cannot add (e.g. "tag" is busy).
- STATUSBAR_FIND_CELL: Gets index if cell from tag, or None. Param "value": int tag. Params "index", "tag" ignored.
- STATUSBAR_GET_IMAGELIST: Gets handle of image-list object, attached to statusbar, or 0 for none.
- STATUSBAR_SET_IMAGELIST: Sets handle if image-list object. Param "value": handle.
- STATUSBAR_GET_COLOR_BACK: Gets color of background.
- STATUSBAR_GET_COLOR_FONT: Gets color of font.
- STATUSBAR_GET_COLOR_BORDER_TOP: Gets color of border-top.
- STATUSBAR_GET_COLOR_BORDER_L: Gets color of border-left.
- STATUSBAR_GET_COLOR_BORDER_R: Gets color of border-right.
- STATUSBAR_GET_COLOR_BORDER_U: Gets color of border-up.
- STATUSBAR_GET_COLOR_BORDER_D: Gets color of border-down.
- STATUSBAR_SET_COLOR_BACK: Sets color of background. Param "value": int color.
- STATUSBAR_SET_COLOR_FONT: Sets color of font.
- STATUSBAR_SET_COLOR_BORDER_TOP: Sets color of border-top.
- STATUSBAR_SET_COLOR_BORDER_L: Sets color of border-left.
- STATUSBAR_SET_COLOR_BORDER_R: Sets color of border-right.
- STATUSBAR_SET_COLOR_BORDER_U: Sets color of border-up.
- STATUSBAR_SET_COLOR_BORDER_D: Sets color of border-down.
- STATUSBAR_GET_CELL_SIZE: Gets width of cell.
- STATUSBAR_GET_CELL_AUTOSIZE: Gets auto-size of cell, bool. Auto-size: adjust width of cell to its icon+text.
- STATUSBAR_GET_CELL_AUTOSTRETCH: Gets auto-stretch of cell, bool. Auto-stretch: stretch cell to fill entire statusbar width.
- STATUSBAR_GET_CELL_ALIGN: Gets alignment of cell. One of str values: "L" (left), "C" (center), "R" (right).
- STATUSBAR_GET_CELL_TEXT: Gets text of cell.
- STATUSBAR_GET_CELL_HINT: Gets hint of cell.
- STATUSBAR_GET_CELL_IMAGEINDEX: Gets icon index (inside image-list) of cell, -1 for none.
- STATUSBAR_GET_CELL_COLOR_FONT: Gets font color of cell, COLOR_NONE if not set.
- STATUSBAR_GET_CELL_COLOR_BACK: Gets background color of cell, COLOR_NONE if not set.
- STATUSBAR_GET_CELL_TAG: Gets int tag of cell.
- STATUSBAR_SET_CELL_SIZE: Sets width of cell. Param "value": int.
- STATUSBAR_SET_CELL_AUTOSIZE: Sets auto-size of cell. Param "value": bool.
- STATUSBAR_SET_CELL_AUTOSTRETCH: Sets auto-stretch of cell. Param "value": bool.
- STATUSBAR_SET_CELL_ALIGN: Sets alignment of cell. Param "value": str constant: "L", "C", "R".
- STATUSBAR_SET_CELL_TEXT: Sets text of cell. Param "value": str.
- STATUSBAR_SET_CELL_HINT: Sets hint of cell. Param "value": str.
- STATUSBAR_SET_CELL_IMAGEINDEX: Sets icon index (inside image-list) of cell, -1 for none. Param "value": int.
- STATUSBAR_SET_CELL_COLOR_FONT: Sets font color of cell. Param "value": int color or COLOR_NONE.
- STATUSBAR_SET_CELL_COLOR_BACK: Sets background color of cell. Param "value": int color or COLOR_NONE.
- STATUSBAR_SET_CELL_TAG: Sets tag of cell. Param "value": int.
Notes:
- If cell text not empty, alignment applies only for text, icon is on the left. If text empty, alignment applies for icon.
imagelist_proc
imagelist_proc(id_list, id_action, value="")
Perform action on image-list object.
Param "id_list" is int handle of image-list. It is required for all actions, except IMAGELIST_CREATE, where it should be 0.
Possible values of id_action:
- IMAGELIST_CREATE: Creates new image-list object with default icon size 16x16. Gets int handle of this image-list. Param "value" must be int handle of owner form of object.
- If it is form handle from dlg_proc, object will be deleted after deletion of this form.
- If it is 0, main application form is used as owner, and object will be persistent.
- IMAGELIST_COUNT: Gets int number of icons in image-list.
- IMAGELIST_GET_SIZE: Gets current icon size as 2-tuple (width, height).
- IMAGELIST_SET_SIZE: Sets new icon size, and clears image-list. Param "value" must be 2-tuple of int (width, height). Gets new icon size (corrected by minimal value) as 2-tuple.
- IMAGELIST_ADD: Loads image into image-list. Param "value" must be full path to png/bmp image file. Image size should be the same as size in image-list (but not required). Gets int icon index, or None if cannot load.
- IMAGELIST_DELETE: Deletes one icon. Param "value" is int icon index (0-based).
- IMAGELIST_DELETE_ALL: Deletes all icons.
- IMAGELIST_PAINT: Paints single icon on given canvas, at given coords. Param "value" must be tuple (canvas_id, x, y, icon_index). Value canvas_id can be 0 for testing paintbox in CudaText.
image_proc
image_proc(id_image, id_action, value="")
Perform action on image object.
Param "id_image" is int handle of image. It is required for all actions, except IMAGE_CREATE, where it should be 0.
Possible values of id_action:
- IMAGE_CREATE: Creates new image object. Gets int handle of this image. Param "value" must be int handle of owner form of object.
- If it is form handle from dlg_proc, object will be deleted after deletion of this form.
- If it is 0, main application form is used as owner, and object will be persistent.
- IMAGE_GET_SIZE: Gets current image size as 2-tuple (width, height).
- IMAGE_LOAD: Reads picture file into image object. Param "value" must be full file path (png, jpg, bmp, gif, ico).
- IMAGE_PAINT: Paints image object on given canvas, at given coords. Param "value" must be tuple (canvas_id, x, y). Value canvas_id can be 0, for testing paintbox in CudaText.
- IMAGE_PAINT_SIZED: Paints image object on given canvas, resized to given rectangle. Param "value" must be tuple (canvas_id, x1, y1, x2, y2).
button_proc(id_button, id_action, value="")
Perform action on extended button (control type "button_ex").
Param "id_button" is int handle of button.
Possible values of "id_action":
- BTN_UPDATE: Repaints button.
- BTN_GET_TEXT: Gets caption string.
- BTN_SET_TEXT: Sets caption string.
- BTN_GET_HINT: Gets hint (tooltip) string.
- BTN_SET_HINT: Sets hint string.
- BTN_GET_ENABLED: Gets enabled state, bool.
- BTN_SET_ENABLED: Sets enabled state. Param "value" must be bool.
- BTN_GET_VISIBLE: Gets visible state, bool.
- BTN_SET_VISIBLE: Sets visible state. Param "value" must be bool.
- BTN_GET_CHECKED: Gets checked state, bool.
- BTN_SET_CHECKED: Sets checked state. Param "value" must be bool.
- BTN_GET_IMAGELIST: Gets handle of image-list for button.
- BTN_SET_IMAGELIST: Sets handle of image-list. Param "value" must be int handle.
- BTN_GET_IMAGEINDEX: Gets icon index (in attached image-list).
- BTN_SET_IMAGEINDEX: Sets icon index. Param "value" must be int index (0-based), or -1 for none. To show icon, you must also set appropriate kind of button.
- BTN_GET_MENU: Gets int handle of submenu. It can be passed to menu_proc(h, MENU_SHOW).
- BTN_SET_MENU: Sets int handle of submenu. It can be handle created by menu_proc(0, MENU_CREATE).
- BTN_GET_KIND: Gets kind of button. Int value, one of BTNKIND_nnn constants.
- BTN_SET_KIND: Sets kind of button.
- BTN_GET_BOLD: Gets bold-style of button, bool.
- BTN_SET_BOLD: Sets bold-style. Param "value" must be bool.
- BTN_GET_ARROW: Gets dropdown arrow visible flag, bool.
- BTN_SET_ARROW: Sets dropdown arrow visible flag. Param "value" must be bool.
- BTN_GET_ARROW_ALIGN: Gets alignment of dropdown arrow, as str: "L", "R", "C".
- BTN_SET_ARROW_ALIGN: Sets alignment of dropdown arrow. Param "value" must be str: "L", "R", "C".
- BTN_GET_FOCUSABLE: Gets focusable state, bool.
- BTN_SET_FOCUSABLE: Sets focusable state. Param "value" must be bool.
- BTN_GET_FLAT: Gets flat state, bool.
- BTN_SET_FLAT: Sets flat state. Param "value" must be bool.
- BTN_GET_DATA1: Gets data1 string. Data1 is used for toolbar buttons, contains command of button: str(int_command) or callback.
- BTN_SET_DATA1: Sets data1. Param "value" must be callback: #Callback_param.
- BTN_GET_DATA2: Gets data2 string. Data2 is currently not used by app.
- BTN_SET_DATA2: Sets data2 string.
- BTN_GET_WIDTH: Gets width (in pixels).
- BTN_SET_WIDTH: Sets width.
- BTN_GET_ITEMS: Gets choice items, used for kind=BTNKIND_TEXT_CHOICE, as "\n"-separated strings.
- BTN_SET_ITEMS: Sets choice items. Param "value" must be str, "\n"-separated strings.
- BTN_GET_ITEMINDEX: Gets choice index, used for kind=BTNKIND_TEXT_CHOICE.
- BTN_SET_ITEMINDEX: Sets choice index. Param "value" must be int >=0, or -1 for none.
Note: Toolbars contain several "button_ex" objects, which are anchored one to another (horizontally or vertically). You can also construct such toolbar by hands. API toolbar_proc() don't allow to specify kind of buttons, it sets kind from button properties.
more
Editor class
Editor class has methods to work with editor. Global objects of Editor exist:
- ed: refers to currently focused editor (in any tab).
- ed_bro: refers to "brother" of ed. If tab is splitted, 2 editors are shown: 1st/2nd. 1st and 2nd are "brother" editors.
- ed_con_log: refers to log field (multi-line) in the "Console" panel.
- ed_con_in: refers to input field (single line) in the "Console" panel.
Carets
get_carets
get_carets()
Returns list of 4-tuples, each item is info about one caret: (PosX, PosY, EndX, EndY).
- PosX is caret's column (0-base). Tab-chars give x increment 1, like others.
- PosY is caret's line (0-base).
- EndX/EndY is position of selection edge for this caret. Both -1 if no selection for caret.
set_caret
set_caret(posx, posy, endx=-1, endy=-1, id=CARET_SET_ONE)
Controls carets. Possible values of id:
- CARET_SET_ONE: Removes multi-carets and sets single caret with given coords (posx, posy, endx, endy).
- CARET_ADD: Adds caret (multi-carets feature) with given coords. Also gets count of carets after that (same as len(get_carets()) ).
- CARET_DELETE_ALL: Removes all carets. (Note: you must add caret then to enable text editing to user.)
- CARET_SET_INDEX + N (any N>=0): Changes single caret with index N to given coords.
Text read/write
get_text_all/set_text_all
get_text_all() set_text_all(text)
Gets/sets entire text in the editor (str).
Notes:
- get_text_all is simple wrapper around get_text_line/get_line_count, it uses "\n" as line sep.
- set_text_all cannot work with read-only editor (see set_prop() and PROP_RO).
get_text_line/set_text_line
get_text_line(num) set_text_line(num, text)
Gets/sets single line (str) with given index (0-base).
Line must be w/o CR LF. Gets None if index incorrect.
To add new line, call set with num=-1.
get_text_substr
get_text_substr(x1, y1, x2, y2)
Gets substring from position (x1, y1) to position (x2, y2). Second position must be bigger than first.
delete
delete(x1, y1, x2, y2)
Deletes range from position (x1, y1) to bigger position (x2, y2).
- Too big x1/x2 are allowed (after line-end)
- Too big y2 means delete to end of file
Note: don't pass tuple from get_carets()[0], this tuple has not sorted pos=(x1, y1), end=(x2, y2), you need to sort them (first sort by y, then by x).
Example replaces selection of 1st caret with text:
x0, y0, x1, y1 = ed.get_carets()[0] if (y0, x0) >= (y1, x1): #note that y first x0, y0, x1, y1 = x1, y1, x0, y0 ed.set_caret(x0, y0) ed.delete(x0, y0, x1, y1) ed.insert(x0, y0, text)
insert
insert(x, y, text)
Inserts given text at position (x, y). If y too big, appends block to end (even to final line w/out line-end). Text can be multi-line, all CR LF are converted to currently used line-ends.
Gets 2-tuple (x, y) of position after inserted text. It is on the same line, if text is single line. Gets None if cannot insert, e.g. y is too big.
replace
replace(x1, y1, x2, y2, text)
Replaces range from position (x1, y1) to bigger position (x2, y2), with new text.
- Too big x1/x2 are allowed (after line-end)
- Too big y2 means delete to end of file
Function does the same as delete+insert, but
- optimized for replace inside one line (when y1==y2 and no EOLs in text)
- for multi-line it also makes grouped-undo for delete+insert
Gets 2-tuple (x, y) of position after inserted text.
replace_lines
replace_lines(y1, y2, lines)
Deletes whole lines from index y1 to y2, then inserts new lines from specified list.
- Index y1 must be valid index (0 to count-1)
- Index y2 can be less than y1 (to not delete), and can be too big (to delete lines to end)
- Param lines is list/tuple of str. Can be empty list to just delete lines. Inside items should not be "\n" and "\r": "\n" will generate additonal lines, "\r" not handled.
Gets bool: index y1 correct, replace done.
Selection
get_text_sel
get_text_sel()
Returns selected text for 1st caret (empty, if no selection).
get_sel_mode
get_sel_mode()
Gets kind of selection: normal or column selection: SEL_NORMAL, SEL_COLUMN.
get_sel_lines
get_sel_lines()
Gets 2-tuple, indexes of 1st and last lines affected by 1st caret selection. Both -1 if no selection.
get_sel_rect/set_sel_rect
get_sel_rect() set_sel_rect(x1, y1, x2, y2)
Gets/sets coords of column selection.
Gets 4-tuple (x1, y1, x2, y2). All 0 if no column selection.
Properties
get_line_count
get_line_count()
Gets number of lines.
get_filename
get_filename()
Gets filename (str) of the editor's tab.
- Empty str for untitled tab.
- String "?" if picture file loaded in tab.
get_split
get_split()
Gets tab splitting: each tab can be splitted to primary/second editors.
Gets 2-tuple: (state, percent):
- int: state of splitting, one of the values
- TAB_SPLIT_NO: tab is not splitted
- TAB_SPLIT_HORZ: tab is splitted horizontally
- TAB_SPLIT_VERT: tab is splitted vertically
- float: percent of splitting, e.g. it's 0.5 if tab splitted 50/50.
set_split
set_split(state, percent)
Sets tab splitting. Meaning of params is the same as for get_split().
get_prop
get_prop(id, value="")
Gets editor's property.
Param value can be: str, number, bool (for bool can be also "0"/"1")._EOL: str: end-of-line chars. Currently always gets "\n" since it's ignored in "insert" method.
- PROP_WRAP: int: word-wrap mode. One of WRAP_nnn values.
- PROP_RO: bool: read-only mode.
- PROP_MARGIN: int: position of fixed margin.
- PROP_MARGIN_STRING: str: user-defined margins positions, e.g. "20 25".
- PROP_INSERT: bool: insert/overwrite mode.
- PROP_MODIFIED: bool: editor is modified.
- PROP_MODIFIED_VERSION: int: counter which is incremented on each text change.
- PROP_RULER: bool: horz ruler is shown.
- PROP_LINE_STATE: int: state of the line with given index. One of LINESTATE_nnn values.
- PROP_LINE_NUMBERS: int: style of line numbers. One of LINENUM_nnn values.
- PROP_LINE_TOP: int: index of line visible at the top of editor.
- PROP_LINE_BOTTOM: int: index of line visible at the bottom of editor (considers word wrap).
-_COLOR: int: color property. Value must be one of COLOR_ID_nnn values. Gets None for incorrect id.
- PROP_ENC: str: encoding name. Names are listed at CudaText#Encodings.
- PROP_LEXER_FILE: str: name of lexer for entire file (empty str if none is active).
- PROP_LEXER_POS: str: name of lexer at specified position: value must be str(column)+","+str(line), column/line 0-based.
- PROP_LEXER_CARET: str: name of lexer at the position of 1st caret.
-. Value must be "key:defvalue" or simply "defvalue". Saved value for "key" is returned, or "defvalue" returned if value for key was not set._MACRO_REC: bool: currently macro is recording.
- PROP_MARKED_RANGE: 2-tuple with line indexes of "marked range"; (-1, -1) if range not set.
- PROP_VISIBLE_LINES: int: max count of lines that fit to window (doesn't consider word wrap).
- PROP_VISIBLE_COLUMNS: int: max count of columns that fit to window.
- PROP_PICTURE: properties of picture file as 3-tuple: (picture_filename, size_x, size_y), or None if not picture loaded in tab. For picture ed.get_filename() gets "?".
- PROP_MINIMAP: bool: minimap is visible.
- PROP_MICROMAP: bool: micromap is visible.
- PROP_LINK_AT_POS: str: URL in the document, at given position. Value must be str(pos_x)+","+str(pos_y).
- PROP_TAB_SPACES: bool: tab-key inserts spaces (not tab-char).
- PROP_TAB_SIZE: int: size of tab-char.
- PROP_TAB_COLLECT_MARKERS: bool: tab-key collects (jumps to and deletes) markers (if markers placed).
- PROP_TAB_TITLE: str: title of tab, useful for untitled tabs, for tabs with picture files.
- PROP_TAB_COLOR: int: color of tab containing editor; COLOR_NONE if not set.
- PROP_TAB_ICON: int: index of tab icon, ie index in imagelist, which handle you can get via app_proc(PROC_GET_TAB_IMAGELIST).
- PROP_TAB_ID: int: unique tab's identifier (one number for main/secondary editors in tab), it is not changed when tab is moved.
-_COORDS: 4-tuple of int: screen coordinates of editor UI control, (left, top, right, bottom).
- PROP_CELL_SIZE: 2-tuple of int: size in pixels of average char cell.
- PROP_ONE_LINE: bool: editor has single-line mode. (Usual editors are multi-line.)
- PROP_MODERN_SCROLLBAR: bool: use custom-drawn themed scrollbars.
- PROP_KIND: str: kind of UI control in the tab.
- "text": normal editor
- "bin": binary viewer
- "pic": picture
-.
set_prop
set_prop(id, value)
Sets editor's property.
Param value can be: str, number, bool (for bool can be also "0"/"1"), tuple of simple type._WRAP: int: word-wrap mode. One of WRAP_nnn values.
- PROP_RO: bool: read-only mode.
- PROP_MARGIN: int: position of fixed margin.
- PROP_MARGIN_STRING: str: space-separated user-margins columns.
- PROP_INSERT: bool: insert/overwrite mode.
- PROP_MODIFIED: bool: editor is modified.
- PROP_RULER: bool: show ruler.
- PROP_COLOR: color property, value must be 2-tuple (COLOR_ID_nnnn, int_color_value).
- PROP_LINE_TOP: int: index of line visible at the top of editor (allows to scroll editor).
- PROP_LINE_NUMBERS: int: style of line numbers. One of LINENUM_nnn values.
- PROP_LINE_STATE: 2-tuple: state of the line with given index. 2-tuple (line_index, LINESTATE_nnn).
-_ENC: str: encoding name. Names listed at CudaText#Encodings.
- PROP_LEXER_FILE: str: name of lexer.
-. Param text must be pair "key:value" or simply "value", this sets value for specified key (internally it is dictionary)._MARKED_RANGE: line indexes of "marked range", value is 2-tuple (index1, index2) or (-1, -1) to remove this range.
- PROP_MINIMAP: bool: minimap is visible.
- PROP_MICROMAP: bool: micromap is visible.
- PROP_TAB_SPACES: bool: tab-key inserts spaces.
- PROP_TAB_SIZE: int: size of tab-char.
- PROP_TAB_COLLECT_MARKERS: bool: tab-key collects (jumps to and deletes) markers.
- PROP_TAB_COLOR: int: color of tab containing editor; set COLOR_NONE to reset.
- PROP_TAB_TITLE: str: title of tab, useful for untitled tabs.
- PROP_TAB_ICON: int: index of tab icon, ie index in imagelist, which handle you can get via app_proc(PROC_GET_TAB_IMAGELIST).
-_ONE_LINE: bool: editor has single-line mode.
- PROP_MODERN_SCROLLBAR: bool: use custom-drawn themed scrollbars.
-.
props vs options
Many get_prop/set_prop ids correspond to CudaText options. List of correspoding pairs:
- PROP_CARET_SHAPE - caret_shape
- PROP_CARET_SHAPE_OVR - caret_shape_ovr
- PROP_CARET_SHAPE_RO - caret_shape_ro
- PROP_CARET_VIRTUAL - caret_after_end
- PROP_GUTTER_ALL - gutter_show
- PROP_GUTTER_BM - gutter_bookmarks
- PROP_GUTTER_FOLD - gutter_fold
- PROP_HILITE_CUR_COL - show_cur_column
- PROP_HILITE_CUR_LINE - show_cur_line
- PROP_HILITE_CUR_LINE_MINIMAL - show_cur_line_minimal
- PROP_HILITE_CUR_LINE_IF_FOCUS - show_cur_line_only_focused
- PROP_INDENT_AUTO - indent_auto
- PROP_INDENT_KEEP_ALIGN - unindent_keeps_align
- PROP_INDENT_KIND - indent_kind
- PROP_INDENT_SIZE - indent_size
- PROP_LAST_LINE_ON_TOP - show_last_line_on_top
- PROP_MARGIN - margin
- PROP_MARGIN_STRING - margin_string
- PROP_MICROMAP - micromap_show
- PROP_MINIMAP - minimap_show
- PROP_RULER - ruler_show
- PROP_TAB_SIZE - tab_size
- PROP_TAB_SPACES - tab_spaces
- PROP_UNPRINTED_ENDS - unprinted_ends
- PROP_UNPRINTED_END_DETAILS - unprinted_end_details
- PROP_UNPRINTED_SHOW - unprinted_show
- PROP_UNPRINTED_SPACES - unprinted_spaces
- PROP_UNPRINTED_SPACES_TRAILING - unprinted_spaces_trailing
- PROP_WRAP - wrap_mode
bookmark
bookmark(id, nline, nkind=1, ncolor=-1, text="")
Controls bookmarks. Possible values of id:
- BOOKMARK_GET: Gets kind of bookmark at given line index (param "nline"). 0 means no bookmark, 1 means usual bookmark.
- BOOKMARK_SET: Sets bookmark.
- Param "nline": line index of bookmark.
- Param "nkind": kind of bookmark: 1 means usual bookmark with usual color and icon. Other kind values mean custom bookmark, which must be setup via BOOKMARK_SETUP.
- Param "text": hint, shown when mouse moves over bookmark gutter icon.
- By default, bookmark is kept on deleting its line. To place bookmark which is deleted on deleting its line, set ncolor=1.
- BOOKMARK_CLEAR: Removes bookmark from line=nline (nkind ignored).
- BOOKMARK_CLEAR_ALL: Removes all bookmarks (nline, nkind ignored).
- BOOKMARK_SETUP: Setups bookmarks with given kind (param "nkind").
- Param "ncolor": Color of bookmarked line. Can be COLOR_NONE to not use background color.
- Param "text": Path to icon file for gutter, 16x16 .bmp or .png file. Empty str: don't show icon.
- BOOKMARK_GET_LIST: Gets list of line indexes with bookmarks. Params "nline", "nkind" are ignored.
Notes:
- Param "nkind" must be in the range 1..63.
- Param "nkind" values 2..9 have setup by default: they have blue icons "1" to "8".
folding
folding(id, index=-1, item_x=-1, item_y=-1, item_y2=-1, item_staple=False, item_hint="")
Performs action on folding ranges. Possible values of "id":
- FOLDING_GET_LIST: Gets list of folding ranges. Params used: none except "id". Gets list of tuples (y, y2, x, staple, folded), or None.
- "y": int: line of range start.
- "y2": int: line of range end. If y==y2, then range is simple and don't have gutter-mark and staple.
- "x": int: x-offset of range start (char index in start line).
- "staple": bool: range has block staple.
- "folded": bool: range is currently folded.
- FOLDING_FOLD: Folds range with given index (index in list, from FOLDING_GET_LIST). Params used: "index".
- FOLDING_UNFOLD: Unfolds range with given index (index in list, from FOLDING_GET_LIST). Params used: "index".
- FOLDING_ADD: Adds folding range. Life time of this range is until lexer analisys runs (it clears ranges and adds ranges from lexer), which runs after any text change. Params used:
- "item_x": char index (offset in line) of range start.
- "item_y": line index of range start.
- "item_y2": line index of range end.
- "item_staple" (optional): bool, range has block staple.
- "item_hint" (optional): str, hint which is shown when range is folded.
- "index": if it's valid range index (0-based), range inserts at this index, else range appends.
- FOLDING_DELETE: Deletes folding range with given index (index in list, from FOLDING_GET_LIST). Params used: "index".
- FOLDING_DELETE_ALL: Deletes all folding ranges. Params used: none except "id".
- FOLDING_FIND: Finds index of range (index in list, from FOLDING_GET_LIST). Gets None if not found. Params used: "item_y" is line index at which range begins.
- FOLDING_CHECK_RANGE_INSIDE: For 2 ranges, which indexes given by "index" and "item_x", detects: 1st range is inside 2nd range. Gets bool. Gets False if indexes incorrect.
- FOLDING_CHECK_RANGES_SAME: For 2 ranges, which indexes given by "index" and "item_x", detects: ranges are "equal" (x, y, y2 in ranges may differ). Gets bool. Gets False if indexes incorrect.
get_sublexer_ranges
get_sublexer_ranges()
Gets list of ranges which belong to nested lexers (sublexers of current lexer for entire file, e.g. "HTML" inside "PHP", or "CSS" inside "HTML"). Gets list of 5-tuples, or None if no ranges. Each tuple is:
- str: sublexer name
- int: start column (0-based)
- int: start line (0-based)
- int: end column
- int: end line
get_token
get_token(id, index1, index2)
Gets info about one token ("token" is minimal text fragment for syntax parser).
Gets 4-tuple, or None if no such token. Tuple is:
- (start_x, start_y)
- (end_x, end_y)
- string_token_type
- string_style_name
Possible values of id:
- TOKEN_AT_POS: Token at position (x=index1, y=index2).
- TOKEN_INDEX: Token with index=index1 (in tokens collection), 0-based.
get_wrapinfo
get_wrapinfo()
Gets info about wrapped lines. It allows to calculate, at which positions long lines are wrapped (when wrap mode is on). It allows to see, which text parts are folded.
Gets list of dict, or None. Dict items has keys:
- "line": Original line index, for this part.
- "char": Char index (1-based, for UTF-16 stream of chars), at which this part starts. It is >1, for next parts of long wrapped line.
- "len": Length of this part. It equals to entire line length, if line is not wrapped.
- "indent": Screen indent in spaces, for this part, it's used in rendering when option "Show wrapped parts indented" is on.
- "final": Enum, state of this part. 0: final part of entire line; 1: folded (hidden) part; 2: first or middle part of entire line.
markers
markers(id, x=0, y=0, tag=0, len_x=0, len_y=0)
Controls markers (used e.g. in Snippets plugin). Possible values of "id":
- MARKERS_GET: Gets list of markers. Each list item is [x, y, len_x, len_y, tag]. Gets None if no markers.
- MARKERS_ADD: Adds marker. Also gets number of markers. Params:
- "x", "y": Start of marker (like caret pos).
- "tag">0 is needed if you want to place multi-carets when command "goto last marker (and delete)" runs. All markers with the same tag>0 will get multi-carets (after markers will be deleted).
- "len_x", "len_y" are needed if you want to place selection at marker, when "goto last marker" command goes to this marker.
- if len_y==0: len_x is length of selection (single line),
- if len_y>0: len_y is y-delta of selection-end, and len_x is absolute x-pos of selection-end.
- MARKERS_DELETE_ALL: Deletes all markers.
- MARKERS_DELETE_LAST: Deletes last added marker.
- MARKERS_DELETE_BY_TAG: Deletes all markers for specified "tag".
attr
attr(id, tag=0, x=0, y=0, len=0, color_font=COLOR_NONE, color_bg=COLOR_NONE, color_border=COLOR_NONE, font_bold=0, font_italic=0, font_strikeout=0, border_left=0, border_right=0, border_down=0, border_up=0 )
Controls additional color attributes. Possible values of "id":
- MARKERS_ADD: Adds fragment with specified properties. Also gets number of fragments. Props:
- "tag": any int value attached to fragment (several plugins must add fragments with different tags, to be able to remove only fragments for them)
- "x", "y": position of fragment start (like caret).
- "len": length of fragment.
- "color_nnnn": RGB color values for font, background, borders. E.g. 0x0000FF is red, 0x00FF00 is green.
- "font_nnnn": font attributes: 0 - off, 1 - on.
- "border_nnnn": border types for edges, values from 0: none, solid, dash, solid 2pixel, dotted, rounded, wave.
- MARKERS_GET: Gets list of fragments, or None. Each item is list with int fields, in the same order as function params.
- MARKERS_DELETE_ALL: Deletes all fragments.
- MARKERS_DELETE_LAST: Deletes last added fragment.
- MARKERS_DELETE_BY_TAG: Deletes all fragments for specified "tag".
Note:
- "color_bg" value can be COLOR_NONE: it uses usual back-color.
- "color_font", "color_border" value can be COLOR_NONE: it uses usual text-color.
hotspots
hotspots(id, tag=0, tag_str="", pos="")
Controls hotspot ranges. When mouse cursor moves in/out of hotspot, event on_hotspot is called.
Possible values of "id":
- HOTSPOT_ADD: Adds hotspot. Gets bool: params correct, hotspot added. Params:
- "pos": 4-tuple of int, text coords: (x_start, y_start, x_end, y_end)
- "tag": optional int value
- "tag_str": optional str value
- HOTSPOT_GET_LIST: Gets list of hotspots, or None. Each item is dict with keys:
- "pos": 4-tuple of int
- "tag": int
- "tag_str": str
- HOTSPOT_DELETE_ALL: Deletes all hotspots.
- HOTSPOT_DELETE_LAST: Deletes last hotspot.
- HOTSPOT_DELETE_BY_TAG: Deletes all hotspots with specified "tag".
Misc
save
save(filename="")
Saves editor's tab to disk.
Shows save-as dialog for untitled tab. If param filename not empty, uses it (untitled tab becomes titled). Gets bool: file was saved.
cmd
cmd(code, text="")
Command runner, it runs any command in editor by its int code.
See codes in module cudatext_cmd. Param text is needed for rare commands (e.g. by cCommand_TextInsert).
focus
focus()
Activates editor's tab and focuses editor itself. It's needed when you want to activate inactive tab. You can get Editor objects of inactive tabs using ed_handles().
lock/unlock
lock() unlock()
Functions lock, then unlock editor. "Locked" editor is not painted and shows only hourglass (or another) icon.
Call "lock" increases counter, "unlock" decreases this counter (it's safe to call "unlock" more times, counter will be 0). When counter is >0, editor is locked.
convert
convert(id, x, y, text="")
Converts something in editor. Possible values of "id":
- CONVERT_CHAR_TO_COL: Convert char coordinates (x,y) to column coordinates (column,y), using current tab size.
- CONVERT_COL_TO_CHAR: Convert column coordinates (x,y) to char coordinates (chars,y).
- CONVERT_LINE_TABS_TO_SPACES: Convert line (param "text") from tab-chars to spaces, using current tab size. Gets str. Gets original line, if no tab-chars in it.
- CONVERT_SCREEN_TO_LOCAL: Convert pixel coordinates (x,y), from screen-related to control-related.
- CONVERT_LOCAL_TO_SCREEN: Convert pixel coordinates (x,y), from control-related to screen-related.
- CONVERT_PIXELS_TO_CARET: Convert pixel control-related coords (x,y) to caret position (column,line).
- CONVERT_CARET_TO_PIXELS: Convert caret position (column,line) to pixel control-related coords (x,y).
Gets None if params incorrect, or cannot calc result.
complete
complete(text, len1, len2, selected=0, alt_order=False)
Shows auto-completion listbox with given items.
Function gets None and listbox stays open. When user chooses listbox item, its data is inserted.
- Param "text": string with items, must be formatted as shown in the ATSynEdit#Auto-completion_lists.
- Param "len1": count of chars to the left of the caret, to be replaced.
- Param "len2": count of chars to the right of the caret, to be replaced.
- Param "selected": index of initially selected listbox item (0-based).
- Param "alt_order":
- alt_order=False: pass items in each line: prefix, id, desc. Listbox shows prefix at the left edge.
- alt_order=True: pass items in each line in such order: id, prefix, desc. Listbox shows id at the left edge. Use it if plugin needs to show long "prefixes".
Note: listbox disappears if you move caret or type text, unlike usual auto-completion listboxes (they can recalculate items for new caret pos).
complete_alt
complete_alt(text, snippet_id, len_chars, selected=0)
Shows alternative completion listbox. This listbox allows to insert complex snippets, not only simple words. E.g. snippet text like "myobj.myfunc(${1:value1}, ${2:value2});".
Function gets None and listbox stays open. When user chooses listbox item, event on_snippet is called, with params:
- Param "snippet_id": value from complete_alt() call; on_snippet should handle only known value of snippet_id.
- Param "snippet_text": text of selected item (last column).
Function params:
- Param "text": "\n"-separated lines, one line per snippet. Each line has 3 columns "\t"-separated.
- Column 1 is shown on left side of listbox (good to show here name of function)
- Column 2 is shown on right side of listbox (good to show here type of function)
- Column 3 is snippet text in any format. Can contain "\t" but not "\n". Can have any chars escaped in any form.
- Param "snippet_id": any short string just to identify snippet kind between all plugins. E.g. plugin1/plugin2 may have snippets in different formats, they must handle only their snippets.
- Param "len_chars": listbox shows lefter than caret, by this count of chars.
- Param "selected": index of initially selected listbox item (0-based).
gap
gap(id, num1, num2, tag=-1)
Performs action on inter-line gaps (they don't overlap text above/below). Possible values of id:
- GAP_GET_LIST: Gets list of gaps. List of 4-tuples (nline, ntag, bitmap_size_x, bitmap_size_y), or None.
- GAP_MAKE_BITMAP: Makes new bitmap for future gap. Size of bitmap is num1 x num2. Gets 2-tuple (id_bitmap, id_canvas).
- id_bitmap is needed to later call GAP_ADD.
- id_canvas is needed to paint an image on bitmap, by canvas_proc().
- GAP_ADD: Adds gap to editor, with ready bitmap. Gets bool: params correct, gap added. Params:
- num1 - line index
- num2 - id_bitmap you got before
- tag - int value (signed 64 bit) to separate gaps from several plugins.
- GAP_DELETE: Deletes gaps, for line indexes from num1 to num2.
- GAP_DELETE_ALL: Deletes all gaps.
Example plugin:
from cudatext import * class Command: def run(self): for i in range(ed.get_line_count()//2): self.do_gap(i*2) def do_gap(self, num): id_bitmap, id_canvas = ed.gap(GAP_MAKE_BITMAP, 600, 50) canvas_proc(id_canvas, CANVAS_SET_BRUSH, color=0xa0ffa0) canvas_proc(id_canvas, CANVAS_POLYGON, '200,0,300,30,200,50') canvas_proc(id_canvas, CANVAS_SET_BRUSH, color=0xffffff, style=BRUSH_CLEAR) canvas_proc(id_canvas, CANVAS_TEXT, x=230, y=10, text='gap %d'%(num+1)) canvas_proc(id_canvas, CANVAS_SET_BRUSH, color=0xffffff, style=BRUSH_SOLID) ed.gap(GAP_ADD, num, id_bitmap)
dim
dim(id, index=0, index2=0, value=100)
Controls dim (shade, fade) ranges, which blend text font color with background color. Dim value is 0..255, 0 means "no effect", 255 means "text is transparent".
Possible values of id:
- DIM_ADD: Adds new range.
- Param "index": index of first range line (0-base).
- Param "index2": index if last range line (value bigger than line count is allowed).
- Param "value": dim value.
- DIM_DELETE: Deletes one range. Param "index" is index of range (0-base).
- DIM_DELETE_ALL: Deletes all ranges.
- DIM_ENUM: Enumerates ranges. Gives list of 3-tuples (line_from, line_to, dim_value) or None.
lexer_scan
lexer_scan(nline)
Runs lexer analysis (if lexer active) from given line index. Waits until analysis finishes.
Use case: plugin appends n lines to text (line by line), then lexer needs to rescan text from 1st added line, else folding won't appear for these lines.
export_html
export_html(file_name, title, font_name, font_size, with_nums, color_bg, color_nums)
Makes HTML file from source text with syntax highlighting. Gets bool: file is created. Params:
- file_name: str: Path of file.
- title: str: HTML page title.
- font_name: str: Font name.
- int: Font size, in HTML points.
- with_nums: bool: Make column with line numbers.
- color_bg: int: RGB color of page background.
- color_nums: int: RGB color of line numbers.
more
Tech info
Format of text for cmd_MouseClick
Text is "X,Y" where X/Y are position of caret relative to top-caret (other carets removed when command runs). If Y is 0, X is addition to caret's x. If Y not 0, Y is addition to caret's y, and X is absolute caret's x.
Format of text for cmd_FinderAction
Text is chr(1) separated items:
- item 0: finder action, one of:
- 'findfirst': find first
- 'findnext': find next
- 'findprev': find previous
- 'rep': replace next, and find next
- 'repstop': replace next, and don't find
- 'repall': replace all
- 'findcnt': count all
- 'findsel': find all, make selections
- 'findmark': find all, place markers
- item 1: text to find
- item 2: text to replace with
- item 3: several chars, each char is finder option:
- 'c': case sensitive
- 'r': regular expression
- 'w': whole words
- 'f': from caret
- 'o': confirm replaces
- 'a': wrapped search
- 's': search in selection
How plugin can fill Code Tree?
- Handle events on_open, on_focus, on_change_slow (better don't use on_change, it slows down the app)
- Get handle of Code Tree: h_tree = app_proc(PROC_GET_CODETREE, "")
- Fill tree via tree_proc(h_tree, ...)
- Disable standard tree filling via ed.set_prop(PROP_CODETREE, False)
- Clear tree via tree_proc(h_tree, TREE_ITEM_DELETE, id_item=0)
- On adding tree items, set range for them via tree_proc(h_tree, TREE_ITEM_SET_RANGE...)
How plugin can show tooltips on mouse-over?
- Create dialog for tooltip, via dlg_proc()
- Handle events on_open, on_focus, on_change_slow (better don't use on_change, it will slow down the app)
- Find text regions which need tooltip
- Delete old hotspots via ed.hotspot(HOTSPOT_DELETE_BY_TAG...)
- Add hotspots via ed.hotspots(HOTSPOT_ADD...)
- Handle event on_hotspot
- Find which text region is under cursor
- Fill dialog via dlg_proc()
- Set dialog prop "p" to the handle of editor, ed_self.h. Don't use ed.h, because it is virtual handle 0.
- Set dialog pos (props "x", "y") to editor-related pixel coords. It is complex. See example: plugin HTML Tooltips.
- Show dialog via dlg_proc(..., DLG_SHOW_NONMODAL)
- On exiting hotspot, hide dialog
How to make formatter plugin?
- Fork plugin CSS Format (or JS Format). It has most of code done. It has 3 commands done: Format, Config global, Config local.
- Replace/delete its CSS (JS) modules
- Change options filename, change loading of options
- Change contents of func do_format()
History
1.0.247 (app 1.56.3)
- add: event on_close_pre
- add: ed.get_prop/set_prop: PROP_FOLD_ALWAYS
- add: ed.get_prop/set_prop: PROP_FOLD_ICONS
- add: ed.get_prop/set_prop: PROP_FOLD_TOOLTIP_SHOW
1.0.246 (app 1.56.1)
- add: dlg_proc: controls have prop "autosize"
- add: dlg_proc: added events for control "editor": on_change, on_caret, on_scroll, on_key_down, on_key_up, on_click_gutter, on_click_gap, on_paste
- add: addon can require lexer(s) via install.inf: [info] reqlexer=Name1,Name2
1.0.245 (app 1.55.3)
- add: Editor.get_prop/set_prop: PROP_CARET_STOP_UNFOCUSED
- add: addon can require other addon(s) via install.inf: [info] req=cuda_aa,cuda_bb
1.0.244 (app 1.55.1)
- deleted: object ed_goto
1.0.243 (app 1.55.0)
- add: event on_click_gutter
- deleted: events on_goto_enter, on_goto_change, on_goto_caret, on_goto_key, on_goto_key_up
- add: additional group indexes 6..8 for floating groups
- add: app_proc: PROC_SHOW_FLOATGROUP1_GET, PROC_SHOW_FLOATGROUP1_SET
- add: app_proc: PROC_SHOW_FLOATGROUP2_GET, PROC_SHOW_FLOATGROUP2_SET
- add: app_proc: PROC_SHOW_FLOATGROUP3_GET, PROC_SHOW_FLOATGROUP3_SET
- add: app_proc: PROC_FLOAT_SIDE_GET, PROC_FLOAT_SIDE_SET
- add: app_proc: PROC_FLOAT_BOTTOM_GET, PROC_FLOAT_BOTTOM_SET
1.0.242 (app 1.53.6)
- add/change: dlg_proc: form property "border" is now enum, with DBORDER_nnn values
- deprecated: dlg_proc: form property "resize"
- deleted deprecated: TOOLBAR_SET_BUTTON, TOOLBAR_ADD_BUTTON, TOOLBAR_ENUM, TOOLBAR_GET_CHECKED, TOOLBAR_SET_CHECKED
1.0.240 (app 1.52.1)
- add: ed.bookmark: BOOKMARK_SET can specify "delete bookmark on deleting line"
- deleted deprecated: BOOKMARK_CLEAR_HINTS
- deleted deprecated: TREE_ITEM_GET_SYNTAX_RANGE
1.0.239 (app 1.51.2)
- add: event on_mouse_stop
1.0.238 (app 1.49.5)
- add: dlg_proc/dlg_custom: controls have prop "font_style"
- add: file_open: options "/view-text", "/view-binary", "/view-hex", "/view-unicode"
- change: file_open: deleted option "/binary"
1.0.237 (app 1.49.1)
- add: statusbar_proc: STATUSBAR_GET_CELL_HINT, STATUSBAR_SET_CELL_HINT
1.0.236 (app 1.48.3)
- add: dlg_proc: controls have props "w_min/w_max/h_min/h_max", like forms
- fix: bug in dlg_proc/dlg_custom: controls with changed parent become not findable by name/index (and count of controls becomes smaller)
1.0.235 (app 1.48.2)
- add: support for "lazy events"
- add: event on_console_print
- add: lexer_proc: LEXER_REREAD_LIB
1.0.234 (app 1.47.5)
- add: event on_exit
- add: ed.get_prop/set_prop: PROP_UNPRINTED_SPACES_TRAILING
- add: ed.get_prop/set_prop: PROP_LAST_LINE_ON_TOP
- add: ed.get_prop/set_prop: PROP_HILITE_CUR_COL
- add: ed.get_prop/set_prop: PROP_HILITE_CUR_LINE
- add: ed.get_prop/set_prop: PROP_HILITE_CUR_LINE_MINIMAL
- add: ed.get_prop/set_prop: PROP_HILITE_CUR_LINE_IF_FOCUS
- add: ed.get_prop/set_prop: PROP_MODERN_SCROLLBAR
1.0.233 (app 1.47.1)
- add: app_proc: PROC_GET_GUI_HEIGHT supports name "scrollbar"
1.0.233 (app 1.47.0)
- add: dlg_menu: option MENU_NO_FUZZY
- add: dlg_menu: option MENU_NO_FULLFILTER
1.0.232 (app 1.46.2)
- add: dlg_proc: control "radiogroup" has prop "columns"
- add: dlg_proc: control "radiogroup" has prop "ex0"
- add: Editor.get_prop/set_prop: PROP_GUTTER_ALL, PROP_GUTTER_STATES
1.0.231 (app 1.45.5)
- add: dlg_proc: control "listview" has prop "columns"
- removed: on_group; on_state(..., APPSTATE_GROUPS) called instead
1.0.230 (app 1.44.0)
- add: dlg_proc: control "listview"/"checklistview" has event "on_click_header"
- add: install.inf: allow os= values with bitness: win32, win64, linux32, linux64, freebsd32, freebsd64
- deleted deprecated: PROC_GET_COMMAND
- deleted deprecated: PROC_GET_COMMAND_INITIAL
- deleted deprecated: PROC_GET_COMMAND_PLUGIN
1.0.229 (app 1.43.0)
- add: plugins can force show sidebar button, even if they don't run on start. New install.inf sections [sidebar1] .. [sidebar3]. Examples: ProjectManager, TabsList.
1.0.228 (app 1.40.7)
- add: event on_tab_change
1.0.227 (app 1.40.2)
- add: file_open: added options "/passive", "/nonear"
1.0.226 (app 1.40.1)
- add: app_proc: PROC_GET_CODETREE
- add: Editor.get_prop/set_prop: PROP_CODETREE
- add: tree_proc: TREE_ITEM_GET_RANGE, TREE_ITEM_SET_RANGE
- deprecated: tree_proc: TREE_ITEM_GET_SYNTAX_RANGE
- change: changed Code-Tree button caption from "Tree" to "Code tree"
1.0.225 (app 1.40.0)
- add: app_proc: PROC_SEND_MESSAGE
1.0.224 (app 1.39.5)
- add: dlg_proc: control prop "ex"
- deprecated: dlg_proc/dlg_custom prop "props" - now use "ex0"..."ex9"
- deprecated: BOOKMARK_CLEAR_HINTS
1.0.223 (app 1.39.1)
- add: install.inf supports param [info] os=, value is comma separated strings: win, linux, macos
- add: menu_proc: support for spec menu id "_oplugins"
1.0.222 (app 1.38.5)
- add: dlg_proc: form events "on_mouse_enter", "on_mouse_exit"
- add: dlg_proc: form events "on_show", "on_hide"
- add: dlg_proc: control events "on_mouse_enter", "on_mouse_exit", "on_mouse_down", "on_mouse_up"
1.0.221 (app 1.38.3)
- add: Editor.get_prop/set_prop: PROP_INDENT_SIZE, PROP_INDENT_KEEP_ALIGN, PROP_INDENT_AUTO, PROP_INDENT_KIND
1.0.220 (app 1.38.2)
- add: on_state event aslo called with new constants APPSTATE_nnnn
1.0.219 (app 1.38.1)
- add: app_proc: PROC_THEME_UI_DATA_GET
- add: app_proc: PROC_THEME_SYNTAX_DATA_GET
1.0.218 (app 1.38.0)
- add: Editor.hotspots()
- add: event on_hotspot
- add: app_proc: PROC_GET_MOUSE_POS
- add: dlg_proc: DLG_PROP_SET supports prop "p" (parent of form)
- add: Editor.convert: CONVERT_SCREEN_TO_LOCAL, CONVERT_LOCAL_TO_SCREEN
- add: Editor.convert: CONVERT_PIXELS_TO_CARET, CONVERT_CARET_TO_PIXELS
- add: Editor.markers: MARKERS_DELETE_BY_TAG
- add: Editor.get_prop: PROP_CELL_SIZE
1.0.217 (app 1.34.4)
- add: button_proc: const BTNKIND_TEXT_CHOICE
- add: button_proc: BTN_GET_WIDTH, BTN_SET_WIDTH
- add: button_proc: BTN_GET_ITEMS, BTN_SET_ITEMS
- add: button_proc: BTN_GET_ITEMINDEX, BTN_SET_ITEMINDEX
- add: button_proc: BTN_GET_ARROW_ALIGN, BTN_SET_ARROW_ALIGN
- add: button_proc: BTN_UPDATE
1.0.216 (app 1.34.2)
- add: statusbar_proc: STATUSBAR_GET_CELL_AUTOSIZE, STATUSBAR_SET_CELL_AUTOSIZE
- add: statusbar_proc: STATUSBAR_GET_CELL_AUTOSTRETCH, STATUSBAR_SET_CELL_AUTOSTRETCH
- add: statusbar_proc: 10 actions to get/set colors, STATUSBAR_GET/SET_COLOR_*
- deleted: STATUSBAR_GET_COLOR_BORDER, STATUSBAR_SET_COLOR_BORDER
- deleted: STATUSBAR_AUTOSIZE_CELL - now use STATUSBAR_SET_CELL_AUTOSTRETCH
1.0.215 (app 1.34.1)
- add: dlg_proc: DLG_CTL_PROP_GET gets key "items" for controls "listview", "checklistview", "listbox", "checklistbox"
1.0.214 (app 1.32.4)
- add: button_proc supports callable param "value"
1.0.213 (app 1.32.3)
- add: button_proc: BTN_GET_MENU, BTN_SET_MENU
- add: button_proc: BTN_GET_ARROW, BTN_SET_ARROW
- add: button_proc: BTN_GET_FOCUSABLE, BTN_SET_FOCUSABLE
- add: button_proc: BTN_GET_FLAT, BTN_SET_FLAT
- add: toolbar_proc: TOOLBAR_ADD_ITEM, TOOLBAR_ADD_MENU
- deprecated: toolbar_proc: TOOLBAR_ADD_BUTTON - now use TOOLBAR_ADD_ITEM/TOOLBAR_ADD_MENU + button_proc()
- deprecated: toolbar_proc: TOOLBAR_SET_BUTTON - now use TOOLBAR_GET_BUTTON_HANDLE + button_proc()
- deprecated: toolbar_proc: TOOLBAR_ENUM - now use TOOLBAR_GET_COUNT + TOOLBAR_GET_BUTTON_HANDLE + button_proc()
1.0.212 (app 1.32.2)
- add: toolbar_proc: TOOLBAR_GET_COUNT
- add: toolbar_proc: TOOLBAR_GET_BUTTON_HANDLE
- add: button_proc: BTN_GET_TEXT, BTN_SET_TEXT
- add: button_proc: BTN_GET_ENABLED, BTN_SET_ENABLED
- add: button_proc: BTN_GET_VISIBLE, BTN_SET_VISIBLE
- add: button_proc: BTN_GET_HINT, BTN_SET_HINT
- add: button_proc: BTN_GET_DATA1, BTN_SET_DATA1, BTN_GET_DATA2, BTN_SET_DATA2
- change: toolbar_proc: TOOLBAR_ENUM gets int value of "kind", before it was str
- deprecated: TOOLBAR_GET_CHECKED, TOOLBAR_SET_CHECKED - now use TOOLBAR_GET_BUTTON_HANDLE + button_proc()
- deprecated: PROC_GET_COMMAND, PROC_GET_COMMAND_INITIAL, PROC_GET_COMMAND_PLUGIN - now use PROC_GET_COMMANDS instead
1.0.211 (app 1.32.0)
- add: statusbar_proc()
- add: dlg_proc: added type "statusbar"
- add: toolbar_proc: TOOLBAR_UPDATE
1.0.210 (app 1.29.0)
- add: file_open: for binary viewer, pass options='/binary'
- add: ed.get_prop/set_prop: PROP_KIND, PROP_V_MODE, PROP_V_POS, PROP_V_SEL_START, PROP_V_SEL_LEN
1.0.208 (app 1.27.2)
- add: menu_proc: added MENU_SET_IMAGELIST, MENU_SET_IMAGEINDEX
- add: file_open: for zip files, gets True if zip installation completed
1.0.208 (app 1.26.2)
- add: lexer_proc supports lite lexers too (their names have " ^" suffix)
1.0.207 (app 1.25.1)
- add: app_proc: PROC_GET_COMMANDS
1.0.206 (app 1.24.2)
- add: dlg_proc: added ctl property "focused"
- add: dlg_proc: added form events "on_act", "on_deact"
1.0.205 (app 1.24.1)
- add: dlg_proc: ctl "edit" supports "on_change" (if "act":True)
1.0.204 (app 1.23.5)
- add: ed.get_wrapinfo()
- add: ed.get_prop/ed.set_prop: PROP_SCROLL_VERT, PROP_SCROLL_HORZ
- add: ed.export_html()
- deleted: ed.set_prop: PROP_EXPORT_HTML
1.0.203 (app 1.23.4)
- add: event on_scroll
- add: event on_group
- add: file_open: options can have "/noevent"
1.0.202 (app 1.23.0)
- add: file_open: can open preview-tab
- add: file_open: can open file ignoring history
- change: renamed file_open param "args" to "options"
1.0.201 (app 1.21.0)
- add: several commands in cudatext_cmd.py: simple word jump, goto abs line begin/end
1.0.200 (app 1.20.2)
- add: event on_insert
1.0.199 (app 1.19.2)
- add: toolbar_proc: TOOLBAR_GET_VERTICAL, TOOLBAR_SET_VERTICAL
- add: toolbar_proc: TOOLBAR_GET_WRAP, TOOLBAR_SET_WRAP
- deleted deprecated: TOOLBAR_GET_ICON_SIZES, TOOLBAR_SET_ICON_SIZES, TOOLBAR_ADD_ICON
- deleted deprecated: TREE_ITEM_GET_PROP, TREE_ITEM_GET_PARENT
- deleted deprecated: TREE_ICON_ADD, TREE_ICON_DELETE, TREE_ICON_GET_SIZES, TREE_ICON_SET_SIZES
- deleted deprecated: LEXER_GET_LIST, LEXER_GET_EXT, LEXER_GET_ENABLED, LEXER_GET_COMMENT, LEXER_GET_COMMENT_STREAM, LEXER_GET_COMMENT_LINED, LEXER_GET_LINKS, LEXER_GET_STYLES, LEXER_GET_STYLES_COMMENTS, LEXER_GET_STYLES_STRINGS
1.0.198 (app 1.18.0)
- add: constants WRAP_nnn
- add: ed.set_prop works for PROP_LINE_STATE
1.0.197 (app 1.17.2)
- add: dlg_proc: type "editor"
- add: object ed_goto
- add: objects ed_con_log, ed_con_in
- add: events: on_goto_enter, on_goto_change, on_goto_caret, on_goto_key, on_goto_key_up
- add: ed.get_prop/set_prop: PROP_ONE_LINE
- add: ed.get_prop/set_prop: PROP_LINE_NUMBERS
1.0.196 (app 1.16.2)
- add: ed.dim()
1.0.195 (app 1.15.5)
- add: dlg_proc: control prop "border"
- add: dlg_proc: control "linklabel" uses "on_click" (after opening URL if it's valid)
1.0.194 (app 1.15.4)
- add: app_proc: PROC_SIDEPANEL_ACTIVATE has 2nd param
- add: menu_proc: MENU_GET_PROP
- add: menu_proc: MENU_SET_CAPTION
- add: menu_proc: MENU_SET_VISIBLE
- add: menu_proc: MENU_SET_ENABLED
- add: menu_proc: MENU_SET_CHECKED
- add: menu_proc: MENU_SET_RADIOITEM
- add: menu_proc: MENU_SET_HOTKEY
- add: tree_proc: TREE_ITEM_GET_PROPS
- deprecated: tree_proc: TREE_ITEM_GET_PROP, TREE_ITEM_GET_PARENT
1.0.193 (app 1.15.0)
- add: image_proc()
- deleted: canvas_proc actions: CANVAS_IMAGE, CANVAS_IMAGE_SIZED (use image_proc instead)
- add: dlg_menu can have argument of type list/tuple
- add: dlg_menu can have argument "caption"
1.0.192 (app 1.14.8)
- add: dlg_proc: control type "splitter"
- add: dlg_proc: control prop "align"
- add: dlg_proc: control "listbox_ex" has event "on_draw_item"
- add: listbox_proc: actions LISTBOX_GET_ITEM_H, LISTBOX_SET_ITEM_H
- add: listbox_proc: actions LISTBOX_GET_DRAWN, LISTBOX_SET_DRAWN
- add: event on_paste
- deleted deprecated: event on_panel
- deleted deprecated: menu_proc spec strings: "recents", "enc", "langs", "lexers", "plugins", "themes-ui", "themes-syntax"
1.0.191 (app 1.14.2)
- add: dlg_proc: form property "border"
- add: dlg_proc: control type "button_ex"
- add: button_proc() to work with "button_ex"
1.0.190 (app 1.14.0)
- add: app_proc: PROC_GET_TAB_IMAGELIST
- add: ed.get_prop/ed.set_prop: PROP_TAB_ICON
- add: imagelist_proc: IMAGELIST_PAINT
1.0.189 (app 1.13.1)
- add: tree_proc: TREE_ITEM_SHOW
1.0.188 (app 1.13.0)
- add: imagelist_proc()
- add: tree_proc: TREE_GET_IMAGELIST
- add: toolbar_proc: TOOLBAR_GET_IMAGELIST
- add: lexer_proc: LEXER_GET_LEXERS
- deprecated: tree_proc: TREE_ICON_ADD, TREE_ICON_DELETE, TREE_ICON_GET_SIZES, TREE_ICON_SET_SIZES
- deprecated: toolbar_proc: TOOLBAR_GET_ICON_SIZES, TOOLBAR_SET_ICON_SIZES, TOOLBAR_ADD_ICON
- deprecated: lexer_proc: LEXER_GET_LIST
1.0.187 (app 1.12.2)
- add: lexer_proc: LEXER_GET_PROP
- deprecated: lexer_proc: LEXER_GET_EXT, LEXER_GET_ENABLED, LEXER_GET_COMMENT, LEXER_GET_COMMENT_STREAM, LEXER_GET_COMMENT_LINED, LEXER_GET_LINKS, LEXER_GET_STYLES, LEXER_GET_STYLES_COMMENTS, LEXER_GET_STYLES_STRINGS
- deleted: lexer_proc: actions didnt work, lexer files were not saved: LEXER_SET_*, LEXER_DELETE, LEXER_IMPORT
- add: dlg_proc: control "listview" on_select gets "data" param with 2-tuple
1.0.186 (app 1.12.0)
- add: tree_proc: TREE_ICON_GET_SIZES, TREE_ICON_SET_SIZES
- deleted deprecated: app_proc: PROC_MENU_* actions
- deleted deprecated: app_proc: PROC_SIDEPANEL_ADD, PROC_BOTTOMPANEL_ADD
1.0.185 (app 1.11.0)
- add: menu_proc: for MENU_ADD added param "tag"
- add: menu_proc: MENU_SHOW can use 2-tuple (x,y)
- add: menu_proc: MENU_ENUM gives additional dict key "command"
- deprecated: menu_proc command values: "recents", "enc", "langs", "lexers", "plugins", "themes-ui", "themes-syntax"
1.0.184
- deprecated: app_proc: PROC_SIDEPANEL_ADD, PROC_BOTTOMPANEL_ADD
- deprecated: on_panel event
- add: app_proc: PROC_SIDEPANEL_ADD_DIALOG, PROC_BOTTOMPANEL_ADD_DIALOG
- add: tree_proc: TREE_ITEM_FOLD_LEVEL
- add: tree_proc: TREE_THEME
- add: listbox_proc: LISTBOX_THEME
- add: toolbar_proc: TOOLBAR_THEME
- add: toolbar_proc: used new callback form
- add: menu_proc: MENU_SHOW with command="" to show at cursor
- add: menu_proc: added param "hotkey" for MENU_ADD, "hotkey" returned from MENU_ENUM
- deleted deprecated: PROC_GET_SPLIT, PROC_SET_SPLIT
- deleted deprecated: LOG_GET_LINES
- deleted deprecated: LOG_CONSOLE_GET, LOG_CONSOLE_GET_LOG
1.0.183
- big changes in dlg_proc:
- deleted: parameter 'id_event'
- deleted: props 'callback', 'events'
- form events is not called for controls (forms have own events, controls have own events)
- add: events for forms: 'on_resize', 'on_close', 'on_close_query', 'on_key_down', 'on_key_up'
- add: events for controls: 'on_change', 'on_click', 'on_click_dbl', 'on_select', 'on_menu'
- add: events for 'treeview' control: 'on_fold', 'on_unfold'
1.0.182
- add: dlg_proc: type "toolbar"
- add: app_proc: PROC_PROGRESSBAR
- add: app_proc: PROC_SPLITTER_GET, PROC_SPLITTER_SET
- deprecated: app_proc: PROC_GET_SPLIT, PROC_SET_SPLIT
- add: app_log: LOG_CONSOLE_GET_COMBO_LINES
- add: app_log: LOG_CONSOLE_GET_MEMO_LINES
- add: app_log: LOG_GET_LINES_LIST
- deprecated: app_log: LOG_CONSOLE_GET, LOG_CONSOLE_GET_LOG, LOG_GET_LINES
1.0.181
- add: dlg_proc/dlg_custom: type "pages"
- add: dlg_proc: control "paintbox" has sub-event "on_click" with info="x,y"
1.0.180
- add: app_proc: PROC_SHOW_SIDEBAR_GET, PROC_SHOW_SIDEBAR_SET
- add: app_proc: can pass not only string value
- add: ed.get_prop: PROP_COORDS
- add: dlg_proc/dlg_custom: type "panel", type "group"
- add: dlg_proc: control prop "p" (parent)
1.0.179
- add: dlg_proc/timer_proc/menu_proc: callbacks can be "callable", ie function names
- add: dlg_proc/timer_proc: callbacks can be with "info=...;" at end
- add: menu_proc: can use new-style callbacks like in dlg_proc
- deprecated: menu_proc old-style callbacks "module,method[,param]"
1.0.178
- add: dlg_proc: control props for anchors/spacing: "a_*", "sp_*"
- add: dlg_proc: param "name" to specify controls by name
- add: dlg_proc: form prop "color"
- add: dlg_proc: form prop "autosize"
- add: dlg_proc: actions DLG_DOCK, DLG_UNDOCK
- add: dlg_custom/dlg_proc: control type "paintbox"
1.0.177
- add: ed.replace
- add: ed.replace_lines
- add: dlg_custom: parameter "get_dict" to get new dict result (not tuple)
- add: dlg_proc: action DLG_SCALE
- add: app_proc: action PROC_GET_SYSTEM_PPI
1.0.176
- reworked dlg_proc, many form props+events added
- add: dlg_proc/dlg_custom: control type "treeview"
- add: dlg_proc/dlg_custom: control type "listbox_ex"
- add: dlg_proc/dlg_custom: control type "trackbar"
- add: dlg_proc/dlg_custom: control type "progressbar"
- add: dlg_proc/dlg_custom: control type "progressbar_ex"
- add: dlg_proc/dlg_custom: control type "bevel"
- add: file_open: added param "args"
- add: toolbar_proc: TOOLBAR_GET_CHECKED, TOOLBAR_SET_CHECKED
- delete: event on_dlg
- delete: dlg_proc/dlg_custom: prop "font" (use font_name, font_size, font_color)
- delete: deprecated APIs LOG_PANEL_ADD LOG_PANEL_DELETE LOG_PANEL_FOCUS
- delete: deprecated APIs PROC_TOOLBAR_*
1.0.175
- add: dlg_proc()
- add: dlg_custom: added props x= y= w= h= vis= color= font_name= font_size= font_color=
- add: timer_proc: can also use callbacks "module=nnn;cmd=nnn;" and "module=nnn;func=nnn;"
- add: timer_proc: added param "tag"
- add: menu_proc: actions MENU_CREATE, MENU_SHOW
1.0.174
- add: dlg_custom: "name=" (not required)
- add: dlg_custom: "font="
- add: dlg_custom: "type=filter_listbox", "type=filter_listview". To setup these filter controls, you must set "name=" for filter and its listbox/listview
- add: app_proc: PROC_ENUM_FONTS
1.0.173
- add: dlg_commands()
- add: toolbar_proc()
- deprecated: app_proc actions: PROC_TOOLBAR_*
1.0.172
- add: menu_proc()
- deprecated: app_proc actions: PROC_MENU_*
- change: PROP_ENC now uses short names, see CudaText#Encodings
1.0.171
- add: app_proc: PROC_SIDEPANEL_ADD takes additional value icon_filename
- deprecated: app_log: LOG_PANEL_ADD
- deprecated: app_log: LOG_PANEL_DELETE
- deprecated: app_log: LOG_PANEL_FOCUS
1.0.170
- add: app_proc: PROC_SET_CLIP_ALT
1.0.169
- add: ed.insert does append, if y too big
- add: ed.delete can use too big y2
1.0.168
- add: app_proc: PROC_SAVE_SESSION/ PROC_LOAD_SESSION get bool (session was saved/loaded)
- add: dlg_custom: "label" has prop to right-align
- add: ed.get_prop/set_prop: value can be int/bool too
1.0.167
- add: timer_proc. | http://wiki.freepascal.org/CudaText_API | CC-MAIN-2018-26 | refinedweb | 18,544 | 61.22 |
Problem with DirectStore
Problem with DirectStore
Hello.
I am trying to use Direct b DirectStore in my application, but I have some problems with them.
I write in MVC style and add in my app.js file this:
Code:
Ext.direct.Manager.addProvider({ 'type': 'remoting', 'url': '/places/remoting/router/', 'namespace': 'Recruitant.places', 'actions': { 'city': [ { 'formHandler': false, 'name': 'load_cities', 'len': 1 } ] } }); Recruitant.places.city.load_cities();
Code:
Ext.define('Recruitant.model.City', { extend: 'Ext.data.Model', proxy: { type: 'direct', url: '/places/remoting/router/', directFn: 'Recruitant.places.city.load_cities' }, fields: [ {name: 'id', type: 'int'}, {name: 'name', type: 'string'} ] });
Code:
Ext.define('Recruitant.store.Cities', { extend: 'Ext.data.DirectStore', model: 'Recruitant.model.City' });
Help me to solve this problem, please.
My code works if I create grid, store and other objects in app.js, but it is not work if I write Store, Model and other stuff with Ext.define.
On the client side, remove the quotes around the directFn specification: it needs to be a function, not a strong.
You have:
directFn: 'Recruitant.places.city.load_cities'
You should have:
directFn: Recruitant.places.city.load_cities
@oniseijin - not true, docs say: "The directFn may also be a string reference to the fully qualified name of the function, for example: 'MyApp.company.GetProfile'. This can be useful when using dynamic loading. The string will be looked up when the proxy is created."
I am using it as a string in my application now and it is necessary in case the JS gets loaded out of order, although I have gotten intermittent errors about the directFn is null whether I use the string or function symbol formatCh: | http://www.sencha.com/forum/showthread.php?151062-Problem-with-DirectStore&p=660838&viewfull=1 | CC-MAIN-2014-52 | refinedweb | 270 | 52.36 |
Visual C++ is one of the most used C++ IDEs which provides many interesting features to developers, each new version brings new major features and many extensions are available to add more nice features to it.In this post, I will talk about some useful features provided by CppDepend.
Visualize your Code Base
Dependency Graph
The dependency graph is the most common used graph to represent the dependencies between code elements, Visual Studio provides an interesting one. And the Graph provided by CppDepend is more oriented code quality, indeed the box size as the Edge thickness could be proportional to one of many metrics available.
Dependency Matrix
The DSM (Dependency Structure Matrix) is a compact way to represent and navigate across dependencies between components. DSM is used to represent the same information than a graph, but it’s more practical if:
- Many code elements are involved.
- You need to discover the dependency weight between elements.
- Discover easily dependency cycles between code elements.
Treemap
In the Metric View, the code base is represented through a
Treemap. Treemapping is a method for displaying tree-structured data by using nested rectangles. The tree structure used is the usual code hierarchy:
- C/C++ projects contain namespaces
- Namespaces contain types
- Types contains methods and fields
The rectangle size is proportional to the metric you choose, by default it’s the
NbLinesOfCode. And the Metric View can display a second metric by coloring the treemap elements. Hence two code metrics can be displayed at once.
For example, in this treemap, the rectangle size corresponds to the lines of code and the color to the cyclomatic complexity, you can visually detect where are the most complex method and the biggest ones.
Metrics Info
Many metrics are available for projects, namespaces, classes, methods and fields. The info view report some of these metrics which is very helpful to check in dev the quality of our implementations.
Define Rules
CQLinq is maybe the most powerful feature provided by CppDepend, we can easily define some advanced queries to search for a specific code elements. The CQLinq editor has many nice features to help us create our custom rules and the result is calculated in real time.
Get Diagnostics
With CppDepend, you can easily get all Clang, Cppcheck and Vera++ issues. You can plug other tools using the API.
Trend Charts
Trend chart is a useful feature to explore the evolution of a metric after some code changes. Many charts are provided by default and you can easily create your own charts.
Advanced Search
Sometimes, it’s useful to search methods by their code size, cyclomatic complexity, comment percent, coupling,…
The CppDepend provides this feature and you can search for code elements using many metrics.
Conclusion
CppDepend could help you to better understand your code base and detect some implementation and design flaws. However, as mentioned before, its feature richness makes it not easily accessible after a first use, you have to spend some hours to master some of its capabilities. | https://cppdepend.com/blog/?p=264 | CC-MAIN-2019-47 | refinedweb | 500 | 51.38 |
.
#import pandas library import pandas as pd #read data into DataFrame df = pd.read_excel('Financial Sample.xlsx') #visualise first 5 rows - different numbers can be placed within the parenthesis to display different numbers of rows - the default is 5 df.head()
This will look like the following:
So we can see we have “Excel like” data, presented in rows and columns.
If we wanted to sum the “Profit” column in Excel, we would enter a formula into a cell – something along the lines of “=SUM(L2:L30)” to sum up the total for column L, from rows 2 to 30.
With Pandas, we can just write:
total_profit = df['Profit'].sum() print(total_profit)
This gets us:
16893702.26
If we want to add a row at the bottom of the DataFrame that sums up a few select columns, this can be achieved as follows; first we need to create a Series that holds the
data fopr the sum of the rows in question.
sum_row = df[['Gross Sales','Sales','COGS','Profit']].sum()
If we print out the variable “sum_row” it appears as such:
Gross Sales 1.279316e+08 Sales 1.187264e+08 COGS 1.018326e+08 Profit 1.689370e+07 dtype: float64
Before we can append this to the bottom of our main DataFrame, we need to transpose the data and turn it into a DataFrame object.
sum_row = pd.DataFrame(sum_row).T
If displayed, “sum_row” would now look like this:
The final thing we need to do before adding the totals back is to add the missing columns. We use reindex to do this for us. The trick is to add all of our
columns and then allow pandas to fill in the values that are missing.
sum_row=sum_row.reindex(columns=df.columns)
It would now look like this:
The final step is to now append it to the main DataFrame, and show the last 5 rows to make sure it has worked properly:
df_final = df.append(sum_row) df_final.tail()
Great we can see it has worked as required and we now have a final row that shows the sums that we requested. If you wanted rto quickly sent the output back into either
“.csv” or .”xlsx” format you can just run one of the following:
df_final.to_csv('df_final.csv') df_final.to_excel('df_final.xlsx')
That will output the relevant file in the current working directory. if you want the output to appear somewhere else, you can just specify the full file path instead of
just the name of the file.
So it might currently seem like using Pandas is a bit of overkill, and why wouldn’t you just use an excel sum formula instead of the 5-6 step process we have outlined above.
Well that’s not a bad question – the answer will become evident as we investigate how to undertake more and more complex data manipulation operations. You will see that what takes multiple, convoluted
steps in Excel can actually be replicated in Python and Pandas in a much more simple format. Let’s take the next step, and replicate anm excel pivot table where we are looking to sum and group by
the month and year, for various data columns.
profit_per_month = pd.groupby(df[['Gross Sales','COGS','Profit','Month Name']],'Month Name').sum()
This will create a table which sums the ‘Gross Sales’,’COGS’,’Profit’ columns and groups them by “Month Name”.
This is great and all but you may have noticed that when we group by just “Month Name”, it doesn’t distinguish by year, and groups together the same month, across multiple years – this is
usually not exactly what we want. Usually we would like to group by month AND year. To achieve this we need tgo create a new column which concatenates together the “Month Name” column and
the “Year” column.
df['Month Year'] = df['Month Name'] + " " + df['Year'].map(str)
Because the “Month Name” is already a string, and “Year” in an integer, we need to change the datatype from an integer to a string, in order for the string concatenation to work.
That’s what the “.map(str)” part of the code does.
We can now use the “groupby” method to create a pivot table style output that sorts by month AND year.
profit_per_month = df[['Gross Sales','COGS','Profit','Month Year']].groupby('Month Year').sum()
This looks like the following:
Ok, we’re getting there, but the first column showing our month and year doesn’t seem to be in the right chronological order. That’s a problem!
The reason for this is that the Month and Year columns (which is currently or DataFrame “index”), isn’t of type “datetime”. It currently exists just as datatype “object”.
We can see this by running the code:
profit_per_month.index
Which gets us:
Index(['April 2014', 'August 2014', 'December 2013', 'December 2014', 'February 2014', 'January 2014', 'July 2014', 'June 2014', 'March 2014', 'May 2014', 'November 2013', 'November 2014', 'October 2013', 'October 2014', 'September 2013', 'September 2014'], dtype='object', name='Month Year')
There you can see at the bottom “dtype=’object'”. We need to convert this to a “datetime” object so that it can be auto sorted correctly by Pandas.
profit_per_month.index = pd.to_datetime(profit_per_month.index, format="%B %Y")
The format %B refers to the month name, and the %Y refers to the year. This tells Pandas what format to expect our “dates” in so that it can correctly carry out the convcersion. The full list
of formats can be found here.
Now if we call
profit_per_month.index
We can see that the datatype has changed to “‘datetime64[ns]'”
DatetimeIndex(['2014-04-01', '2014-08-01', '2013-12-01', '2014-12-01', '2014-02-01', '2014-01-01', '2014-07-01', '2014-06-01', '2014-03-01', '2014-05-01', '2013-11-01', '2014-11-01', '2013-10-01', '2014-10-01', '2013-09-01', '2014-09-01'], dtype='datetime64[ns]', name='Month Year', freq=None)
And a call to:
profit_per_month.sort_index(inplace=True) profit_per_month
now shows the correct sorting of the date columkn in chronological order.
Ooh…so close! The date columns now seems to have changed back to a “YYYY-MM-DD” format. If you’re happy with thatm then fantastic, we’re done. If you would prefer it
in a more “easy to read” format then we can do something along the folllowing lines.
profit_per_month.index = profit_per_month.index.strftime("%B %Y") profit_per_month
The fact that our pivot table index is now in chronological order means that we are now able to create some plots of our data with minimal code.
We can create a line chart of the data as folows:
profit_per_month.plot(figsize=(12,7))
Or we can change this to a bar chart very simply:
profit_per_month.plot(kind='bar',figsize=(12,7))
Although we used the “groupby” method above to create a “pivot table like” output, Pandas does actually have a dedicated “pivot_table” method. If for example we wanted to see the
average Sale Price by Country and Product, we can do so as follows:
df.pivot_table(values="Sale Price",index="Country",columns="Product",margins=True)
The “margin=True” adds subtotals to the rows and columns of the table. The output can be rounded down to show a certain number of decimal places (in this case 2) by adding “.round(2)” to the end
df.pivot_table(values="Sale Price",index="Country",columns="Product",margins=True).round(2)
Filtering DataFrames can also be done very easily – assume we wish to filter out results just to show data from which the country is Canada. We can do this a couple of ways – we can do it in
one line of code which will imediately output the filtered data.
df[df['Country'] == "Canada"]
Or we can do it in a couple of steps, firstly by setting up the filter and thjn applying the filter to the data.
#set up filter df_canada = df['Country'] == "Canada" #Apply filter to data df[df_canada]
This allows us to set up multiple filters and then apply them all at once to the data – below we add the filter than product must be “Montana” and then we apply borth filters to the data.
#set up filters df_canada = df['Country'] == "Canada" df_montana = df['Product'] == "Montana" #Apply filters to data df[df_canada & df_montana]
Technically this could also be done in one line of code, but you may find it a bit neater and easier to track what you are doing if you do it step by step – to each their own.
If you did want to do it in one line of code:
df[(df['Country'] == "Canada") & (df['Product'] == "Montana")]
To filter by multiple values within a single column, you can use the “isin” method – say we want data where the country is either Canada or France:
df[df["Country"].isin(["Canada","France"])]
And just for completeness, we can also use the same logic to apply multiple filters to multiple columns at once – say we wish to filter by country as either Canada or France,
and Product as either Montana or Paseo:
df[(df["Country"].isin(["Canada","France"])) &; (df["Product"].isin(["Montana","Paseo"]))]
Another thing we can do with Pandas is apply conditional formatting – say for example we wish to highlight the data in the filtered data we just created above – we wish to
highlight in red any value in the “Units Sold” column under 1000. We first need to create a simple function and then apply that to the DataFrame style.
def colour_red(val): color = 'red' if val < 1000 else 'black' return 'color: %s' % color #filter DataFrame df_canada = df['Country'] == "Canada" df_montana = df['Product'] == "Montana" df_filtered = df[df_canada & df_montana] #apply style to "Units Sold" column df_filtered.style.applymap(colour_red, subset=pd.IndexSlice[:, ['Units Sold']])
You can also do some prett snazzy other formatting styles – here’s just one example below – the full details of what can be achieved can be found here.
df_filtered.style.bar( color='#d65f5f')
So hopefully you’ll agree, that while Pandas may indeed have a steeper learning curve than Excel, once a certain level of proficiency is reached, the possibilities really are fantastic.
There’s PLENTY more functionality offered by Pandas that I havn’t mentioned 0 in fact I’ve barely scratched the surface so do feel free to investigate the module further! If you have any specific
qiuestions on how to replicate particular excel functionality, please do ask and i will do my best to show you how it is done.
Although I found that your post packed a lot of useful information for beginners, pd.groupby is depracted and should not be used anymore.
Hi there, many thanks for pointing that out – it had slipped my attention!
The post has been updated to the new, non-deprecated format of “groupby()”.
Appreciate you leaving a comment to let me know! | https://www.pythonforfinance.net/2018/06/30/replicating-excel-functionality-in-pandas/ | CC-MAIN-2022-27 | refinedweb | 1,804 | 59.74 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.