Sorting is arranging data in ascending or descending order. Sorted data makes searching easier: for example, contacts in a mobile phone are sorted alphabetically, which helps us quickly find anyone's number.
Scala uses TimSort, which is a hybrid of Merge Sort and Insertion Sort.
Scala provides three sorting methods.
sorted
Here is its signature:
def sorted[B >: A](implicit ord: Ordering[B]): Repr
The sorted function sorts Scala sequences such as List, Array, Vector, and Seq. It returns a new collection sorted by the natural ordering of its elements.
Here is a small example of sorted with a Seq.
If you want to sort in descending order, use this signature: Seq.sorted(Ordering.DataType.reverse)
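A minimal sketch of both directions (the values here are made up for illustration):

```scala
val numbers = Seq(3, 1, 4, 1, 5)

// Ascending (natural) order
val ascending = numbers.sorted                        // Seq(1, 1, 3, 4, 5)

// Descending order via a reversed Ordering
val descending = numbers.sorted(Ordering.Int.reverse) // Seq(5, 4, 3, 1, 1)
```

Note that sorted never mutates the original collection; it always returns a new one.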
If you want to sort by an attribute of a case class using the sorted method, the case class needs to extend the Ordered trait and override its abstract compare method. Inside compare, we define which attribute the objects of the case class are compared on.
Here is an example that sorts on the name attribute of a case class.
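A sketch of what this looks like (the Employee class and its data are illustrative):

```scala
case class Employee(name: String, salary: Int) extends Ordered[Employee] {
  // Natural ordering is driven by the name attribute
  override def compare(that: Employee): Int = this.name.compare(that.name)
}

val staff = Seq(Employee("Charlie", 50000), Employee("Alice", 60000))
staff.sorted  // Seq(Employee("Alice", 60000), Employee("Charlie", 50000))
```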
If you do not extend the Ordered trait and try to sort a case class, the compiler does not know which attribute to sort on, so it reports a compile-time error.
sortBy(attribute)
Here is its signature:
def sortBy[B](f: A => B)(implicit ord: Ordering[B]): Repr
The sortBy function is used to sort by one or more attributes.
Here is a small example that sorts by a single attribute of the case class.
To sort in descending order by salary:
When the sort is based on multiple attributes, it sorts by the first attribute; if more than one value of the first attribute is the same, it falls back to the second attribute, and so on.
We can sort a list of tuples by their second element using sortBy; similarly, we can sort a list of tuples by their first element.
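The cases above can be sketched as follows (the case class and values are illustrative):

```scala
case class Employee(name: String, salary: Int)
val staff = List(Employee("Bob", 40000), Employee("Alice", 60000), Employee("Bob", 30000))

// Single attribute
staff.sortBy(_.name)

// Descending by salary (negate the key)
staff.sortBy(-_.salary)

// Multiple attributes: name first, salary breaks ties
staff.sortBy(e => (e.name, e.salary))

// Tuples: by second element, then by first
val pairs = List(("b", 2), ("a", 3), ("c", 1))
pairs.sortBy(_._2)  // List(("c", 1), ("b", 2), ("a", 3))
pairs.sortBy(_._1)  // List(("a", 3), ("b", 2), ("c", 1))
```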
sortWith(function)
Here is its signature:
def sortWith(lt: (A, A) => Boolean): Repr
The sortWith function sorts the sequence according to a comparison function. It takes a comparator function and sorts according to it, so you can provide your own custom comparison logic.
Here is a small example; you can also sort using your own function.
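For instance (values illustrative):

```scala
val numbers = List(3, 1, 4, 1, 5)

numbers.sortWith(_ < _)  // ascending: List(1, 1, 3, 4, 5)
numbers.sortWith(_ > _)  // descending: List(5, 4, 3, 1, 1)

// A custom comparator on a case class attribute
case class Employee(name: String, salary: Int)
val staff = List(Employee("A", 60000), Employee("B", 40000))
staff.sortWith(_.salary < _.salary)  // B first, then A
```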
Please comment if you have any doubts or suggestions.
thank you 🙂
4 thoughts on "Sorting in Scala using sorted, sortBy and sortWith function"
Good explanation buddy. To compare with Java's Comparable and Comparator: SortWith is just like Comparator (in this case AgeComparator or SalaryComparator, etc.), while the remaining two, Sorted and SortBy, look like Comparable (Ordering in the case of Scala).
According to the Java 7 API docs, Arrays#sort() for object arrays now uses TimSort, which is a hybrid of MergeSort and InsertionSort.
On the other hand, Arrays#sort() for primitive arrays now uses Dual-Pivot Quicksort.
Thanks for the comment, I agree with your points.
High-performance webpack config for front-end delivery
More and more developers are using webpack for easy bundling, but even in 2017, many websites still don’t take advantage of the biggest performance boosts webpack has to offer. webpack has access to more staggeringly-powerful built-in optimizations than you may be aware of. Are you utilizing all of it?
In this article, we’ll cover 7 easy webpack optimizations that will serve your app faster to users, no matter what you’re using (and an additional fallback option if dynamic imports isn’t available). In one example app, starting from no optimization, we were able to compress our entry JS by 700%, and leverage intelligent caching for snappy refreshes.
The best part? You can get a faster-loading site in only 15 minutes implementing a few of these tips. You can realistically implement all these tips in about 1 hour.
What this article covers (more 🚀 === more speed):
- 🕙 1 min: Scope Hoisting (✨ new in webpack 3 ✨)
- 🕙 2 min: Minification and Uglification with UglifyJS2 🚀🚀🚀
- 🕙 15 min+: Dynamic Imports for Lazy-loaded Modules 🚀🚀🚀🚀🚀
- 🕙 5 min: Deterministic Hashes for better caching 🚀🚀
- 🕙 10 min: CommonsChunkPlugin for deduplication and vendor caching 🚀🚀
- 🕙 2 min: Offline Plugin for webpack 🚀🚀
- 🕙 10 min: webpack Bundle Analyzer 🚀🚀🚀
- 🕙 2 min: Multi-entry Automatic CommonsChunkPlugin for special cases where dynamic import isn’t possible 🚀
This article assumes you’re using webpack 3.x (3.x boasts a 98% seamless upgrade from 2.x, and should be a painless upgrade). If you’re still on 1.x, you should upgrade to take advantage of automatic tree-shaking and dead code elimination!
🖥 Code Examples
Working examples of all of these concepts can be found in this webpack optimization sample repo. There will also be links in each section. This is cumulative whenever possible, so features added in one step will be carried over to the next.
1. Scope Hoisting
Est. time: 🕙 1 min
Est. boost: 🤷
webpack 3, released in June 2017, was released with many under-the-hood improvements as well as a more optimized compiling mode called “scope hoisting” that has been saving some people precious kB. To enable scope hoisting, add the following to your production webpack config:
const webpack = require('webpack'); module.exports = { plugins: [ new webpack.optimize.ModuleConcatenationPlugin(), ], };
Jeremy Gayed @tizmagik: 70K => 37K (gzip!) savings on our main bundle using #Webpack 3 RC.2 + ModuleConcatenationPlugin 😲 🔥 Awesome work @TheLarkInn @wSokra et al!
Some have reported a reduction of almost 50% in bundle size, but, in example 1, I didn’t notice any change (it was technically a 6 byte gain for me, which is insignificant). From what it seems, scope hoisting yields the greatest potential for legacy apps with hundreds of modules and not for stripped-down example projects like my test. There seems to be no real downside to using it, so I’d still recommend adding it if it doesn’t raise any errors. The release blog post explains this feature further.
Aug 2017 update: further improvements have been added to scope hoisting. This is a feature the webpack core team is serious about, and working to improve even further.
Result: 🤷 It’s a simple add, with no downsides, and potential future payoff. Why not?
🖥 Example 1: Scope hoisting (view code)
2. Minification and Uglification
Est. time: 🕙 2 min
Est. boost: 🚀🚀🚀
Uglification has been a close companion to optimization, and it’s never been easier to take advantage of than with webpack. It’s built right in! Though it’s one of the most accessible optimizations, it’s very easy for a developer in a rush to accidentally forget to uglify when sending assets to production. I personally have seen more than one webpack-bundled site with un-uglified code in production. So in our optimization journey, this is the first check to make sure is in place.
The wrong way
If you’re new to uglification, the main thing to understand is what a difference it makes in file size. Here’s what happens when we run
webpack (no flags) command on our example base app:
webpack
Asset Size Chunks Chunk Names index.bundle.js 2.46 MB 0 [emitted] [big] index
The right way
2.46MB. Ouch! If I inspect that file, it’s full of spaces, line breaks, and gratuitous comments—all things that don’t need to make it to production. In order to fix this, all that’s needed is a simple
-p flag:
webpack -p
Let’s see how that
-p flag affects our build:
Asset Size Chunks Chunk Names index.bundle.js 1.02 MB 0 [emitted] [big] index
1.02MB. That’s a 60% reduction in size, and I didn’t change any code; I only had to type 2 extra keyboard characters! webpack’s
-p flag is short for “production” and enables minification and uglification, as well as provides quick enabling of various production improvements.
📝 Note
Despite what it may seem, the
-p flag does not set Node’s environment variable to
production. When running on your machine, you have to run
NODE_ENV=production PLATFORM=web webpack -p
It’s worth mentioning that some libraries (React and Vue, to name two) are designed to drop development and test features when bundled in
production, resulting in smaller file sizes and faster run times. If you run this on the example project, the index bundle actually eeks out at
983 kB rather than
1.06MB. On some other setups, though, there may be no difference in size—it’s up to the libraries being used.
💁 Tip
For quick builds, add a
"scripts" block to
package.json so either you or your server can run
npm run build as a shortcut:
"scripts": { "build": "webpack -p" },
Advanced uglification
Uglify’s default setup is good enough for most projects and most people, but if you’re looking to squeeze every little drop of unnecessary code out of your bundles, add a
webpack.optimize.UglifyJsPlugin to your production webpack config:
plugins:[ new webpack.optimize.UglifyJsPlugin({/* options here */}), ],
For a full list of UglifyJS2 settings, the docs are the most up-to-date references.
⚠️🐌 Build Warning
If you accidentally enable uglification in development, this will significantly slow down webpack build times. It’s best to leave this setting in production-only (see this article for instructions on how to set up production webpack settings).
Result: In our example, we shaved off 60% of file size with default uglification & minification. Not bad!
🖥 Example: No example, but you can try running
webpack -p on the base app.
3. Dynamic Imports for Lazy-loaded Modules
Est. time: 🕙 15 min+
Est. boost: 🚀🚀🚀🚀🚀
Dynamic importing is the Crown Jewel of front-end development. The Holy Grail. The Lost Ark. The Temple of Doom —er, scratch that last one; I got caught up naming Indiana Jones movies.
Whatever Harrison Ford movie you compare it to, dynamic, or lazy-loaded imports is a huge deal because it effortlessly achieves one of the central goals of front-end development: load things only when they’re needed, neither sooner, nor later.
In the words of Scott Jehl, “more weight doesn’t mean more wait.” How you deliver your code to users (lazy-loaded) is more important than the sum total code weight.
Let’s measure its impact. This is
webpack -p on our starting app:
Asset Size Chunks Chunk Names index.bundle.js 1.02 MB 0 [emitted] [big] index
We have
1.02MB of entry JS, which isn’t insignificant. But the crucial problem here is NOT the size. The problem is it’s delivered as one file. That’s bad because all your users must download the whole bundle before they see a thing on screen. We certainly can do better, breaking that file up and allowing paint to happen sooner.
Dynamic import, step 1: Babel setup
Babel and the Dynamic Import Plugin are both requirements to get this working. If you’re not using Babel in your app already, you’ll need it for this entire feature. For first time setups, install Babel:
yarn add babel-loader babel-core babel-preset-env
and update your
webpack.config.js file to allow Babel to handle your JS files:
module: { rules: [ { test: /\.js$/, use: 'babel-loader', exclude: /node_modules/, }, ], },
Once that’s set up, to get dynamic imports working, install the Babel plugin:
yarn add babel-plugin-syntax-dynamic-import
and then enable the plugin by modifying or creating a
.babelrc file in your project root:
{ "presets": ["env"], "plugins": ["syntax-dynamic-import", "transform-react-jsx"] }
Some prefer to add Babel options within the webpack config so that it's all in the same file. I simply prefer the separate
.babelrc file for a cleaner, reduced webpack config. Either way works.
💁 Tip
In case you’re used to seeing
es2015 as the Babel preset rather than
env, consider switching.
env is a simpler config that can automatically transpile features based on your
browserslist (familiar to users of Autoprefixer for CSS).
Dynamic import, step 2: import()
After we’ve got Babel set up, we’ll tell our app what and where we want to lazy-load. That’s as simple as replacing:
import Home from './components/Home';
with
const Home = import('./components/Home');
Doesn’t look much different, does it? If you’re ever seen
require.ensure before, that has now become deprecated in favor of
import().
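For reference, the old pattern and its replacement look roughly like this (the module path is illustrative):

```js
// webpack 1.x style (now deprecated)
require.ensure([], (require) => {
  const Home = require('./components/Home');
});

// Modern style: import() returns a Promise that resolves to the module
import('./components/Home').then((module) => {
  const Home = module.default;
});
```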
Some frameworks like Vue already support this out-of-the-box. However, since our example app is using React, we'll need to add a tiny container called react-code-splitting. This saves us ~7 lines of boilerplate React code per import and deals with the rendering lifecycle update for us. But it's nothing more than syntactic sugar, and the core functionality is 100% webpack
import(). This is what our app now looks like:
import React from 'react'; import Async from 'react-code-splitting'; const Nav = () => (<Async load={import('./components/Nav')} />); const Home = () => (<Async load={import('./views/home')} />); const Countdown = () => (<Async load={import('./views/countdown')} />);
Because webpack turns every
import() into a dynamic code split, now let’s see how that affected
webpack -p:
0.bundle.js 222 kB 0 [emitted] 1.bundle.js 533 kB 1 [emitted] [big] 2.bundle.js 1.41 kB 2 [emitted] index.bundle.js 229 kB 3 [emitted] index
It reduced our core entry
index.bundle.js file from
1.06MB to
229kB, an 80% reduction in size! This is significant because it’s our entry file. Before, painting couldn’t happen until that
1.06MB downloaded completed, parsed, and executed. Now, we only need 20% of the original code to start. And this applies for every page on the site! This doesn’t technically translate to a 5× faster paint—calculating is more complicated than that—but it’s nonetheless an amazing speed boost with little time investment.
Surely this can’t be it, there has to be more setup! you may be thinking. You’d be wrong!
In our example,
index.bundle.js entry file didn’t change its name, so that’s still the only
<script> tag needed. webpack handles all the rest for us (though you may run into a polyfill issue if you need to support a browser that doesn’t support
Promise)!
Sean Larkinn @TheLarkInn: "Modern UI libraries have code splitting support." @vuejs: "Hold my beer..." #VueJS #vueconf #javascript #webpack #reactjs
Result: We have a paint-able app at only 20% of the original bundle size. Dynamic imports are arguably the single best optimization you can make on the front end, provided you're using client-side routing for maximum effectiveness.
🖥 Example 3: Dynamic Import (view code)
4. Deterministic Hashes for Caching
Est. time: 🕙 5 min
Est. boost: 🚀🚀
Caching is solely a benefit to returning users and doesn’t affect that critical first experience. By default, webpack doesn’t generate hashed file names (e.g.:
app.8087f8d9fed812132141.js), which means everything stays cached and your updates may not be making it to users. This can break the experience and cause frustration.
The quickest way to add hashes in webpack is:
output: { filename: '[name].[hash].js', },
But there’s a catch: these hashes regenerate on every build, whether the file contents have changed or not. If you’ve hooked up your app to automatically run
webpack -p on deploy (which is a great idea), this means every deploy users will have to download all your webpack assets over again, even if you didn’t change a line of code.
We can do better with deterministic hashes, hashes that only change if the file changes.
⚠️🐌 Build Warning
Deterministic hashes will slow down your build times. They’re a great idea for every app, but this just means this config should reside in your production webpack config only.
To start, run
yarn add chunk-manifest-webpack-plugin webpack-chunk-hash to add the proper plugins. Then add this to your production config:
const webpack = require('webpack'); const ChunkManifestPlugin = require('chunk-manifest-webpack-plugin'); const WebpackChunkHash = require('webpack-chunk-hash'); const HtmlWebpackPlugin = require('html-webpack-plugin'); /* Shared Dev & Production */ const config = { /* … our webpack config up until now */ plugins: [ // /* other plugins here */ // // /* Uncomment to enable automatic HTML generation */ // new HtmlWebpackPlugin({ // inlineManifestWebpackName: 'webpackManifest', // template: require('html-webpack-template'), // }), ], }; /* Production */ if (process.env.NODE_ENV === 'production') { config.output.filename = '[name].[chunkhash].js'; config.plugins = [ ...config.plugins, // ES6 array destructuring, available in Node 5+ new webpack.HashedModuleIdsPlugin(), new WebpackChunkHash(), new ChunkManifestPlugin({ filename: 'chunk-manifest.json', manifestVariable: 'webpackManifest', inlineManifest: true, }), ]; } module.exports = config;
Our
process.env.NODE_ENV === 'production' conditional will now only apply if we’re in production.
💁 Tip
In the above example, running
yarn add html-webpack-plugin html-webpack-template, and uncommenting the commented-out plugin, will get webpack to auto-generate HTML for you. This is great if you’re using a JS library like React to generate markup for you. You can even customize the template if needed.
Running
webpack -p, you’ll notice a
chunk-manifest.json file that needs to be inlined in the
<head> of your document. If you’re not using webpack’s HTML plugin, you’ll need to do that manually:
<head> <script> //<![CDATA[ window.webpackManifest = { /* contents of chunk-manifest.json */ }; //]]> </script> </head>
There’s also a
manifest.js file that will need to be added via a
<script> tag as well. Once both are in there, you should be good to go!
Result: Users now get updates as soon as we push them, but only if the file has changed its contents. Caching solved!
🖥 Example 4: Caching with Deterministic Hashes (view code)
5. CommonsChunkPlugin for Vendor Caching
Est. time: 🕙 10 min
Est. boost: 🚀🚀
We’ve taken great care to cache our webpack assets, but let’s take it even further and cache our vendor bundles so users don’t have to download the entire entry file again if we change a single line of code. In order to do that, let’s add a
vendor entry item to store our third-party libraries:
module.exports = { entry: { app: './app.js', vendor: ['react', 'react-dom', 'react-router'], }, };
When we run
webpack -p on it…
Asset Size Chunks Chunk Names index.bundle.js 230 kB 3 [emitted] index vendor.bundle.js 173 kB 4 [emitted] vendor
Unfortunately our index file is bigger than it should be, and the culprit is webpack’s bundling React, ReactDOM, and React router in both
index.bundle.js and
vendor.bundle.js.
webpack isn’t to blame, though, as it did exactly what we asked it to. When you specify an entry file, you’re telling webpack you want each output file to be independent and complete. webpack assumed you will be serving either one or the other, not both at once.
However, we will be serving both at once, which will require just a bit more config. We’ll have to add CommonsChunkPlugin to
plugins:
const webpack = require('webpack'); plugins: [ new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', }), ],
CommonsChunkPlugin is now enabled, and knows to use the
vendor entry point as the base for the CommonsChunk. You may have seen a
minChunks setting on this plugin that we can omit here because we’ve already told webpack exactly what was going in this bundle.
💁 Tip
CommonsChunkPlugin’s
name must match the name of the
vendor entry file, otherwise we’re back to square one and duplicating vendor libraries across all entry files.
With all this in place, let’s run
webpack -p again on the same app:
Asset Size Chunks Chunk Names index.bundle.js 55.7 kB 3 [emitted] index vendor.bundle.js 174 kB 4 [emitted] vendor
Wouldn’t you know it? Our index file has dropped in size by the approximate size of our vendor bundle:
174 kB. We’re no longer duplicating code, however, now we must load
vendor.bundle.js first on every page before
index and it is now a dependency wherever
index is required:
<!-- vendor comes first! --> <script src="vendor.bundle.js"></script> <script src="index.bundle.js"></script>
Now whenever you update your app code, users will only have to redownload that
55.7 kB entry file, rather than all
174 kB. This is a solid win on any app setup.
💁 Tip
When picking modules for the
vendor bundle:
- Limit it to only modules the entire app uses
- Also limit it to less-frequently-updated dependencies (remember: if one vendor lib updates, the whole bundle will re-download)
- Only load commonly-used submodules. For example, if the app uses 'rxjs/Observable' frequently, but 'rxjs/Scheduler' rarely, then only load the former. And whatever you do, don't load all of 'rxjs'!
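As a sketch of that last tip (assuming RxJS 5-style module paths):

```js
// Good: pulls in only the Observable submodule
import { Observable } from 'rxjs/Observable';

// Bad: drags the entire RxJS library into the vendor bundle
// import Rx from 'rxjs';
```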
Result: Like any caching effort, this caters mostly to returning visitors. If you have a frequently-referenced site or service, this is absolutely essential.
🖥 Example 5: Commons Chunk Plugin (view code)
6. Offline Plugin for webpack
Est. time: 🕙 2 min
Est. boost: 🚀🚀
Have you ever visited a site on your mobile phone when you were on spotty data, and you either accidentally triggered a refresh or the site itself triggered a refresh? Much frustration could have been avoided if the site that had already fully loaded had cached itself better. Fortunately, there’s a webpack plugin that’s become a staple in the PWA community: OfflinePlugin. By only adding a couple lines to your webpack config, now your site users can view your website while offline!
Install it:
yarn add offline-plugin
Add it to your webpack config:
const OfflinePlugin = require('offline-plugin'); module.exports = { entry: { // Adding to vendor recommended, but optional vendor: ['offline-plugin/runtime', /* … */], }, plugins: [ new OfflinePlugin({ AppCache: false, ServiceWorker: { events: true }, }), ], };
And, somewhere in your app (preferably in your entry file, before rendering code):
/* index.js */ if (process.env.NODE_ENV === 'production') { const runtime = require('offline-plugin/runtime'); runtime.install({ onUpdateReady() { runtime.applyUpdate(); }, onUpdated() { window.location.reload(); }, }); }
Overall, it’s a simple addition to any app that can result in a significantly increased user experience for users like subway riders rapidly dropping in and out of service. For more information about the benefits, and how to configure it better for your setup, see the documentation.
Result: We checked off that pesky “Respond 200 when offline” requirement in Lighthouse in only minutes.
🖥 Example 6: Offline Plugin (view code)
7. webpack Bundle Analyzer
Est. time: 🕙 10 min
Est. boost: 🚀🚀🚀
Out of all the options we’ll cover for optimizing your build, this is by far the least automatic, but also helps catch careless mistakes that the automatic optimizations will skip over. In some regard, you start optimizing here because how else can you optimize your bundle if you don’t understand it? To add webpack Bundle Analyzer, run
yarn add --dev webpack-bundle-analyzer in your repo, and add it to your development webpack config only:
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin; config = { /* shared webpack config */ }; if (process.env.NODE_ENV !== 'production' && process.env.NODE_ENV !== 'test') { config.plugins = [ ...config.plugins, new BundleAnalyzerPlugin(), ]; }
Pay attention to the
.BundleAnalyzerPlugin property at the end of
require('webpack-bundle-analyzer').BundleAnalyzerPlugin—this is a pretty unique require.
Run the following to generate stats and view the bundle analyzer at
localhost:8888:
node_modules/.bin/webpack --profile --json > stats.json
Here, you can see the breakdown of your entire app, byte-by-byte. Look closely at the moment section above from example 7. There are quite a few languages bundled! All of them, to be exact. While internationalization is a wonderful thing, we're not ready for it yet in our sample app, so there's no good reason we should serve unused languages to the client.
Asset Size Chunks Chunk Names 0.cc206a4187c30a32c54e.js 224 kB 0 [emitted]
We’re using dynamic importing, which is great, but we’re still at
224 kB for our moment chunk. Researching a little bit, I found this solution that allowed me to only use the locales I needed.
According to the bundle analyzer, it’s looking a lot smaller! But let’s see how our final bundle performed:
Asset Size Chunks Chunk Names 0.4108c847bef03ae9e840.js 62.7 kB 0 [emitted]
It’s down by
161 kB. That’s quite a bit of savings! Had we never run the webpack bundle analyzer, we might have never noticed all that bloat in our app, and simply accepted it as dependency weight. You may be surprised how much one simple library swap or one reconfigured webpack line could save on your bundle size!
With webpack Bundle Analyzer, you get some great hints on where to start looking for optimization opportunities. Start at the biggest modules first and work your way down, seeing if there’s anything you can optimize along the way. Can you cherry-pick (e.g., can you use
require('rxjs/Observable') instead of
require('rxjs'))? Can you replace large libraries with smaller ones (e.g., swap React with Preact)? Are there any modules you can drop entirely? Asking questions like these can often have big payoffs.
Result: We discovered a pretty glaring bloat in our app, and were able to save
161 kB on a chunk request. Definitely worth our time.
🖥 Example 7: webpack Bundle Analyzer (view code)
8. Multi-entry Automatic CommonsChunk Plugin
Est. time: 🕙 2 min
Est. boost: 🚀
The last optimization we’ll cover is a technique from the early days of webpack that in my opinion isn’t as needed as it once was (if you disagree, please comment—I’d love to hear how you’re using it). This option should only be pursued if your app meets all the following:
- Contains many, many entry bundles across the whole app
- Can’t take advantage of dynamic imports
- The amount of proprietary code far, far outweighs NPM libraries AND it’s split into ES6 modules
If your app doesn’t meet all these criteria, I’d recommend you to return to section 3. Dynamic Import, and #5. CommonsChunkPlugin for vendor caching. If you meet all the requirements and this is your only option, let’s talk about its pros and cons. First, assume we have the following entries in our app (we’re not using the examples from earlier because this is a very different setup):
module.exports = { entry: { main: './main.js', account: './account.js', shop: './shop.js', }, };
We can update CommonsChunk to just figure things out automatically:
/* Dev & Production */ new webpack.optimize.CommonsChunkPlugin({ name: 'commons', minChunks: 2, }), /* Production */ new webpack.optimize.CommonsChunkPlugin({ name: 'manifest', minChunks: Infinity, }),
The only setting to really tweak here is
minChunks. This determines how many bundles a module must appear in, in order to make the
commons.js file. If a library only appeared in 1 of the 3, it wouldn’t make it. But as soon as two bundles required it, it would now be removed from both modules, and placed in
commons.js.
Again, this only works if you’re not taking advantage of dynamic imports. Because with dynamic imports, we were able to intelligently load, on every page, only the code the user needed and nothing more. But with this option, it’s somewhat “dumb” in the sense that it doesn’t know what a user needs per page (bad); it just bundles an average best commons file based on the assumption a user will hit many pages in your app (probably not the case). Further, it’s not truly automatic as you’ll have to test and find the most efficient
minChunks setting based on your app’s user flow and architecture.
Result: Not bad, but it's better to use dynamic importing (#3) and the vendor commons chunk (#5).
Final Example Savings
We’ve covered some powerful new ways you can deliver your same experience to users, much faster. Here’s a breakdown of how much kB we saved in each step in our example app:
† The Commons Chunk Plugin technique split one entry file into 2, and the
combined size of both is listed.
‡ The Bundle Analyzer technique saved
161 kB on a particular request. This
is significant savings even if it doesn’t apply to 100% of users.
Were you able to incorporate some techniques into your app? Did your Lighthouse score improve at all? Are you closer to that elusive 1-second paint time? Or were any webpack optimizations missed? Please leave a comment!
Further Reading & Notes
- Google’s articles on optimization are nothing short of brilliant. I’ve not run across any other resource that demands you load your website in 1 second, and then provide excellent suggestions in reaching it. If you’re not sure how to improve your application, the RAIL model from that link is the best place to start.
- Lighthouse, in case you missed the previous mentions in the article, is the current status quo of performance measurements. Lighthouse is very similar to Google’s old PageSpeed ranking, or YSlow, except it’s modernized for cutting-edge web apps in 2017 and holds your app to the latest standards.
- webpack has made recent improvements to its dynamic import docs, mentioning Vue’s deft handling of
import(). It’s exciting to see so much improvement on this year!
- webpack’s AggressiveSplittingPlugin gets an honorable mention here, referenced in an article on webpack & HTTP/2 by the author of webpack. The plugin was originally included in this article, but after testing I found dynamic imports to be a universally better solution because it removes the option to lazy load and requires 100% to be served upfront. It was designed to tap into HTTP/2’s parallel downloading capabilities, but the small boost from that will rarely ever offset the overhead of downloading an entire application versus the minimal amount needed.
- 10 things I learned making the fastest website in the world by David Gilbertson sets another high standard for optimization. Unsurprisingly, webpack plays a vital role in achieving his goals.
- The focus of this article was front-end performance, not build times, but there were still several tips on the subject. If you followed the build tips and are struggling with a slow
webpack --watch, try using webpack’s dev server instead. It builds in-memory and doesn’t write to the file system, saving precious time. The general idea with this strategy is to use this for development, and only bundle during deployment, skipping
--watch entirely.
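If you go that route, a minimal dev-server setup might look like this (the port and paths are placeholders; this assumes webpack-dev-server 2.x is installed):

```js
// webpack.config.js (development only)
module.exports = {
  /* …existing config… */
  devServer: {
    contentBase: './dist', // static files served alongside the in-memory bundle
    port: 8080,            // serve at http://localhost:8080
  },
};
```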
Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to troubleshoot a “provider load failure.”
Hey, Scripting Guy!
I ran across what may be an interesting issue for you. In one of our audit scripts last night, we received the following error:
Test-Connection : Provider load failure
At SomePowerShellScript.PS1:26 char:24
+ if (test-connection <<<< $srv -q -count 1) {
+ CategoryInfo : InvalidOperation: (:) [Test-Connection], ManagementException
+ FullyQualifiedErrorId : TestConnectionException,Microsoft.PowerShell.Commands.TestConnectionCommand
What provider does Test-Connection use? I’m trying to research the failure. I know that another scan, which uses compiled code, did a whole slew of ping sweeps after this failed, so I don’t think the problem is at a network/network interface layer. Also, with no changes to the system, the scripts ran fine this morning.
I’ve done a few Internet searches for the issue and have gotten no closer, hence the email to the guru.
Sincerely,
BK
Microsoft Scripting Guy, Ed Wilson, is here. I am still in Redmond, Washington today. This morning I am going to have coffee (well, maybe just a cup of tea) with the scripting manager. So I am sitting in the building 43 coffee shop, working through my email. Lo and behold! I look up, and it is Jeffrey Snover coming over to say hi to me! We chat about the Hey, Scripting Guy! Blog for a bit, he confirms our meeting for this afternoon, and he heads off to his first meeting of the day. Wow. I know I have said this before (probably even yesterday), but I LOVE my job!
First find the provider
BK, I happen to know that the Test-Connection Windows PowerShell cmdlet uses the Win32_PingStatus WMI class to perform the ping. I found this out by using the Get-Help cmdlet to look up Test-Connection. In the description, it says that the cmdlet returns an instance of the Win32_PingStatus WMI class. This is shown in the image that follows.
To find the provider of a WMI class, there are several approaches. Perhaps the easiest way is to use wbemtest. This is shown here.
Another way to find the provider of a WMI class is to use the Get-CimClass cmdlet as shown here.
PS C:\> Get-CimClass win32_pingstatus | select -expand cimclassqualifiers
Name                 Value             CimType     Flags
----                 -----             -------     -----
dynamic              True              Boolean     …rride, ToSubclass
provider             WMIPingProvider   String      …rride, ToSubclass
When I know the WMI class and the WMI provider, there are a couple things I can do. I can look the information up on MSDN. The WMIPingProvider and the Win32_PingStatus class are both documented.
Start a trace
With the name of the WMI provider and the name of the WMI class, it is time to start a trace log. I have written a week’s worth of Hey, Scripting Guy! Blogs about this, and you should review those posts for fundamental concepts. Here are the steps:
- Open the Windows Event Viewer utility.
- On the View menu, check Show Analytic and Debug Logs.
- Navigate to Microsoft/Windows/WMI-Activity.
- Right-click the Operational log, and select Enable Log from the Actions menu.
- Right-click the Trace log, and select Enable Log from the Actions menu.
Wait for the problem to occur again
BK, now you need to wait for the problem to surface again. When the error occurs again, go back to the Event Viewer utility, navigate to the WMI-Activity log, and search through the Operational logs and Trace logs. Hopefully, you will be able to find what is causing the problem. If not, you need to enable the Debug log by using the previous procedure. Unfortunately, the Debug logs require expert WMI skills and detailed knowledge that is rarely found outside of Microsoft Customer Service and Support (CSS). Therefore, you would need to open a case with Microsoft CSS to get to the root cause.
You can use:
[wmiclass]'Win32_PingStatus' | select -ExpandProperty qualifiers
if you are still living in PowerShell 2.0.
@Greg Wojan thanks for the comment. This is indeed a great tip. Thank you for sharing.
Hi, I’m on Win7 Home Premium (my notebook), and I just see the Trace item under WMI-Activity. Is that because of the Windows version? Is there any way to enable the other two items in the event viewer?
Thanks,
Is it possible to query WMI on a remote computer for the MicrosoftTPM namespace?
I am trying to query the Win32_Tpm class of WMI from a remote machine, but it’s failing with HRESULT 0x80041013 & Description: Provider load failure.
I have posted the code.
Please take a look at it, and let me know if a remote query to it is possible.
Based on the functionality described thus far, several types of applications can be built based on ActiveSync services and used to synchronize device data with a desktop PC:
ActiveSync provider: As described in the ActiveSync provider section, the ActiveSync architecture is extensible and allows custom providers to be implemented. Although this approach is the most difficult of those described in this section for Compact Framework developers (because of the reliance on COM), this type of application represents the tightest integration with ActiveSync.
- File synchronization: Although it is not typically discussed in most Windows CE resources, using the file-synchronization support included in ActiveSync allows a managed application to provide synchronization support with a modicum of effort; however, developers will likely want to add ActiveSync notification support, which will require a few calls to the Windows CE API. Applications that depend on file synchronization require an installed dummy file filter, which transfers files that have a specific extension and are stored in a special folder. This is discussed in more detail later in this section.
RAPI application: This type of application resides on the desktop and calls functions located in the Rapi.dll. The application would therefore not interact with ActiveSync at all, and so the developer would have full control over the GUI. The uses for this type of application are virtually unlimited due to the breadth of RAPI functions that allow remote activities, such as retrieving device system information, accessing the device registry, controlling processes, communicating with device windows using Windows messages, and doing directory and file manipulation, including the copying of files in either direction. A simple example of how to use RAPI is included in this chapter.
Pass-through application: When using ActiveSync 3.5 and higher coupled with Pocket PC 2002, the PC becomes the hub to the network for the device. To take advantage of this, no configuration is required. The developer simply builds the Compact Framework application to generate messages that access the network or Internet resources. These messages would "pass through" the PC and could be used with the types of communication that were covered in Chapter 4.
In the remaining parts of this section, we will discuss in more detail the latter three types of applications.
Because this chapter emphasizes the most basic of synchronization options, we'll start by showing how the Compact Framework can be used to take advantage of the file-synchronization mechanism provided by ActiveSync. We'll illustrate this technique by synchronizing data for a list of book publishers between the desktop and device utilizing XML files generated from the database and an ADO.NET DataSet on the device. To do this, the following steps are required as discussed in the proceeding sections:
Enable file synchronization.
Create an ActiveSync dummy file filter.
Create a folder structure to support imported and exported files.
Build a PC application that generates files destined for transfer to the device and reads files transferred from the device.
Build a Compact Framework application registered to run upon a notification from ActiveSync.
Build a Compact Framework application that reads files transferred from the desktop and outputs files transferred to the desktop.
The first step is to enable file synchronization from the desktop machine. To turn this feature on, a developer would select Files from the Options dialog (the same dialog shown during the partnership creation process) shown in Figure 6-7, invoked from the Tools | Options menu in ActiveSync.
A dummy file filter is an ActiveSync configuration entry that tells the ActiveSync synchronization engine to transfer a file of a specific extension to the other side without conversion. By doing this, ActiveSync will copy data files not only from the desktop to the device, but also from the device to the desktop.
The filters for a machine are installed in the registry under the key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows CE Services\Filters. As seen in RegEdit (Figure 6-8), ActiveSync installs a variety of filters. Each subkey is a file extension of a registered file type. Under the subkey, there are string values for DefaultImport and DefaultExport. The entries hold the class identifier (CLSID) of a COM object responsible for the conversion and for transferring data between the desktop and device. Note that these entries can also have a value of Binary Copy to specify that the files should be transferred without conversion.
For our example, XML files will be synchronized without conversion for the application. To enable this, a developer would add a subkey to the registry key. The subkey should be of the form ".XXX." In this case, the extension would be .xml, as shown in Figure 6-8, in order to transfer XML files. Additionally, two string values are entered under the subkey for the DefaultImport and DefaultExport, which have the values Binary Copy assigned.
Although RegEdit can be used to make these entries, a second way to create these entries is to import a .reg file, the content of which is shown in the following snippet:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows CE Services\Filters\.xml]
"DefaultImport"="Binary Copy"
"DefaultExport"="Binary Copy"
Save this text file with a .reg extension, and then double-click to add the keys to the registry.
It is possible to view and edit the registry settings within the ActiveSync application by displaying the Options dialog, selecting Files, and selecting the Rules tab. From here a developer can disable the conversion of files at synchronization or edit the settings using the Device to Desktop and Desktop to Device tabs.
When a partnership is created, a synchronizing folder is created on the PC. The location of the folder is based on several factors, including where Windows user folders are stored, the Windows account name, and the name of the device. For example, if personal folders are stored in C:\Documents and Settings\, the user is jbox, and the device is named Pocket_PC, the resulting folder is C:\Documents and Settings\jbox\My Documents\Pocket_PC. Assuming that a filter is registered for a given file type, any file placed in this directory structure will be converted and transferred to the device's corresponding folder.
The device's synchronization folder will be \My Documents. Any file placed in this folder will be converted and transferred to the desktop machine (actually, the results will vary based on the registered filter).
In order to avoid conflict-resolution issues with using the same folder, we recommend creating an additional desktop folder with child folders for importing and exporting. So, in this example, we would have a folder called Publishers and two child folders called Inbox and Outbox. From the perspective of the device, Inbox is the container for files flowing to the device, whereas Outbox is for data flowing to the desktop. This structure should be placed under the file-synchronization folder as described in the previous paragraphs. This structure will then be automatically duplicated on the device during the next synchronization or immediately if the device is cradled.
This functionality can be tested by placing the files directly in the folders with Windows Explorer (as opposed to automating). For example, a developer could perform the following test to make sure everything is working:
Cradle a device with a partnership that has the synchronization setup, as we have discussed so far.
Find an XML file on the PC, and place it in one of the Publisher folders on the PC. In several seconds, the file will be on the device.
Delete the file from the PC. After several seconds, the file will disappear from the device.
On the PC, right click on one of the Publisher folders and create a text file. After several seconds, the file will show up on the device.
Click on the device file to load it in Pocket Word. Add some text like "From the Device," close the editor, and wait several seconds.
Open up the file on the PC to view the results.
Now that you have an understanding of the plumbing provided by ActiveSync, we'll discuss how both static and dynamic data will flow between the desktop and device.
The first flow is for static data. This is the type of data that rarely changes and usually comes in pairs (ID and description), for example, a list of states and their codes, customer types and their IDs, or other lookup and reference information that the device application will use to validate data. To move this type of data, the desktop application will need to query a database or other data store and write out the data at start-up and anytime the next flow occurs. By creating a Static.xml file in the Inbox folder, the file will be transferred to the device. The device application should notice this at some point, apply it intelligently, then delete the file from the device folder.
The second flow is for dynamic data, defined as any data that can be created, changed, or deleted on the device. This data must eventually be synchronized with the database or data store so that changes from any device will eventually find their way to the rest of the devices. This flow is initiated when the device is cradled. At this point, a notification application (using RAPI, for example) will send a message to the device application to initiate the task of writing all data to the Outbox (or launch the device application and write out the data), which will be synchronized to the desktop in a file called Dynamic.xml. This will be recognized by the desktop application, which will cause the data to be applied to the database, a full copy of the data to be generated from the database resulting in Dynamic.xml and Static.xml files in the Inbox folder (if needed), and the original Outbox/Dynamic.xml file to be deleted. The device will then notice the newly received files and start using them.
For this scheme to work, a process on the desktop has to push data from the data source and pull data from the device. Although it is a simple application to create, there are a few issues to consider.
The first is whether to build a Windows application or a Windows service. Fortunately, either type is easily built with VS .NET 2003.[2] Typically, a Windows service will be used if the situation requires a process that must run in the background, doesn't require user interaction, and should be running without someone initiating the process. Although a Windows service sounds like the right solution for this scenario, regrettably, ActiveSync requires a logged-in user; however, it still has the advantage of not requiring a user to initiate the process or worry about who is logged in. These will still be issues for ActiveSync itself.
[2] See the article referenced in the "Related Reading" section at the end of the chapter for the basics on creating Windows services.
A second issue entails dealing with the folder names and their hard coding. The preferred technique to avoid hard-coding paths in the application is to use the managed application's configuration file. By storing the path in the configuration file, the programmer can use the ConfigurationSettings.AppSettings class to retrieve the data easily.
The final issue centers on how to deal with the data. This program is responsible for interacting with the database and the synchronization folders. This responsibility requires two activities: (1) writing out a copy of static data to Publishers\Inbox, and (2) whenever a file shows up in the Outbox, reading the data and updating the database, making a new copy of the dynamic and static data in the Inbox, and then deleting the file in the Outbox. In order to be notified when a file reaches the Outbox, the desktop application can take advantage of the System.IO.FileSystemWatcher to avoid the polling logic using timers.[3]
[3] See Chapter 12 of Dan Fox's Building Distributed Applications in Visual Basic .NET referenced in the "Related Reading" section at the end of the chapter.
By letting the device react to the cradling event and then pushing the dynamic data to the desktop, the process ensures that all updates are applied to the database before a new copy is sent to the device. Another benefit is that when the application is brought up for the first time, the static data will be there for start-up.
The device application must be launched by the operating system when the device is cradled. There are several ways to accomplish this, but we'll describe the registration route (because the other is COM based). This task is accomplished by creating a stub application that calls the Windows CE API functions CeRunAppAtEvent or CeSetUserNotificationEx. Chapter 11 includes an example of CeRunAppAtEvent, and detailed instructions for building a full example with CeSetUserNotificationEx are provided in the lab at the Atomic Mobility site. This lab was used at TechEd 2003 and is the basis of this notification application. This code takes advantage of PInvoke, although interaction with it is encapsulated in classes to make it easier to utilize.
When this program is launched by the act of cradling, the first step is to inform the device application. This is done using the following steps:
Check if the device application is running by checking for a mutex that the device program has created.
If the mutex exists, use the MessageWindow class to send a message, and then exit.
If the mutex does not exist, launch the application with command-line parameters to start a refresh, and exit the program.
Although you would think it would be possible to include this logic in the device application (described below) instead of having two applications on the device, the Compact Framework runtime will not allow a second instance of an application to run on the Pocket PC.
The device application is the Compact Framework application that the disconnected worker is to use. The application will likely utilize a DataSet as described in Chapter 3 for its local data storage. Other than providing the user with the UI for viewing and maintaining the data, the application will have several additional responsibilities related to the synchronization tasks.
For first time execution, the application should send an empty data set to the Outbox, which will launch the synchronization process at the desktop, resulting in a copy of the DataSet back at the device. If necessary, the desktop will create in the Inbox a Static.xml file that contains necessary lookup information for the application. During the time that the application is waiting for the data to return from the desktop, no changes to existing data should be allowed. When new files appear in the Inbox, the application should start immediately using the new data.
At this point, anytime a refresh message is received from the device notification application, the device application should create a Dynamic.xml file in the Outbox and wait for a Dynamic.xml file to return to the Inbox. Upon return, the dynamic file will act as the database for the device. This process is simply a subset of the initialization process, and so there is an opportunity here for code reuse.
When using this type of architecture, the Compact Framework will take advantage of the capability in ADO.NET to serialize and deserialize ADO.NET DataSet objects as XML files, as described in Chapter 3; however, be aware that the larger the XML files, the more time start up will require.
To summarize the process described in the preceding sections, consider the full process of using file synchronization as outlined in Figure 6-9 and described here.
Start the device notification application on the device. It does one of two things, depending on if the device application is running (determined by checking for an existing mutex): (a) If the device application is running, it does a broadcast using MessageWindow class, or (b) if the device application is not running, it does a CreateProcess on the device application passing command-line parameters to indicate an immediate refresh is needed.
Write out Dynamic.xml to Outbox.
ActiveSync replicates the file to the desktop Outbox.
The desktop application recognizes a new file arriving (for example, by using the FileSystemWatcher). The application reads the data and applies the data to the data source. The Inbox file should be deleted upon reading, which will delete the copy on the device.
The desktop application writes out Static.xml and a Dynamic.xml files to the Inbox folder. The Static.xml file includes a fresh copy of lookup data, and Dynamic.xml incorporates the changes from other users.
ActiveSync copies the files to the device Inbox folder.
The device application has been suspended since step 2. It now sees a file in the Inbox folder. It reads the files as DataSet objects, deletes the files from the Inbox (which deletes them from the desktop), and allows normal operation to continue.
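The desktop side of steps 4 and 5 reduces to a small amount of file plumbing. As a language-neutral sketch (written in Python rather than the chapter's .NET, using the Publishers/Inbox/Outbox folder names from the example above; the database update is left as a placeholder), it might look like:

```python
import os

def process_outbox(outbox, inbox):
    """Desktop-side sync step: when Dynamic.xml appears in the Outbox,
    apply it to the data source, write fresh Dynamic/Static files to
    the Inbox, and delete the Outbox copy (ActiveSync mirrors the
    delete back to the device)."""
    src = os.path.join(outbox, 'Dynamic.xml')
    if not os.path.exists(src):
        return False
    with open(src) as f:
        changes = f.read()
    # ... apply 'changes' to the database here ...
    with open(os.path.join(inbox, 'Dynamic.xml'), 'w') as f:
        f.write(changes)              # merged copy for the device
    with open(os.path.join(inbox, 'Static.xml'), 'w') as f:
        f.write('<lookups/>')         # fresh lookup data (placeholder)
    os.remove(src)
    return True
```

In the actual application this function would be triggered by a FileSystemWatcher-style notification rather than called directly.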
File synchronization seems fairly simple at first glance; however, it does entail issues you and your team will need to consider. First, ActiveSync forces the device to be tied to a desktop machine. Second, scalability is limited due to working with a desktop process instead of a server process. Third, file synchronization copies entire sets of data, whereas a more robust synchronization process will allow for more granularity by allowing users to drill down to the record or field level. For these sorts of scenarios you'll want to explore the SQL Server CE synchronization discussed in the next chapter.
NOTE
The primary scenario for using file synchronization is when the master data source will be constrained to the desktop machine.
If the device does not have network connectivity, then cradling is still required; and although you might think that this automatically means that file synchronization must be used, you should also explore the ActiveSync pass-through capability described later in this chapter.
As mentioned previously, RAPI is a core part of ActiveSync and allows a desktop application to control and query a device. Due to the need to access Rapi.dll, a .NET Framework application can use the PInvoke service described in Chapter 11. To give you a sense of how RAPI can be utilized, consider Listing 6-1, a simple example in C# that launches a game of Solitaire on the device from the desktop machine.
using System.Runtime.InteropServices;

[DllImport("rapi.DLL", CharSet=CharSet.Unicode)]
public static extern int CeRapiInit();

[DllImport("rapi.DLL", CharSet=CharSet.Unicode)]
public static extern int CeRapiUninit();

[StructLayout(LayoutKind.Sequential, Pack=4)]
public struct ProcessInfo
{
    public IntPtr hProcess;
    public IntPtr hThread;
    public int dwProcessId;
    public int dwThreadId;
};

[DllImport("rapi.dll", CharSet=CharSet.Unicode)]
public static extern int CeCreateProcess(string lpApplicationName,
    string lpCommandLine, int Res1, int Res2, int Res3,
    int dwCreationFlags, int Res4, int Res5, int Res6,
    ref ProcessInfo lpProcessInformation);

[DllImport("rapi.dll", CharSet=CharSet.Unicode)]
public static extern int CeCloseHandle(IntPtr Handle);

static void Main(string[] args)
{
    // Initialize RAPI
    int hr = CeRapiInit();

    // if safe to continue
    if (hr == 0)
    {
        string strProg = @"\windows\solitare.exe";

        // build needed structure used in API call
        ProcessInfo pi = new ProcessInfo();

        // make important RAPI call to start Solitaire
        CeCreateProcess(strProg, "", 0, 0, 0, 0, 0, 0, 0, ref pi);

        // release handles to the created process and thread
        CeCloseHandle(pi.hProcess);
        CeCloseHandle(pi.hThread);
    }

    // Shutdown RAPI connection
    CeRapiUninit();
    return;
}
NOTE
As with any calls to PInvoke, the CeRapiInit, CeRapiUninit, and CeCreateProcess functions can be encapsulated in a managed class and exposed as static methods to make them easier to call and maintain.
For more detailed information on using RAPI in Compact Framework applications, we recommend Chapter 16 of Paul Yao and David Durant's Programming the .NET Compact Framework in C#, as noted in the "Related Reading" section at the end of the chapter. Not only does this chapter provide coverage of the 78 functions in RAPI, it also explores more advanced examples, including how to interoperate with COM.
Even though the synchronization example looks interesting, it should be used only in specific scenarios. Although some will suggest that a solution should use file synchronization when over-the-air security is an issue or when there is no network connectivity built into the device, there is another option as well.
These issues are also good reasons for using the cradle, but for not using file synchronization. A more robust architecture is one that uses servers, rather than desktops; therefore, if the requirements include cradling, the solution can still utilize servers. One of the primary advantages, of course, is that scalability is more achievable, for example, when using SQL Server as opposed to Access or MSDE on a desktop.
Fortunately, ActiveSync can assist this type of application as well. With the combination of ActiveSync 3.5 (and higher) and Pocket PC 2002 (and higher), ActiveSync provides network connectivity to a device while cradled. This is known as ActiveSync pass-through and gives a device access to the network accessible from the desktop PC using TCP/IP. As a result, it is possible to have a cradled device that communicates directly with servers on a network.
There is, however, the issue of partnerships to resolve. In the default configuration of ActiveSync, the cradling of an unrecognized device always creates a prompt on the desktop asking users to choose to create a partnership or act as a guest, as described previously. In many basic scenarios the mobile application requires using PCs that have cradles, and those PCs are often located in field offices that may already be logged in under a different user's identity. As a result, either the logged on user will have to logoff and the roving user login, or the logged on user will have to create a partnership or choose guest with each of the devices that connect using its cradle.[4]
[4] Creating named partnerships is obviously problematic due to the existence of the PC user's Outlook information. You wouldn't want this information ending up on each device that synchronizes with the cradle on the PC.
To avoid forcing the desktop user to choose guest continually when synchronizing, there is a registry setting that will change the default behavior. By adding GuestOnly, a DWORD entry with a value of 1 that resides at HKLM\Software\Microsoft\Windows CE Services, the prompting will cease. Instead, sessions with unrecognized devices will automatically connect as guest. And, in these cases, the pass-through capability is all that is required. | http://etutorials.org/Programming/building+solutions+with+the+microsoft+net+compact+framework/Part+II+Essential+Architectural+Concepts/Chapter+6.+Primitive+Synchronization/Developing+ActiveSync+Applications/ | CC-MAIN-2017-04 | refinedweb | 3,755 | 52.29 |
Pythonista like HTML
Is it possible to build a website with Pythonista? Like with HTML?
What do you mean? If you want to serve a website you can use a script like this:
import SimpleHTTPServer
import SocketServer
import webbrowser

handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", 0), handler)
port = httpd.server_address[1]
webbrowser.open('http://localhost:' + str(port))
httpd.serve_forever()
and put your index.html file in the same directory.
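The snippet above uses Python 2 module names (as in Pythonista 1.x). On a Python 3 interpreter the same idea, with only the module names translated, would be roughly (a stdlib-only sketch; the blocking calls are left commented out):

```python
import http.server
import socketserver

# Python 3 renames the Python 2 modules used above:
#   SimpleHTTPServer -> http.server, SocketServer -> socketserver
handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(("", 0), handler)  # port 0 = any free port
port = httpd.server_address[1]
url = 'http://localhost:' + str(port)
print(url)
# import webbrowser; webbrowser.open(url)
# httpd.serve_forever()  # blocks until you stop the script
```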
Ok cool thanks
You might want to check out bottle. It's included in pythonista but I do not believe the documentation is.
@briarfox and forum: as bottle is a simple Pythonic web framework, do you happen to know the relative strengths and weaknesses of other similar WSGI-based, lightweight web frameworks/servers for use within Pythonista? I mean a comparison of, say, Tornado, CherryPy, Bottle, Waitress, Gunicorn, Spawning, Chaussette, etc.
Two good sources of info I have found are:
I am messing around with some server-based utilities in/for Pythonista and some of these that are pure python frameworks may be better suited than others for this usage.
If it wasn't clear, the intent is to run BOTH clients and servers in Pythonista, something that in itself is a problem as I see it currently. Is their any way at all in the current Pythonista to run a service as a background thread of the main Pythonista interpreter process but return so that other script(s) can be run? Nothing I have tried can start say a server in background but bring Pyhonista main window back to be able run anything else -- the "x" remains and there is no "arrow" icon. Any tricks to supporting the appearance of multiple processes?
Why not look at @Gerzer's
Servrwhich is in the Apple AppStore?
It was built with Pythonista but it is now
a standalone appso you can run it and still have Pythonista available to you. Even I f Servr does not offer the features that you need, you could still follow the same model for your own server. Build a standalone server app and then use Pythonista for the client or whatever.
I like Bottle and Flask, which are both available in Pythonista v1.6, while the others that you mentioned are not.
If you use SebastianJarsve's excellent Pythonista-Webbrowser, this presents in a ui panel; the console is still fully "alive" allowing you to run whatever you want from the console, and swap back and forth if you so choose. Obviously, start the Webbrowser first, then the server.
I have to admit, a lot of people want to do it, but I've never seen the appeal of serving up html over a port only to be consumed within pythonista (it is not well documented, but you can open html files directly using abspaths, at least in WebView). In old pythonista versions this was necessary, as webbrowser actually opened up safari.
@cc and @JonB, thanks for some good ideas. A standalone web/app server would be alright, but I still find it preferable to have everything easily at my fingertips. Were iOS more multitasking-friendly it would not be such a big deal, but that is clearly not the case. For this reason I am drawn to a one-stop-shop idea.
@ltddev I'd be happy to give you a build of Servr - mobile edition with a built-in web browser (or a button that launches Safari and automatically loads the webpage) through TestFlight. Just know that Servr - mobile edition is only compatible with iPhone (I am working on an iPad version) and can only serve one file at a time (except you can serve as many Camera Roll pictures as you want). In addition, I am currently debugging version 2.1 which integrates with iCloud Drive and any other iOS 8 storage provider (like Dropbox, Box, Google Drive, Quip, Documents 5 by Readdle, and more). I can also post the core WSGI code if you want.
If none of this appeals to you, I can post another way to do things that works completely inside Pythonista.
I have been working on a new project and faced with these same issues. You can use a variety of back-end webserver packages such as web2py, Flask, and Bottle and they run fine under Pythonista. When you try to build your front-end using ui and webview you run into problems.
For instance running Sebastian's webbrowser with web2py blows up when you try to type into the url edit box:
Traceback (most recent call last):
  File "/var/mobile/Containers/Data/Application/F76C3EC3-4473-4EC8-A7EF-1CAB2BB0B437/Documents/webbrowser/webbrowser.py", line 260, in textfield_did_begin_editing
    self.set_url()
  File "/var/mobile/Containers/Data/Application/F76C3EC3-4473-4EC8-A7EF-1CAB2BB0B437/Documents/webbrowser/webbrowser.py", line 28, in set_url
    addr_bar.alignment = ui.ALIGN_LEFT
NameError: global name 'ui' is not defined
Same thing happens with Flask. Very odd - the global name 'ui' is not defined???
The other problem I have seen is that these webserver frameworks want to log HTTP requests to the console by default, and this causes the Pythonista IDE to flip from the webbrowser over to the Console. Figuring out how to configure the webserver to go silent and log to a file should mitigate this.
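For the stdlib server, overriding log_message is enough to keep request logs off the console (Flask/werkzeug and the other frameworks each have their own logger objects that can be redirected similarly through the logging module); a sketch, with "server.log" as an arbitrary file name:

```python
import http.server

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, format, *args):
        # Send request logs to a file instead of stderr, so output
        # doesn't pull the Pythonista console panel to the front.
        with open('server.log', 'a') as f:
            f.write('%s - %s\n' % (self.address_string(), format % args))
```

Pass QuietHandler wherever SimpleHTTPRequestHandler was used before.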
@wradcliffe While I haven't looked at the Pythonista-Webbrowser project in much detail, the most likely reason for these errors is this:
When you run a script, Pythonista first clears all global variables. This includes imported modules, which are basically global variables as well, and it's done regardless of whether some UI is visible in a panel or sidebar. If that panel UI later calls an action that relies on the ui module being imported, it will fail because the global ui variable is no longer there. So if you're doing any kind of UI that stays visible beyond the runtime of your script, I would recommend that you import required modules within the action methods etc. and not globally. You might be worried about the performance impact of doing import ui in every action, but this usually isn't an issue because imported modules are cached anyway.
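A minimal sketch of that pattern, using json as a stand-in for ui (the principle is identical):

```python
def button_tapped(sender=None):
    # Import inside the action: even after Pythonista wipes the script's
    # globals, this import succeeds, and sys.modules caches the module,
    # so repeated imports cost almost nothing.
    import json
    return json.dumps({'tapped': True})

print(button_tapped())
```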
@omz - I just had a look and it has a couple of globals and uses several modules other than ui (json, os, pickle, urlparse), so I would have to scatter a lot of imports into that code.
It probably makes more sense to just hack the webserver code to create the browser and open it as part of its startup code.
You can turn off the global-clearing behavior in the settings menu.
Or, just run a single script that starts your server, then runs webbrowser (thus, nothing is ever cleared)
Can you detail the steps to reproduce this problem? How are you starting webbrowser? How are you running your server?
@JonB - I was testing things by loading and running scripts one after the other in the editor. The easiest scenario to repro is to run Sebastian's webbrowser.py and then bring up web2py using the web2py.py script. web2py.py ends with a call similar to most of the other frameworks (gluon.widget.start(cron=True)) which puts the console into a "running" state. Flask does something similar. If you now activate the webbrowser panel and try to enter the URL, you will see the errors related to globals getting reset.
Turning off global-clearing works and verifies that this is the source of the problem.
Your idea of writing a script (using runpy?) to do this seems like a better and more practical method.
Runpy is not really needed. You could just import webbrowser.py (it might be best to rename it to avoid conflicts), make the stuff at the bottom only happen under if __name__ == '__main__', and then just copy those 3 lines into your own script to launch the view.
Alternatively, if you don't need the actual Sebastian webbrowser features, just use the built-in webbrowser module: webbrowser.open.
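For completeness, the stdlib route is a two-liner (the URL here is just a placeholder for wherever your server is listening):

```python
import webbrowser

# Opens the page in the system's default browser; returns False
# (without raising) if no browser can be launched.
webbrowser.open('http://127.0.0.1:8080/')
```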
getting 3d to work
I will start learning 3D in Java and thought I should leave BlueJ and go for NetBeans. But it always takes days to get something new working... This is what I have downloaded and installed:
- NetBeans IDE 5.5.1
- java3d-1_5_0
- java SE Development Kit Version 6 Update 1
But I'm getting: "package does not exist". But they should exist once I have installed Java 3D, right?
import com.sun.j3d.utils.universe.*;
import com.sun.j3d.utils.geometry.ColorCube;
import javax.media.j3d.*;
import javax.vecmath.*;
Any idea what I have missed? Do I need to make some setting in NetBeans maybe? Thanks for the help. I'm not very good at new things =/
I noticed a few requests on various newsgroups for an image map control which could be used in a Windows Forms application.
I had not worked much with the System.Drawing namespace, so I decided to give it a try. This control is the result.
The control uses a standard PictureBox control internally, specifically its ability to load and display an image as well as its inherited MouseMove, MouseLeave, and Click events. A ToolTip component is used as well to display tooltips for defined regions. Currently, the key specified for a region is used for the tooltip text, though it would be pretty easy to allow an additional "tooltip" property to be assigned for each region.
The bulk of the logic is in the private getActiveIndexAtPoint method shown below. The method is called whenever the mouse moves within the control to determine which region, if any, the mouse is within. If the mouse is within a region, the region's index is returned by the method. This index is used to look up the region's key, which is then used to set the text displayed by the tooltip. The cursor is also changed to a hand, and the index is stored in a private property to be re-used if the mouse is clicked, avoiding the need to call this method again. If the method does not find a region, -1 is returned and the cursor is set to its default.
private int getActiveIndexAtPoint(Point point)
{
    System.Drawing.Drawing2D.GraphicsPath path = new System.Drawing.Drawing2D.GraphicsPath();
    System.Drawing.Drawing2D.GraphicsPathIterator iterator =
        new System.Drawing.Drawing2D.GraphicsPathIterator(_pathData);

    // Walk each subpath (one per defined region); markers separate the regions.
    iterator.Rewind();
    for (int current = 0; current < iterator.SubpathCount; current++)
    {
        iterator.NextMarker(path);
        if (path.IsVisible(point, this._graphics))
            return current;  // the point lies inside this region
    }
    return -1;  // no region contains the point
}
You can add this control to your toolbox just as you would any other .NET control. Once added, simply drag-and-drop an instance of the control onto your form. Use the Image property to assign the desired image to the control.
Once you've added the control to your form, you can begin to call the various Add- methods to define the "hot-spots"
within your image. The available methods are AddEllipse, AddRectangle, and AddPolygon.
I tried to follow the conventions of HTML image maps with respect to defining the various shapes, and I overloaded the
AddElipse and AddRectangle methods to accept ".NET-friendly" types such as Point
and Rectangle. Since the methods follow the convention of HTML image maps, you should be able to use any existing image map generation
software to determine exactly what points to specify for the desired regions within your image.
Here is the code required to define an HTML image map using the image in the first screenshot:
<!--Text BUTTON-->
<area shape="rect" coords="140,20,280,60">
<!--Triangle BUTTON-->
<area shape="poly" coords="100,100,180,80,200,140">
<!--Face BUTTON-->
<area shape="circle" coords="80,100,60">
Here are the equivalent regions defined using this control:
this.imageMap1.AddRectangle("Rectangle", 140, 20, 280, 60);
this.imageMap1.AddPolygon("Polygon", new Point[] {new Point(100, 100),
new Point(180, 80), new Point(200, 140)});
this.imageMap1.AddElipse("Ellipse", 80, 100, 60);
The control will raise an event - RegionClick - whenever the mouse is clicked within a defined region.
The event passes two values denoting the index and the key of the clicked region. You would then take whatever action you
wished based on the region clicked.
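Hooking the event up might look like the sketch below. Note this is illustrative: the handler's (index, key) parameters are assumed from the description above, since the exact delegate signature isn't shown here.

```csharp
public Form1()
{
    InitializeComponent();
    this.imageMap1.AddRectangle("Rectangle", 140, 20, 280, 60);
    // Hypothetical wiring; the (index, key) handler signature is assumed.
    this.imageMap1.RegionClick += imageMap1_RegionClick;
}

private void imageMap1_RegionClick(int index, string key)
{
    MessageBox.Show("You clicked region " + index + " (" + key + ")");
}
```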
That's it! I hope this meets the needs of those who had requested an image map control for use in a Windows Forms application. Comments and requests are welcome.
Gaurav Sharma here, I’m a developer with the Information Security Tools team.
A couple of months back, when I was trying to understand the .NET compilation model, I encountered an interesting thing. I created a small program to print an Int32 array to the console. The code follows.
using System;

namespace Optimization
{
    class Program
    {
        static void Main(string[] args)
        {
            Int32[] arr = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 11 };
            for (Int32 i = 0; i <= arr.Length - 1; i++)
            {
                Console.WriteLine(arr[i]);
            }
            Console.ReadLine();
        }
    }
}
I then checked the IL generated from this small code snippet. I looked at the various calls made by the C# compiler and noticed an interesting thing: the compiler had optimized the above code by intelligently adding new instructions and removing unnecessary ones. The generated IL follows.
The compiler stores the array length at IL_0000. After this step, you can see that no call is made to the Array.Length property, i.e., the get_Length() method.
Only three calls are added by the C# compiler:
- Initialize Array
- Write Line, and
- Read Line
The C# compiler optimized my code internally so that there is no need to call the Array.get_Length() method on every loop iteration; it uses the array length stored at the beginning of the instruction set. So, if you are working on a similar kind of code snippet, there is no need to declare a variable to store the array length just before the loop and use that variable in the loop condition check; the C# compiler does that for you in the background!
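Put differently, the manual hoisting below is redundant; per the article, the compiler already produces the equivalent (illustrative C#):

```csharp
Int32[] arr = { 1, 2, 3, 4, 5 };

// Hand-hoisted length: unnecessary, since the compiler stores the
// length once and reuses it in the loop condition anyway.
Int32 len = arr.Length;
for (Int32 i = 0; i < len; i++)
{
    Console.WriteLine(arr[i]);
}
```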
Now my Tip of the Day for writing better code with Visual Studio
No one likes unwanted using or Imports statements at the beginning of a C# or VB.NET code file. There is a quick solution for this: right-click in the code editor and choose Organize Usings | Remove Unused Usings.
Happy coding! | https://blogs.msdn.microsoft.com/securitytools/2009/07/04/c-compiler-optimization/ | CC-MAIN-2017-09 | refinedweb | 326 | 65.62 |
Hi, I wanted to present to you a guide on how to set up the TopCoder Arena so that competing in TC becomes a much more pleasant experience, or at least a guide to doing the same thing I did, which seems very comfortable compared to the big pain of competing in the bare default Arena. In case you want to grab easy upvotes, please leave here some joke about how I am jealous of Radewoosh getting an insane boost in his contribution by writing his blogs, but the real reason is that I screwed up my setup recently and needed to go through this painful and magical setup once again. This information seems very nontrivial and is not publicly available in a known place (which I believe is one of the reasons TopCoder is losing its popularity, but I hope that thanks to this post I can give it a slight boost by settling Arena setup problems once and for all for many people), so I wanted to share it with others. I want to express my sincere thanks to tomasz.kociumaka for guiding me by hand through this process (twice), because I would never have been able to do it on my own. By the way, I use Linux, obviously. Probably many steps are similar on Windows or Mac, but I'll leave any hypothetically needed changes up to you.
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Plugins I use are CodeProcessor, FileEdit, moj and pq. Overall functionality they offer me is:
1) When I open a problem statement they create a new file corresponding to this task in a designated location.
2) This file contains my template code, a class created for me with the method I need to fill in already correctly declared, and a namespace with samples which are then executed in main.
3) After I finish coding I am able to compile my program locally (with my own debugging flags), run it, and get automated feedback about its speed and correctness. This is done by the moj plugin.
4) I have a simple way of adding my own sample testcases.
5) When I'm ready to submit I need to hit "Compile" button in arena and then "Submit".
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
So let's go to the point of how we should go about setting this up:
1) Go to this page, download this zip, and extract it to a folder of your choice. I will denote the path to this extracted folder as /path/TCplugins
2) Go to Arena, log in and go to Options->Editor->Add and fill this accordingly:
Name: CodeProcessor
EntryPoint: codeprocessor.EntryPoint
ClassPath: Add 4 paths here, to CodeProcessor.jar, FileEdit.jar, moj.jar and pq.jar (I've done this by multichoice with held Ctrl). After this it should look like /path/TCplugins/pq.jar:/path/TCplugins/moj.jar:/path/TCplugins/FileEdit.jar:/path/TCplugins/CodeProcessor.jar
Save this and tick the box next to the newly added line (and maybe uncheck others if you have any ticks elsewhere)
3) Click on this newly added line and click "Configure". New big window will pop up. Fill Editor EntryPoint as fileedit.EntryPoint . Next to "CodeProcessor Scripts (...)" hit "Add" and add two entries "moj.moj" and "pq.MyPostProcessor".
4) Hit "Configure" next to Editor EntryPoint and specify path where your codes will be appearing (do not use "~" in your path, it doesn't work, you probably want to use something like /home/anon/something). Uncheck this very stupid option "Backup existing file then overwrite". Go to "Code template" and if you use C++ then modify your template to something like this (filling out parts within underscores): (I wanted to paste it here, but my attempts to correctly display dollar sign were futile, if you know a good way to do this, please message me). If you happen to use hack like "#define int long long" then also put "#undef int" before line with method declaration and "#define int long long" after it.
5) Restart you Arena and you are ready to go! Go to some practice room and do Div2 250 to familiarize with workflow with all these newly added plugins.
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
A few technical notes I would like to point out:
1) If I'm not mistaken, on TopCoder your program is executed many times — once per testcase. However, locally your program is executed one time and makes many calls to the specified method. It may seem like there is no difference, but there is a significant one — the state of your global variables. You may leave your global variables in some random state and it will not affect execution on the testing servers, but it can affect the behaviour of your program locally, and you may get local WAs even though your program would be considered correct by TC. That said, I very rarely use global variables in TopCoder, because the only reason I can think of right now for using them is that, when declared globally instead of as attributes of your class, they are zeroed by default (an advantage that disappears if you want to test your program locally, because you need to clean them anyway).
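Here is an illustrative sketch of the trap (not from any real problem): the global vector keeps its contents between local calls, so it must be cleared inside the method, even though TopCoder's one-process-per-test execution would mask the bug:

```cpp
#include <algorithm>
#include <vector>

std::vector<int> seen;  // global: zeroed once per process, NOT once per call

class Distinct {
public:
    int count(std::vector<int> a) {
        seen.clear();  // without this, values from the previous local run leak in
        int cnt = 0;
        for (int x : a) {
            if (std::find(seen.begin(), seen.end(), x) == seen.end()) {
                seen.push_back(x);
                ++cnt;
            }
        }
        return cnt;
    }
};
```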
2) If you use defines, they may affect remote compilation on the server. Your code is included in a testing framework, and the code TC appends to your code in order to grade your solution uses some variable names etc. too. It is unlikely that something like "#define SZ(x) (x).size()" will affect compilation on the server, but "#define int long long" very likely will. That's why, if you #define some common expressions, you had better undef them (like "#undef int") at the end of your code; otherwise you will get compilation errors when submitting even though your program compiles locally, which is very confusing.
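For instance, a sketch of the "#define int long long" hack done safely (illustrative code, not an actual TC problem):

```cpp
#include <vector>

#define int long long  // every 'int' below is silently a long long

class Summer {
public:
    int sum(std::vector<int> a) {  // really: long long sum(std::vector<long long>)
        int s = 0;
        for (int x : a) s += x;
        return s;
    }
};

#undef int  // undo it, so the harness TopCoder appends still compiles
```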
3) Apparently, informations about this setup are stored in ~/contestapplet.conf file. You can save a copy of it in some safe place (so that you can restore it whenever you do some mess or change your laptop) and be aware to not move this file around (that's how I screwed up my setup recently, because I cleaned my home directory of some weirdly looking files).
4) When testing locally, correctness of your code is checked with a simple diff (it takes appropriate care when there are doubles in the output, so don't worry about them). Be aware that problems with multiple allowed outputs exist, and in such cases the local testing tool will say that you FAILED, so you will need to validate your output by yourself. That is the same as on every other platform (unless we are provided some checker, but we never are, so yeah). In such cases you should use the "Batch test" option in the Arena, which checks the correctness of your output (but look at the field "Correct example", not "Success").
5) Sometimes the expected output is an empty vector. The code attached for testing on samples doesn't compile when the expected output is an empty vector (in C++), so you need to adjust that manually, for example by changing the expected answer to {-1} and checking correctness by verifying that all your failed sample cases look like:
Received: { }
Expected: {-1} | https://codeforces.com/blog/entry/61252 | CC-MAIN-2020-29 | refinedweb | 1,217 | 64.54 |
Create A VueJS 3.0 Sample Application:
Let's understand routing concepts by doing a sample VueJS 3.0 application, so as a first step create a sample application.
Command To Install Vue CLI Globally On Your System: npm install -g @vue/cli
Command To Create Vue App: vue create your_app_name

The vue-router is available as a separate library, so to install it run the following command.
Command To Install Vue Router Library (For Vue 3.0): npm install vue-router@4

After successfully installing the router library, we can observe its reference in the package.json file.
Create Vue Components:
Let's create sample components like 'Home' and 'About'. We will load these components based on the route, like pages.
src/components/Home.vue:
<template>
  <div> My Home Page </div>
</template>
<script>
export default { }
</script>

src/components/About.vue:

<template>
  <div> About Page </div>
</template>
<script>
export default { }
</script>
Configure Routing:
Let's configure vue routing into our application.
src/appRouter.js:
import {createRouter, createWebHistory} from 'vue-router';
import Home from "./components/Home.vue";
import About from "./components/About.vue";

const routes = [
  { path: "/", component: Home },
  { path: "/about", component: About },
];

export const router = createRouter({
  history: createWebHistory(),
  routes: routes
});
- The 'createRouter()' method creates an instance of the vue router that loads from the library 'vue-router'.
- The 'createRouter()' method takes a JavaScript object literal as an input parameter holding the routing configuration, with properties such as 'history' and 'routes'.
- The 'history' property takes the result of 'createWebHistory()' as its value; 'createWebHistory' maintains the application's navigation history.
- The 'routes' property takes an array of route definitions as its input value.
src/main.js:
import { createApp } from 'vue'
import App from './App.vue'
import * as routeConfig from './appRouter.js'

const app = createApp(App);
app.use(routeConfig.router);
app.mount('#app');
- The 'routeConfig' is loaded from 'appRouter.js' and its router is registered in the Vue instance pipeline via 'app.use()'.
router-view Component:
The router-view component is a built-in Vue component. It is a functional component that renders the matched component for the given path or route. Components rendered in router-view can also contain their own router-view, which will render components for nested paths.
Now let's update the App.vue file to use the 'router-view' component.
src/App.vue:
<template>
  <router-view></router-view>
</template>
<script>
export default {
  name: 'App',
  components: { }
}
</script>
- (Line: 2) Used 'router-view' component.
router-link Component:
The 'router-link' is a built-in Vue component that navigates the user to the specified path or route. The target location is specified with the 'to' property.
Now let's update our App.vue file by adding a bootstrap menu that uses the 'router-link' component to display menus.
src/App.vue:(Html Part)
<template>
  <nav class="navbar navbar-expand-lg navbar-light bg-light">
    <div class="collapse navbar-collapse" id="navbarSupportedContent">
      <ul class="navbar-nav mr-auto">
        <li class="nav-item active">
          <router-link to="/">Home</router-link>
        </li>
        <li class="nav-item">
          <router-link to="/about">About</router-link>
        </li>
      </ul>
    </div>
  </nav>
  <router-view></router-view>
</template>
- (Line: 6&9) Used 'router-link' components for menu management. The 'to' property is used to assign the route value.
Navigating From Vue Code:
We can manage routing from inside of the component programmatically. Since we have configured our routing instance into the vue pipeline we can access it as 'this.$router'.
Let's update our Home.vue component by adding a button; on clicking it, we navigate to the About.vue component.
src/components/Home.vue:
<template>
  <div>
    My Home Page
    <button type="button" v-on:click="go">Go To Details</button>
  </div>
</template>
<script>
export default {
  methods: {
    go() {
      this.$router.push('/about');
    }
  }
}
</script>
- The 'this.$router.push()' method navigates the user to the specified path's page.
Wrapping Up:
Hopefully this article delivered some useful information on the Vue Router. I would love to have your feedback, suggestions, and better techniques in the comment section below.
- Getting Started
- Environment
- Code Inspection
- Code Editing
- Tools
Over the years, we as developers form our own sets of practices about how we write code. An inevitable consequence of this is that these practices are bound to differ between development teams or even between individual developers.
It should therefore come as no surprise that one of ReSharper's goals is to accommodate these differences by providing flexible customization options. Another important point is to give a simple and flexible way of sharing the customized settings within development teams.
This guide serves as an overview of ways in which you can customize your ReSharper experience to best align it with your needs and the needs of your organization.
Getting Started
To get started with ReSharper customization, choose ReSharper | Options in the Visual Studio menu bar. You will be presented with the following dialog box:
The Options dialog box is split into two parts. The left-hand part presents a feature tree, letting you choose which feature you want to customize. There are the following feature categories that represent root nodes of the feature tree:
- Environment - this category includes things related to UI and the editor, Visual Studio integration, and general settings.
- Code Inspection - here you can configure features related to code analysis.
- Code Editing - in this category you will find options that define code transformations, cleanup, formatting and styles.
- Tools - this category lists settings of ReSharper tools
The right-hand part shows the actual property page for the selected feature, where various options can be set. You can always hit F1 or click on the '?' icon in the Options dialog box to study detailed descriptions of every single option on the current options page.
As soon as you change any of the options, you will see a yellow warning at the bottom of the dialog box showing the number of unsaved changes. There are two buttons that allow you to save the changes: Save and Save To. Save simply saves the settings, while Save To lets you choose a particular settings layer where the change is saved. Normally you will just click Save, and use Save To with settings layers only when you want to share custom settings. To learn more about saving settings, please read this article in the JetBrains .NET Tools Blog.
In the bottom of the Options dialog box, you will also find the Manage... button, which allows managing setting layers with the Settings Layers dialog box:
We won't expand on managing settings here (to learn more on the subject, please study posts tagged 'ReSharper settings' in the JetBrains .NET Tools Blog); however, one button in this dialog box is directly related to our current topic: Reset All Settings, which lets you reset all settings to their factory defaults at any point.
Now let's go through the options pages.
Environment
General
In this page, you will find general options that are mostly self-explanatory, like showing tips on startup and switching icon themes. A couple of settings that may need explanations are:
- Loop selection around ends of a list allows ReSharper to loop the scrolling of popup lists if you scroll them with the keyboard arrow keys. The setting applies to all popup lists that appear for navigation and search actions as well as for code completion actions.
- Caches. ReSharper caches the solution model on the file system to speed up its operations, and these options let you define where exactly. If your solution is under VCS, you might not want caches in the solution folder, but if you work alone, the solution folder might be a good choice. At any moment you can remove cached files with the Clear Caches button.
Keyboard & Menus
ReSharper overrides some Visual Studio features. For example, instead of 6 native refactorings of Visual Studio, ReSharper provides more than 40. If you want these overridden commands to be replaced with corresponding ReSharper commands in context menus and toolbars, tick the Hide overridden Visual Studio menu items check box.
Some of ReSharper’s features are intended for constant use, which makes it less convenient to use the mouse. Naturally, ReSharper supports keyboard shortcuts, the only question being how you want to manage them. There are two sets of predefined shortcuts called 'keyboard schemes'. Check this reference page to learn both schemes in detail.
There are, three options related to keyboard shortcuts:
- The Visual Studio scheme will be useful for people who are already accustomed to VS shortcuts. Choosing this option will get ReSharper to add its own shortcuts in unoccupied slots, causing no conflicts. Which means you don’t have to re-learn anything.
- The setting for ReSharper 2.x or IntelliJ IDEA will be useful for two categories of people – those who have worked with ReSharper from the early days and those who also happen to work with IntelliJ IDEA. Beware – this setting overwrites several settings of Visual Studio itself, but ReSharper will hint you to resolve conflicting shortcuts as soon as you first use them.
- The None option lets you define no shortcut keys at all. This is useful if you want to selectively apply keyboard shortcuts only for the features you use frequently.
Speaking of keyboard shortcuts, ReSharper (predictably) publishes command names that can be bound to keyboard shortcuts via Visual Studio’s default interface (Tools | Customize | Keyboard). For example, here’s a shot of me assigning ReSharper’s all-important QuickFix command (used for opening up quick-fix suggestions and context actions) to the Alt+Enter key combination.
Editor
ReSharper changes your experience in the text editor with simple but addictive features, which you can configure in this page. These features include:
- Highlighting of the current line and of matching delimiters (parenthesis, brackets, braces, and quotes).
- Auto-formatting of code on semicolon and/or closing brace. Formatting rules themselves are configurable in the Code Editing | General Formatter Style and Code Editing | [language] | Formatting Style pages of ReSharper options.
- Taking into account words that compose CamelHumped names when extending/shrinking selection with Ctrl+W/Ctrl+Shift+W and moving the caret between words with Ctrl+Left/Ctrl+Right.
- Auto-insertion of closing delimiters after their opening counterparts.
Search & Navigation
Though powerful, ReSharper's navigation and search features do not require much configuration. The options presented on this page are understandable at a glance, except perhaps the last one, Expand search results tree by default, which forces the search result tree to expand when it is shown in the Find Results window.
IntelliSense
As you may have noticed, ReSharper comes with its own brand of IntelliSense, which provides expanded features such as, for example, the ability of using inline Live Templates (ReSharper's take on Visual Studio code snippets). To learn more how ReSharper's code completion improves upon Visual Studio IntelliSense, read this help entry.
If you are using ReSharper's IntelliSense, there's a large number of options you can configure for it.
General
First of all, you need to decide whether you want to use ReSharper's brand of IntelliSense or Visual Studio's. You have three options here:
The first option enables full ReSharper's IntelliSense, the second options disables it, and the third allows you to choose a set of languages where you want to use ReSharper's IntelliSense. Please note that further IntelliSense options are applicable only if you chose full or limited ReSharper's IntelliSense.
Autopopup
This page allows you to define how the ReSharper's IntelliSense autopopup works in various contexts of different languages. By default, the completion list automatically appears when you type any character allowed in the context; the expected item is preselected:
If you see that the guess is correct, hit Enter or any non-alphanumeric character that closes or continues the expression.
In the Autopopup page, first of all you can choose whether to enable or disable automatic appearance of the IntelliSense popup at all. If it is disabled, you can access the IntelliSense popup with various shortcuts (e.g. Ctrl+Space; for more information, see Code completion in ReSharper help). If it is enabled, you can further customize how and when it should be displayed.
There are two options for showing the autopopup.
Completion Characters
This page has a couple of options that define when to insert the item selected in the IntelliSense popup. One non-configurable way to do it is hit the Enter key. Besides, by default you can do it with the Space or any non-alphanumeric character. In this case, the inserted item is followed by the space or the non-alphanumeric character.
If you want the Space or some non-alphanumeric characters to cancel the IntelliSense popup, you can make the corresponding configuration separately for C#, VB.NET, and the rest of the supported languages. For instance, to disable completion on Space, dot ".", colon ":", and semicolon ";", uncheck the Complete on space check-box and type ".:;" in the Do not complete on box.
Completion Behavior
The Completion Behavior page allows you to fine-tune ReSharper's IntelliSense. Some of the things you can configure are:
- Whether to include language keywords in the completion list.
- Whether to narrow down the completion list on typing
- Whether to enable or disable automatic commit for specific completions (Symbol Completion, Smart Completion, and Import Symbol Completion) in cases where there is a single possible choice in the completion list.
- Whether to automatically insert opening or both parentheses after a completion is committed.
- The Match middle of identifiers option allows the completion list to display items that include the typed substring in any position. For instance, by typing box in a WinForms application, you will have MessageBox among other items.
Completion Appearance
This page lets you define how code completion appears – whether it uses the default VS font or that of the text editor, and whether to show member signatures and summary. There is also an option to filter members with the [EditorBrowsable] attribute – either to show only normal members (EditorBrowsableState.Always) or to show both normal and advanced members (EditorBrowsableState.Advanced).
Parameter Info
The Parameter Information feature is configured as a part of ReSharper's IntelliSense though it can be used not only for typing assistance but for reviewing existing code too. The first option - Automatically show parameter info in ... milliseconds - applies only to typing method calls, the rest of the options apply to the Parameter Information popup in all situations.
Web Proxy Settings
In this page you can adjust Web proxy settings for ReSharper's Internet connection. By default, ReSharper follows your system settings and if any proxy server is defined in your system ReSharper will use it too for fetching sources from symbol servers, detecting updates, etc.
Code Inspection
Code analysis is one of ReSharper’s central features. Throughout the analysis process, ReSharper uses the marker bar (AKA error stripe) on the left of the text editor to highlight errors and suboptimal language usage in your code.
Markers on the marker bar are colored according to the severity of the corresponding issues. If necessary, you can configure these colors in the Visual Studio options. Go to Tools | Options and then choose Environment | Fonts and Colors page. Scroll through displayed items to find ReSharper Code Analysis [Error | Warning | Suggestion] Marker on Error Stripe and select the desired foreground:
The way ReSharper does analysis and, specifically, the severity it assigns to various cases of suboptimal language use, are configurable in under the Code Inspection category of ReSharper options:
Settings
Firstly, let’s take a look at the Settings page. This page governs the general settings related to code analysis - things such as whether it happens at all, and if it does happen, several further settings can be configured.
The Enable code analysis check box, as you may have guessed, determines whether code analysis happens at all. Let’s discuss the options that appear underneath it.
- The Color identifiers option, when checked, will tell ReSharper to color the text related to various inspection warnings/errors depending on severity
- The Highlight color usages option adds a colored underlines whenever color definitions (e.g.
Color.FromArgb(0, 33, 99)) are used in your code.
- The Analyze errors in whole solution turns on Solution-Wide Analysis – a feature that gets ReSharper to analyze all files in the solution. Watch out for a spinning wheel in the bottom right-hand corner of the Visual Studio window and note that this is a quite resource-consuming option, especially for large solutions.
- The Show code inspection options in action list determines whether ReSharper will let you configure analysis options right in the action list, which appears on Ctrl+Enter. This might be useful at the initial stages of ReSharper use, but may become less relevant, especially if you find that you never need to further customize inspection options.
- The option Show the “Import namespace” action popup determines whether or not you get a nice blue popup like the following for missing namespace imports:
- The Assume entry value can be null helps tuning the Value Analysis feature.
- The Edit Items to Skip button opens the Skip Files and Folders dialog box where you can configure the list of files, folders, and file masks that should be excluded from code analysis.
Generated Code
Not all code in a VS solution needs to be analyzed – for example, the Windows Forms designer files (Designer.cs or .vb) do not need to be analyzed. You may have some other files in your solution (for example, files generated via T4) that do not require analysis.
To tell ReSharper not to analyze them, you can use the Generated Code page to specify the elements ReSharper should ignore. Specifically, you can define:
- Individual generated files and folders that contain generated files
- File masks of generated files
- Regions within generated code (i.e., code within
#regiondirectives)
Inspection Severity
This is the ‘main’ page for configuring inspections. The interface is pretty self-explanatory - for each inspection you can pick the severity that ReSharper assigns to issues found by this inspection. The options are hint, suggestion, warning, error, or an option to not show the inspection at all.
Use the Search box to quickly find code inspection items.
Custom Patterns
This page is a central location for managing Structural Search and Replace (SSR) patterns. It replaces the ReSharper | Tools | Pattern Catalog dialog box that appeared in ReSharper 5.0. You can start with this blog post to learn how to work with SSR patterns.
Code Annotations
ReSharper uses annotations such as
[Null] and
[NotNull], but many developers do not want to reference a ReSharper assembly – instead, they’d rather host these attribute definitions in their own projects. The Code Inspection | Code Annotations options page allows configuring this:
The idea is simple: if you want the annotation types to be in your own solution (with your own namespaces), then you can click Copy default implementation to keyboard to get the definition of all attributes. Then, simply paste this in your project, change the namespace to the one you need, and open the Options dialog box again. You should now see your new namespace in the list of available namespaces. The Default annotation namespace combo box will also contain this new namespace.
All namespaces with checked boxes next to them will be used for annotations. As for the default annotation namespace, this actually determines which namespace will be used by default when referencing the annotation types.
To learn more about code annotations, you can study this help topic and check out the full list of code annotation attributes.
Code Editing
Naming Styles
Some developers prefer an underscore in front of their private variables; others do not. Both are valid choices, and they are supported in ReSharper with the concept of Naming Style. Naming styles are customizable on a per-language basis and the settings are organized similarly for each language.
Configuring Naming Styles
Let's see how to configure language settings for C# and then you can do the same for other languages: the relevant property page is under *Code Editing | [Language] | Naming Style in the feature tree.
As you can see, the property page shows us the naming rules for typical language features such as interfaces or private instance fields. Editing a naming rule is simple – you can select the corresponding line and either double-click it or, alternatively, click the Edit button. After you do so, you will be presented with a dialog box similar to the following:
Before we get to the question of why there is a list of naming conventions for this item, let’s deal with the bottom part. It’s fairly self-explanatory, really - your naming convention can include prefixes, suffixes, and a particular form of writing a variable in between. In the example above, the naming rule requires lower camel case identifier prefixed by an underscore.
Now, let’s get back up to why there’s a list control up above with options to move members up and down. First, why a list? That’s because you might have a code element that might be written in several different ways. For example, you might be writing constants in both UpperCamelCase and ALL_UPPER notations, and you don’t really want ReSharper to suggest you change those when you really don’t need to. Or, for instance, you might be used to naming your tests in a specific way that’s different from the way you name non-test code elements.
As to why there’s a list: you’ll notice that if you add more items, the top one is always in bold and marked as (Default). When ReSharper finds a code element that doesn’t conform to this rule (say, a constant that’s lowerCamelCase), it will offer to adjust the naming according to that default notation.
You may also notice the Enable inspections option when you open a naming rule for editing. Unchecking this option tells ReSharper to avoid checking naming for these types.
Sharing Naming Styles
If you work in a team, chances are that your team has common naming rules. Starting from ReSharper 6.1, you can use setting layers (briefly described in the beginning of this tutorial) to share naming styles as well as any other options. If you migrate from earlier versions, you can import your shared styles using the corresponding button on the Code Editing | Code Style Sharing page.
Formatting Styles
Besides configurable styles for symbol names, ReSharper provides configurable styles for code formatting. In contrast to naming styles, formatting styles are often enforced automatically; you may have noticed that as you type, ReSharper automatically adjusts code formatting. You can configure when to use auto-formatting in the Environment | Editor options page. Formatting styles are also applied when you expand code templates, generate code, use refactorings, etc. If you need to apply formatting styles to the existing code, use code cleanup.
General Formatting Styles
You can configure global formatting rules on the Code Editing | General Formatting Style page. Global options define how to indent multi-language files and how to use tabs in multi-line constructs when the tabs are enabled in Visual Studio options. The rest of formatting options are configured separately for each language.
Language-specific Formatting Styles
To configure language-specific formatting rules, go to Code Editing | [Language] | Formatting Style. Depending on the language there may be up to hundred different settings, which allow you to define every detail of your code layout. Formatting settings may be divided into several pages:
- Braces Layout lists settings that determine how curly braces are laid out. This feature is only available for languages that use curly braces for scoping, such as C#.
- Blank Lines lists settings that determine how many blank lines are added in particular use instances. A value of 0 implies that no blank lines are used.
- Line Breaks and Wrapping defines, predictably enough, how text is broken into lines (for example, on exceeding required document width) and the way in which it wraps around.
- Spaces lists settings that define whether or not a space is inserted in a particular instance.
- The Other page contains additional options for code formatting. For example, you can specify whether attributes are kept together like this
[A,B]or separately like this
[A][B].
The large number of settings, which may confuse you in the beginning, is compensated for with a very convenient editors. When you select a setting in the list, the bottom of the page shows a code sample that demonstrates how this setting affects formatting. You can play with the setting and check the changes in real time.
Members Generation
ReSharper provides quite a number of code generation features. The common settings for these features are available in the Code Editing | Members Generation options page.
The Generated member default body style setting applies to all generated methods and allows you to choose what to put in their bodies. The Generated property style setting defines how properties are generated for implementing and overriding members.
Also, you can opt for copying XML documentation from overridden members, wrapping generated members in regions, and annotating generated property accessors to prevent debugger from stepping into them.
File Headers
Many companies have standards regarding the headers that are present in all files in the solution. Typically, headers contain such data as copyright information, the author of the file, and so on. ReSharper lets you configure the settings for file headers across the solution:
After you define these settings, you can run code cleanup in order to have these headers automatically inserted into every code file in the solution. To insert the header into new files, use file templates.
Code Cleanup
Code Cleanup is a feature that allows wholesale reformatting and cleaning up of code - whether individual files or whole projects or even solutions. Code Cleanup performs a number of operations combined in sets, such as Full Cleanup or Reformat Code. You typically choose from these sets when performing the clean-up.
For each profile, you can see the settings it applies and, naturally, ReSharper lets you define your own profiles. This is available either by clicking Edit Profiles in the Code Cleanup dialog box, or selecting Code Editing | Code Cleanup page in the Options dialog box. To learn exactly what each Code Cleanup option means and how it is configured, read this help entry.
Either way, you get presented with a configuration editor that lets you add a new profile. Now, let’s say that I want a profile that doesn’t add
var and does no code reformatting. To get this, I click Add and name my profile:
Then, I select the corresponding saving options in the bottom part of the page:
And that’s it – I can run ReSharper | Tools | Cleanup Code and begin using this profile:
Context Actions
Context actions are similar to refactorings in some way, but they allow simpler one-step transformations of your code. However, in some situations you might have no need for particular context actions and want them out of the action list. For example, if you are not a fan of the
var keyword, you might not appreciate ReSharper offering you to change explicit types to
var in your code.
Luckily, you have the Context Actions options pages - one common and one for each language - specifying which actions are shown and which aren’t. Simply uncheck the actions you don’t want, and you’re set!
Localization
You can use the Code Editing | Localization page to determine the way in which resources are named when you move string to resources.
Additionally, for C# you can choose whether you want to analyze verbatim strings (starting with @) in your code and make ReSharper suggest moving such strings to resources.
Type Members Layout
By default, ReSharper has a preconfigured way of laying out the members (properties, fields, nested types, etc.) of a particular type. These layout rules are applied when you run full code cleanup and can be overridden in ReSharper’s options: Code Editing | C# | Type Members Layout. (Please note that this feature is currently only available for the C# programming language.)
There are two ways of using default members layout: with or without regions. If you choose to use regions then delegates, nested types, etc. are automatically wrapped into corresponding regions. Once you choose Custom Layout, you will be presented with an editor showing XML that allows you to change the order in which type members should appear in the code file.
To learn more about fine-tuning type members layout, refer to Hadi Hariri's In-depth Look at Customizing Type Layout with ReSharper on the JetBrains .NET Tools Blog.
Namespace Imports
For C# and VB.NET, ReSharper provides a number of features that help optimizing namespace imports. There are code inspections that detect redundant namespace imports and quick-fixes that can remove such imports, there is a Code Cleanup action that can optimize namespace imports in the desired scope up to the entire solution.
The Code Editing | C# or VB.NET | Namespace Imports options pages help you configure the way ReSharper treats
using (C#) and
import (VB.NET) directives. You can:
- Choose whether to use fully qualified names or insert
using/
importdirective
- Specify namespace imports that should be never removed
- Specify namespaces that should always be imported
Tools
Unit Testing
If you use unit tests in your solution, you may have noticed that ReSharper detects unit tests and allows running and debugging them right from the editor:
With ReSharper, you can also create test sessions and run them in parallel, run tests by categories and more. To configure unit testing features, use Tools | Unit Testing options page:
The first option - Enable Unit Testing - switches on or off all unit testing features including test detection in the editor. If it is enabled, you can configure other settings, all of which are self-explanatory.
Settings on this page apply to all supported unit testing frameworks. To modify settings that affect a particular unit testing framework, use the corresponding subpage.
To-do Items
In the Tools | To-do Items page of ReSharper options, you can configure the default notification items and create your own ones. To learn more about To-do items, see this blog post.
External Sources
Although most of navigation and search settings are in the Environment | Search & Navigation options page, the way ReSharper navigates to compiled code is configured in the Tools | External Sources page.
The default action for navigation to compiled code is not defined but ReSharper will ask to choose one as soon as you need it.
Imagine that you call the Go to Base Symbols command from a WinForm class. In this case, the base symbol is the
System.Windows.Forms.Form class and it is defined in the .NET Framework, which is compiled and referenced from your project. If you navigate to compiled code for the first time, you will see the following dialog box:
Whatever you choose, you can change it at any moment in the Tools | External Sources options page. In fact, ReSharper's features for compiled code are enabled only if you choose the Navigation to Sources option, then ReSharper will try hard to get you to the desired destination.
Depending on what options are enabled, ReSharper can do the following:
- Use debug information in PDB files to find sources of the compiled code locally
- If local sources are not found, it will try to get them from a symbol server
- If they are still not available, it will decompile the library code
Additionally, for decompiled code you can opt for XML documentation and reordering type members according to your code style settings.
The Advanced button allows you to define folder substitution rules, which may be helpful if you have library sources locally, but the library was compiled on another computer. In this case, folder substitution rules will help you remap source folder paths from the PDB to local sources. | http://confluence.jetbrains.com/display/NETCOM/ReSharper+Customization+Guide | CC-MAIN-2018-09 | refinedweb | 4,660 | 51.28 |
PyBarobo 0.1.12
Native Python Barobo robotics control library
This Python module can be used to control Barobo robots. The easiest way to use
this package is in conjunction with BaroboLink. After connecting to the robots
you want to control in BaroboLink, the following Python program will move
joints 1 and 3 on the first Linkbot connected in BaroboLink::
from barobo import Linkbot
linkbot = Linkbot()
linkbot.connect()
linkbot.moveTo(180, 0, -180)
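The `moveTo(j1, j2, j3)` call above drives all three joints to absolute angles in degrees, so repeated calls can be chained into a simple gesture. A hedged sketch — the `wave` helper and its pose values are illustrative, not part of the package, and `moveTo` is assumed to block until the joints reach their targets:

```python
def wave_poses(n=4, amplitude=90):
    """Alternating joint-1/joint-3 targets for a back-and-forth wave."""
    poses = []
    for i in range(n):
        sign = 1 if i % 2 == 0 else -1
        poses.append((sign * amplitude, 0, -sign * amplitude))
    return poses

def wave(linkbot, n=4, amplitude=90):
    # Assumption: moveTo() blocks until the motion finishes, so the
    # poses play back one after another.
    for j1, j2, j3 in wave_poses(n, amplitude):
        linkbot.moveTo(j1, j2, j3)

def demo():
    """Run the wave on real hardware (requires BaroboLink to be running)."""
    from barobo import Linkbot
    bot = Linkbot()
    bot.connect()
    wave(bot)
```

Calling `demo()` with a robot connected in BaroboLink should rock joints 1 and 3 back and forth four times.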
You may also use this package to control Linkbots without BaroboLink. In that
case, a typical control program will look something like this::
from barobo import Linkbot, Dongle
dongle = Dongle()
# 'COM3' is the COM port the Linkbot is connected on. In Windows, the COM
# port of the Linkbot can be identified by inspecting the Device Manager.
# On a Mac, the COM port will appear in the "/dev/" directory, usually as
# something like "/dev/cu.usbmodem1d11". In Linux, it should be something
# like "/dev/ttyACM0".
dongle.connectDongleTTY('COM3')
# Or linkbot = dongle.getLinkbot('2B2C'), where '2B2C' should be replaced
# with the serial ID of your Linkbot. Note that the serial ID used here can
# be that of a nearby Linkbot that you wish to connect to wirelessly. If no
# serial ID is provided, the new linkbot will refer to the Linkbot
# currently connected via USB.
linkbot = dongle.getLinkbot()
linkbot.moveTo(180, 0, -180)
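Because `getLinkbot` accepts a serial ID, a single USB-connected dongle can hand out wireless handles to several nearby robots. A hedged sketch — the `connect_swarm` helper, the port name, and the serial IDs below are placeholders of mine, not values from the package:

```python
def connect_swarm(dongle, serial_ids):
    """Return a Linkbot handle for each serial ID, all routed via one dongle."""
    return [dongle.getLinkbot(sid) for sid in serial_ids]

def demo():
    """Requires a Linkbot dongle on the given port; IDs are placeholders."""
    from barobo import Dongle
    dongle = Dongle()
    dongle.connectDongleTTY('/dev/ttyACM0')  # adjust the port for your platform
    # '2B2C' and '3A41' stand in for the serial IDs printed on your robots.
    for bot in connect_swarm(dongle, ['2B2C', '3A41']):
        bot.moveTo(90, 0, -90)
```

Each handle returned by `connect_swarm` can then be driven independently, just like the single-robot examples above.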
- Author: David Ko
- Documentation: PyBarobo package documentation
- License: GPL
- Platform: any
- Package Index Owner: davidko
- DOAP record: PyBarobo-0.1.12.xml | https://pypi.python.org/pypi/PyBarobo/0.1.12 | CC-MAIN-2017-39 | refinedweb | 247 | 63.39 |
Plane by Python Generator
- indexofrefraction last edited by r_gigante
Hi,
In the built in Plane Object the segment count is (sadly) limited to 1000
Would it be possible to create an own Python Generator Plane Object without this limitation?
(I dont want a plugin solution, because that does not work with renderfarms)
import c4d def main(): w = op[c4d.ID_USERDATA,1] #witdh h = op[c4d.ID_USERDATA,2] #height ws = op[c4d.ID_USERDATA,3] #subw hs = op[c4d.ID_USERDATA,3] #subh p = c4d.BaseObject(c4d.Opolygon) .... # needs help return p
i need this for huge planes with a lot of segments (ocean surface), so it should be as fast as possible.
i think after creating an empty polygon either define all needed points/polys
or define just 4 points/1 polygon and then subdivide it
- FlavioDiniz last edited by FlavioDiniz
.
thanks Flavio !
with this i came that far...
it works, but hogs down c4d very fast! (12 subdivisons are already causing a huge lag)
also writing back to the userdata causes an indefinite loop :-/
what i need is are planes of 5000 to 10000m with subdivisions of 15 to 17,
which would result in segment sizes of 7.5 to 15cm [ 500000 / (2^15) = ~15 ]
is there a way to
- only update the plane if a parameter changes?
- update the userdata with the calculated segment size without causing the loop?
- get the already created plane and to not create/subdivide it again, if the parameters have not changed?
(not to re-create on refreshes of nothing changed)
Python Generator
UserData 1 : Size (Float)
UserData 3 : Subdivision (Integer)
UserData 5 : Segment Size (Static String)
import c4d def main(): w = op[c4d.ID_USERDATA,1] # just one dimension s = op[c4d.ID_USERDATA,3] # subdivision # op[c4d.ID_USERDATA,5] = str(w / pow(2, s)) # feedback causes indefinite loop :-/ p = c4d.PolygonObject(4,1) p.SetPolygon(0, c4d.CPolygon(3,2,1,0)) p.SetPoint(0, c4d.Vector(-w/2, 0, -w/2)) p.SetPoint(1, c4d.Vector(w/2, 0, -w/2)) p.SetPoint(2, c4d.Vector(w/2, 0, w/2)) p.SetPoint(3, c4d.Vector(-w/2, 0, w/2)) bc = c4d.BaseContainer() bc[c4d.MDATA_SUBDIVIDE_SUB] = s c4d.utils.SendModelingCommand(c4d.MCOMMAND_SUBDIVIDE, [p], c4d.MODELINGCOMMANDMODE_ALL, bc, doc) #c4d.EventAdd() # is it needed here? #print "done" return p
Just food for thought:
I once had the same desire to create landscapes in optimum resolution, but I came to realize that it is necessary to create a dynamic subdivision that reserves the higher resolutions for spots close to the camera. The numbers just accumulate too fast:
Subdivision of 15 means
2^15 = 32768 segments in either direction, which amounts to
32768^2 = 1,073,741,824 polygons (one gigapoly
).
Almost the same number of points is needed to define these polygons
(strictly one row and one line more, so 32769^2 = 1,073,807,361).
Each point is a vector from three floats, which is eight bytes each, resulting in a payload of roundabout 8GB for storing the points alone. Each polygon refers to four points which requires an index of four bytes at least (too lazy to look up the type now), which is another 16GB.
Thus, your plane has a memory footprint of 24GB for (2^15)^2 segments (for 2^17 it's 16 times more, or 384GB). A well equipped machine may still be able to work with that, but it's not practical, and causes inavoidable lags and most likely swapping. Not having a special datatype underlying, C4D may need to copy that data for rendering, too...
- indexofrefraction last edited by
@ Cairyn
i know that this hogs down c4d at some point.
that is exactly why i want a dynamically adjustable object, not a fixed subdivided polygon object
like this i can test the size/subdivision my machine can handle
of course it would be even better to have a plane with a polygon-resolution-fall off
but one step at the time :)
for now it would be interesting why c4d lags that fast...
i suppose there is something not good in this code,...
Hello,
mmm Ocean :D
as @Cairyn mention, this way is really too "brute force"
If on top of that you want to add a deformer to simulate the ocean, you are going to kill your computer.
You should search how you can build a mesh taking into account the screen/camera space. There's no point into having that kind of details km away from the camera. (I'm sure you are already aware of that)
You should try to build your own geometry instead of using the modeling command subdivide.
other ideas :
you don't want to go with a plugins because of the render farm but remember that you can still export the animated mesh with alembic. Of course that will create a huge file, but will reduce the render time.
You can also have a "light" grid using one of the Ocean deformer that are available out there and use a displacement map (you can have some feedback of that on the viewport with the viewport tessellation).
This is really shot depending also.
Cheers
Manuel
hm.. ok...
i guess what would be best is an adaptive plane....
- outer size, ie 5km
- center size, ie 500m
- outer segment size, 10m
- center segment size, 10cm
- falloff type (linear/exponential/...)
i still wonder if the code above is good,
or if it hogs down because of event loops
- m_magalhaes last edited by
hello,
The generator is already caching your object. That what "optimize cache" checkbox is for, you don't have to deal with CheckDirty and things like that.
If the generator see nothing different, it will send the cache. If you change a parameter, it will rebuild the object (execute the code, update the cache etc)
Try to pick a simple plane and launch the subdivide command by hand, you will see that after some, the command will start to be slow. Go step by step, start with one single polygon, and subdivide by 1, after 16 777 216 polygons, you will see the command take some times to go to 67 108 864 poly.
Cinema 4D should start to show that he's not happy. And it's not a generator.
Your code doesn't loop or anything, you are just asking for too much.
Cheers
Manuel
- FlavioDiniz last edited by FlavioDiniz
.
thank you guys,
im intrigued to make something like this as a Python Generator
- starting in the center
- generate a square of x square polygons
- then at some point double the polygon size
- generate x borders with the new polygon size
- etc etc
not such an easy algorithm, tough
... isnt there a built in solution for something like that?
adaptive / falloff subdivision ....
hello,
There's no such a function as we can tell in the sdk.
There's not one single solution for your problem. As it's for Ocean Simulation your problem is more 2D than 3D. You need to build a plan.
You can create points in your space and use a 2D Delaunay triangulation, that could work.
You can create a 2D grid on screen space and project that grid into world space.
I don't know witch one works and witch one could be the fastest.
a very naive way to go is this one but maybe that will help you to try to find formula and start somewhere.
import c4d #Welcome to the world of Python def main(): # Width l = 5000 # Height L = 5000 # Segments per lengh lSegments = 20 LSegments = 10 # Space beetween points lSpace = l /(lSegments - 1) LSpace = L /(LSegments - 1) # halfPoint lhalfSegment = lSegments * 0.5 # Creates points points = [] for z in xrange(LSegments): for x in xrange (lSegments): p = c4d.Vector(0.0) # This is where you can change the formula to change point position p.x = lSpace * ( x - lhalfSegment) # Updates the polygon object pol.Message(c4d.MSG_UPDATE) # Done return pol
Cheers
Manuel
thank you manuel,
atm i'm going another way...
- make a plane with a low subdivision
- select polygons with a distance from the center
- subdivide
repeat at 2. with lower distance
etc...
i guess this is probably faster anyway
(compared to setting up all points manually)
ps.
adjacent polygons are not sharing the same points in your code, right?
this is the tricky thing with setting up a plane, i guess .)
hello,
yea you can try this way also, but selecting polygons closer to camera will take some time because there will be more and more polygons. But that can do the trick.
@indexofrefraction said in Plane by Python Generator:
adjacent polygons are not sharing the same points in your code, right?
this is the tricky thing with setting up a plane, i guess .)
Of course they are. But there's no UVs.
Cheers
Manuel
still working on it, it looks very promising...
generating the geometry seems to work fast enough, but...
adding a phong tag to the python generator has no effect
the solution for this is here:
but would it be possible to get UVs for such a plane?
c4d.utils.GenerateUVW() ? TempUVHandle ?
EDIT:
similar to the phong tag solution i tried to add an uvw tag to the generators cache object with:
uvw = c4d.BaseTag(c4d.Tuvw) polygon.InsertTag(uvw)
now, if i convert the generator to polygons I get:
A problem with this project has been detected:
Object "OceanPlane" - Tag 5671 not in sync.
Please save and contact MAXON Support
with a description of the last used commands, actions or plugins.
the object then has an uv tag, tough,
but mapping does not work, yet.
any tips to get working uv coordinates on such a plane object?
hello,
I've updated my code, there were some bugs with type and things like that.
Also added the phong tag and a way to create a UVW tag in the most easiest way i found.
import c4d #Welcome to the world of Python def main(): # Width l = 500 # Height L = 500 # Segments per lengh lSegments = 500 LSegments = 500 # Space beetween points lSpace = l /(lSegments - 1.0) LSpace = L /(LSegments - 1.0) # halfPoint lhalf = l * 0.5 # Creates points points = [] for z in xrange(LSegments): for x in xrange (lSegments): p = c4d.Vector(0.0) # This is where you can change the formula to change point position p.x = lSpace * x - lhalf # Adds a phong tag (copy from generator or create a new one) # Checks for phong tag on python generator phong = op.GetTag(c4d.Tphong) if phong is None: # Creates one, if non-existent (careful with such operations, make sure, the tag is only created once) phong = op.MakeTag(c4d.Tphong) if phong is not None: pol.InsertTag(phong.GetClone()) # important to insert a clone of the original phong tag # Creates a new texture tag matTag = c4d.BaseTag(c4d.Ttexture) pol.InsertTag(matTag) # Changes the settings of that texture tag to cubic matTag[c4d.TEXTURETAG_PROJECTION] = c4d.TEXTURETAG_PROJECTION_CUBIC matTag[c4d.TEXTURETAG_POSITION] =c4d.Vector(0 , 0, L * 0.5 ) matTag[c4d.TEXTURETAG_SIZE,c4d.VECTOR_X] = l * 0.5 matTag[c4d.TEXTURETAG_SIZE,c4d.VECTOR_Z] = L * 0.5 # Generates uvwCoordinates from the texture tag uvwTag = c4d.utils.GenerateUVW(pol, pol.GetMg(), matTag, pol.GetMg()) # Inserts the uvwtag pol.InsertTag(uvwTag) # Removes the TEXTURE tag so we can add one on the generator. matTag.Remove() # Updates the polygon object pol.Message(c4d.MSG_UPDATE) return pol
Cheers
Manuel
Hey thanks ALOT, Manuel !
i'm creating the plane in a different way,
because i double the polygon sizes with distance
still, i must check your code for learning!
the phong & material tag solution is gold :)
EDIT:
all works like a charm, now :)
topic solved!
- m_magalhaes last edited by
hiya,
Feel free to share your code (if you want of course) as it could help other people.
If you think your question is solved, please change the states of this thread to solved.
Cheers
Manuel
Yes, I'd like to see it of course. Maybe you can share it via Github so others can for and contribute to it. :) | https://plugincafe.maxon.net/topic/11614/plane-by-python-generator/ | CC-MAIN-2019-47 | refinedweb | 2,010 | 72.76 |
Introduction to openpyxl
Openpyxl is a third-party library, which can process Excel files in xlsx format, that is, Excel files in Excel 2003 or above.
(the 2003 version can use xlrd and xlwt libraries to read, but I personally suggest that the XLS file should be handled with pandas. )
The first step is to install openpyxl
Enter PIP install openpyxl – I in CMD
(here, the author uses Douban source. The download speed is much faster than that without changing the domestic source. If the domestic source has been configured, you can directly input the previous part of – I)
-Creating files
from openpyxl import Workbook #Method 1: the default table name is sheet wb = Workbook() ws = wb.active #Create a new sheet #Method 2: this method can customize the table name and insertion position. If you enter the location, the default value is 0 ws = wb.create_sheet('Shee1', 0)
-Open the file
from openpyxl import load_workbook wb = load_workbook('xxx.xlsx')
-Get table name
#Get the names of all the sheets and get a list of all the sheet names sheet_names = wb.sheetnames
-Selection table
#Method one reads the selected sheet in Excel directly ws = wb.active #Method 2 enter the name of the sheet by yourself ws = wb['Sheet']
-Storing data
from openpyxl import Workbook wb = Workbook() ws = wb.active #The first is row and column ws.cell(row=1, column=2).value = 1 #It can be directly simplified as ws.cell(1, 2).value = 1 #The second method ws['B1'].value = 1 #Add one row of data at a time ws.append([1, 2, 3, '4']) #Note that any number added here should be enclosed in brackets #Finally, remember to save the data wb.save('file name. Xlsx')
-Read data
from openpyxl import load_workbook #Here test.xlsx You can create a file in the same directory of Py file and fill in the data wb = load_workbook('test.xlsx') ws = wb.active #Read a cell method one print(ws['A1'].value) #Read a cell method two print(ws.cell(1, 1).value) #Read the value of the cell and the value of the storage cell is similar, one is to read out the value, the other is to assign value
-Gets the maximum number of rows and columns of the sheet table
from openpyxl import load_workbook wb = load_workbook('test.xlsx') ws = wb.active #Get the maximum number of rows max_r = ws.max_row #Gets the maximum number of columns max_c = ws.max_column
-Column letters and coordinate numbers are converted to each other
from openpyxl.utils import get_column_letter, column_index_from_string from openpyxl import Workbook wb = Workbook() ws = wb.active #Returns letters based on the number of columns print(get_column_letter(3)) # C #Returns the number of columns based on letters print(column_index_from_string('D')) # 4
-Traversing cells
from openpyxl import load_workbook wb = load_workbook('test.xlsx') ws = wb.active #Attention #Openpyxl reads the index of Excel from 1 #Because the range function is left closed and right open, and the index starts from 1, the maximum value must be + 1 for i in range(1, ws.max_row+1): for j in range(1, ws.max_column+1): print(ws.cell(i, j).value)
The next article will write about some advanced operations of openpyxl, such as formatting cells, colors, and so on
Like the article can pay attention to me, if you think the article is useful to you, please point a praise and collection
This work adoptsCC agreementThe author and the link to this article must be indicated in the reprint | https://developpaper.com/basic-operation-of-openpyxl-office-automation/ | CC-MAIN-2020-50 | refinedweb | 585 | 63.29 |
Hi Everyone!
As you probably know wildfires are a huge problem, both here in California, and worldwide. In the last decade we witnessed the costliest, the most destructive and the deadliest wildland fires on record. Recent examples include the 2018 Camp Fire -- the deadliest and most destructive fire in California's history, and record 2019–20 bushfire season in Australia. I am working on an autonomous drone that can detect wildfires early using Computer Vision and Convolutional Neural Networks.
1.How it works?
My project uses the camera and a GPU-accelerated Neural Network as a sensor to detect fire.1.1 Dataset
The major problem with using Machine Learning for solving problems is that they require large datasets to train on. Unfortunately, existing fire datasets usually either images from a lab setting or human-perspective images, which don't transfer well to recognizing wildfires from air. So I created a domain-specific fire dataset 2'000 images myself. It's consists of drone imagery from real-world fire scenarios I collected from youtube.
You can download it here or at firedataset.org.
The neural network training & evaluation code is available on my github. The training was performed on Nvidia Tesla P100. The pre-trained weights are available here.1.3 Inference
The images from the RGB camera aimed downwards are captured at 1920×1080 and split into k=N×N segments. The number of segments can be adjusted.1.4. Performance
I evaluated the image classifier on a number of architectures on Jetson Nano with pretty good results. The FPS numbers are surprisingly high! For instance, flying my drone at 10m/s at altitude of 30m I need to take an image around every 5.4s in order to provide continuous land coverage. That means I can easily run 70 network inferences between captures, including the overhead of capturing & resizing the images.
I could describe in detail the steps to reproduce building the exact same hexacopter platform, but in reality you can run that on most DIY drones. The only strict requirements for a drone to be compatible with this project are:
1. The ability to carry 500g of payload (jetson nano + cameras + DC/DC converter + radios).
2. Flight controller that's capable of MAVlink communication. Your easiest bet will be a Pixhawk FC board with Ardupilot/PX4 software.
Jetson with raspberry pi camera. It is connected to Pixhawk 4 Mini with a custom cable, like here. I use a 10A DC/DC converter to power the system. The case is adapted from:
Here's how my setup looks like mounted on a drone.
This is a version with belly-mounted RGB + IR cameras for mapping
Make sure you can run a basic inference pipeline on Jetson Nano.
2.1. Download the weights file
2.2. Download the code
git clone
2.3. Install the requirements
pip3 install requirements_for_inference.txt
2.4. To install pytorch on nano, follow the instructions here
2.5. Make sure you can run the simplest inference pipeline
The most basic python code (also available in the github repo) for inference on Jetson Nano is available below:
import torch
import os
import time
import cv2
import numpy as np
from model import Model
# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
# Defaults to 1280x720 @ 60fps
# Flip the image by setting the flip_method (most common values: 0 and 2)
# display_width and display_height determine the size of the window on the screen
def gstreamer_pipeline(
capture_width=1280,
capture_height=720,
display_width=224,
display_height=224,
framerate=60,
flip_method=0,
):
return (
"nvarguscamerasrc ! "
"video/x-raw(memory:NVMM), "
"width=(int)%d, height=(int)%d, "
"format=(string)NV12, framerate=(fraction)%d/1 ! "
"nvvidconv flip-method=%d ! "
"video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
"videoconvert ! "
"video/x-raw, format=(string)BGR ! appsink"
% (
capture_width,
capture_height,
framerate,
flip_method,
display_width,
display_height,
)
)
def avgerage(l):
return sum(l)/len(l)
# We can get some nice 8FPS from this image
if __name__ == "__main__":
# To flip the image, modify the flip_method parameter (0 and 2 are the most common)
print(gstreamer_pipeline(flip_method=0))
cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
device = torch.device("cuda:0")
model = torch.load('resnet50-epoch-15-valid_acc=-1-test_acc=0.9856.pt')
model = model.to(device)
if cap.isOpened():
while True:
start = time.time()
ret_val, img = cap.read()
img = np.swapaxes(img,0,2) # WxHxchannel convention to channelxWxH convention
img_mini_batch = np.expand_dims(img, axis=0)
tens = torch.Tensor(img_mini_batch).to(device)
result = model(tens)
print(result)
print(time.time()-start)
cap.release()
else:
(
"Unable to open camera"
)
2.6. If the code above works, you can proceed to deploying on a real drone by following the README here:. This code will allow you to scan an area and take images autonomously. When the drone detects fire, it will print an alert on the console. Hopefully soon I will be able to integrate it with a more sophisticated fire-response system 🙂
Visuals - what the drone can see
Here's what the drone 'sees' with a dual camera setup when flying over the fire.
Here's a Grad-CAM visualization, which is a useful technique to determine what's happening inside the neural network.
To understand how this video was created, read more about Grad-CAM visualizations: or read through the notebooks on my github 🙂4. Code and resources
Hackster allows you on only one repository in the description, so I list them here.
4.1 The code for training and inference:
4.2 The code for controlling the drone:
4.3 The pre-trained weights:
4.4 Custom drone fire dataset: | https://www.hackster.io/tomasz-lewicki/fire-detecting-drone-1ae9bf | CC-MAIN-2021-04 | refinedweb | 934 | 58.08 |
I'm used to press ctrl+shift+t to get a closed tab back in my browser that's why I did the same thing in sublime text and since there was no such keybinding available I implemented my own plugin and added a keymap for it.
This: pastebin.com/rXQ3XNcb
(old version v1.0.1) pastebin.com/Fw3ZUJ1z
The packages are not always in appdata: see the "--data" command line option. You can find them using "sublime.packagesPath()"
Thanks! I updated the plugin to use sublime.packagesPath().
This is my version of this plugin):
f = open(last_closed_file_path, "wb")
f.write(view.file_name())
f.close()
P.S.: It works without anything opened.
Minor tweak not to record unsaved disk files (New files without name):):
if view.file_name() != None:
f = open(last_closed_file_path, "wb")
f.write(view.file_name())
f.close()
I'm on fire.. now with history length, defined on line 5.
import sublime, sublime_plugin
import os
last_closed_file_path = sublime.packages_path() + "/User/last_closed_file.path"
last_closed_file_max = 10
def get_history():
f = open(last_closed_file_path, "rb")
history = f.readlines()
f.close()
return history
def save_history(history):
f = open(last_closed_file_path, "wb")
f.write("\n".join(map(str, history)))
f.close()
class OpenLastClosedFileCommand(sublime_plugin.WindowCommand):
def run(self):
if os.path.exists(last_closed_file_path):
history = get_history()
if len(history) > 0:
self.window.open_file(history.pop().strip())
save_history(history)
class OpenLastClosedFileEvent(sublime_plugin.EventListener):
def on_close(self, view):
if view.file_name() != None:
history = get_history()
history.append(view.file_name())
if len(history) >last_closed_file_max:
history = history-last_closed_file_max:]
save_history(history)
Oh boy I never knew about this plugin I was asking how to do this in the suggestion forums and had a shortcut key to open the single last file but this is great thank you so much!
I'm being really dumb, but how do you setup this plugin? I've yet to add any plugins to Sublime so all I did was select "New Plugin" from the menu and pasted the code in, saved the file then setup the key binding as mentioned in the example but it does not seem to run.
Phunky,
Your user key bindings are probably the older style. Throw this in there:
{ "keys": "ctrl+shift+t"], "command": "open_last_closed_file" }
...and I had to create a file in User called "last_closed_file.path" before it started to work.
I noticed something though, at least with dresende's latest version-- once a closed file is re-opened, it will start to walk the directory up each time you invoke the plugin. That is, say I close this file:
then if I hit ctrl+shift+t again, it will "re-open" the User directory, then the Packages directory, etc. Think it needs a bit of tuning but I really like the concept.
Excellent plugin. Thanks.
Slightly off topic, but what is the easiest way to debug these plugins? Say, nothing happens; where do I go? Are there any log/puts commands in python? Does it go anywhere?
Thanks!
Thank you so much ehamiter, that got the plugin working
Inspired by dresende's code, but not quite content, I implemented a slightly enhanced version of this plugin. This plugin keeps track of both closed files as well as the files that have been opened. The default behavior of the plugin is to open the most recently closed file, but it can also provide access to a searchable quick panel with the recently closed files followed by a history of the files that have been opened. This can be helpful when using multiple projects or when wanting to reopen a file that you closed a few hours or days ago.
The implementation uses the sublime.Settings API methods to store and cache the file open/close history. The history file is saved in the User sub-directory of the Packages directory (as History.sublime-settings). The history is formatted in JSON like the rest of the sublime settings.
import sublime, sublime_plugin
import os
HISTORY_SETTINGS_FILE = "History.sublime-settings"
HISTORY_MAX_ENTRIES=500
def get_history(setting_name):
"""load the history using sublime's built-in functionality for accessing settings"""
history = sublime.load_settings(HISTORY_SETTINGS_FILE)
if history.has(setting_name):
return history.get(setting_name)
else:
return ]
def set_history(setting_name, setting_values):
"""save the history using sublime's built-in functionality for accessing settings"""
history = sublime.load_settings(HISTORY_SETTINGS_FILE)
history.set(setting_name, setting_values)
sublime.save_settings(HISTORY_SETTINGS_FILE)
class OpenRecentlyClosedFileEvent(sublime_plugin.EventListener):
"""class to keep a history of the files that have been opened and closed"""
def on_close(self, view):
self.add_to_history(view, "closed", "opened")
def on_load(self, view):
self.add_to_history(view, "opened", "closed")
def add_to_history(self, view, add_to_setting, remove_from_setting):
filename = os.path.normpath(view.file_name())
if filename != None:
add_to_list = get_history(add_to_setting)
remove_from_list = get_history(remove_from_setting)
# remove this file from both of the lists
while filename in remove_from_list:
remove_from_list.remove(filename)
while filename in add_to_list:
add_to_list.remove(filename)
# add this file to the top of the "add_to_list" (but only if the file actually exists)
if os.path.exists(filename):
add_to_list.insert(0, filename)
# write the history back (making sure to limit the length of the histories)
set_history(add_to_setting, add_to_list[0:HISTORY_MAX_ENTRIES])
set_history(remove_from_setting, remove_from_list[0:HISTORY_MAX_ENTRIES])
class OpenRecentlyClosedFileCommand(sublime_plugin.WindowCommand):
"""class to either open the last closed file or show a quick panel with the file access history (closed files first)"""
def run(self, show_quick_panel=False):
self.reload_history()
if show_quick_panel:
self.window.show_quick_panel(self.file_list, self.open_file)
else:
self.open_file(0)
def reload_history(self):
history = sublime.load_settings(HISTORY_SETTINGS_FILE)
self.file_list = get_history("closed") + get_history("opened")
def open_file(self, index):
if index >= 0 and len(self.file_list) > index:
self.window.open_file(self.file_list[index])
I use the following shortcuts to access the plugin:
{ "keys": "ctrl+shift+t"], "command": "open_recently_closed_file", "args": {"show_quick_panel": true} },
{ "keys": "ctrl+alt+shift+t"], "command": "open_recently_closed_file" }
I added an updated version and uploaded it to github. Unfortunately commiting to github is blocked from my work so I had to add it as a gist.
The updated version now keeps both a global and a per-project history. This allows you to access the file history specific to the active project or the file history across all projects. I find this extremely helpful when opening a project after a few days and forgetting the name of a file that I closed when I last accessed that project.
gist.github.com/1133602
This is sweet jbjornson! I find myself constantly closing files by accident all the time. I only wish it would restore the tab to the same location, like Firefox does.
Glad you like it
I just had a quick check of the api and it looks like there is something that might help with that: window.set_view_index(view, group, index)
I'll have a look and see if I can get something to work.
Done (finally). It should now restore the tab to the previous location. Please let me know if it works...gist.github.com/1133602
I mirrored this plugin on github (here) so it can be installed via Package Control (just search for "File History").
Thanks FichteFoll! | https://forum.sublimetext.com/t/openlastclosedfile/729/12 | CC-MAIN-2017-39 | refinedweb | 1,151 | 50.12 |
DBus reply data accessed via QString
Hi Micland, thank you a lot for your answer. However when I adapt your code to mine in that way:
iface_Clementine = new QDBusInterface("org.mpris.clementine", "/Player", "org.freedesktop.MediaPlayer", QDBusConnection::sessionBus(), this); replyClementine = iface_Clementine->call("GetMetadata"); QList<QVariant> args = replyClementine.arguments(); if (args.count() == 0) { qCritical("Got no valid DBus response"); } else { QString a1 = args.at(1).toString(); qDebug() << a1; }
I get the following result:
"", In other word an empty string. Sometimes my app crashes. Why?
- kshegunov Qt Champions 2017
@amonR2
You should check the obtained interface and the reply for validity before doing anything with them. As for the crash, run it through the debugger, if having trouble paste the stack trace here.
Kind regards.
I get the following result:
"", In other word an empty string. Sometimes my app crashes. Why?
Did you run the
qdbusviewer(separate tool shipped with Qt) and inspect your interface manually? Does this viewer also get an empty string or anything else? If you get a valid answer using Qt I guess the backends is sending an empty string...
@ Kshegunov, thanks for your help. To check if the obtained interface is valid, is this code correct:
iface_Clementine = new QDBusInterface("org.mpris.clementine", "/Player", "org.freedesktop.MediaPlayer", QDBusConnection::sessionBus(), this); if (!iface_Clementine->isValid()) { qCritical("The player is not nactive"); } else{ replyClementine = iface_Clementine->call("GetMetadata"); QList<QVariant> args = replyClementine.arguments(); if (args.count() == 0) { qCritical("Got no valid DBus response"); } else { QString a1 = args.at(1).toString(); qDebug() << a1; } }
But how do you do to check if the reply is valid?
@ micland,. Why the backend sends an empty string instead of the actual result? And in my code, if you notice, I check if the number of items in the list is null. Does Qt counts an empty string as an item?
- kshegunov Qt Champions 2017
To check if the obtained interface is valid, is this code correct
Yes.
But how do you do to check if the reply is valid?
I personally check it through a
QDBusReplyobject (e.g. here). If using the dbus message class use QDBusAbstractInterface::lastError instead..
Uhm, that's a bit strange. Can you try to inspect the reply with the debugger or send the arguments to debug out (
qDebug() << args;) just to see if all arguments are empty or what Qt has interpreted from the message?
@kshegunov , thank you very much.
@micland , I don't know how to inspect the reply with the debugger so I prefer to give you the result of
qDebug() << args;which is:
(QVariant(, )). Should I set Qt or my header in a certain way? So far I use the following includes:
#include <QtGui> #include <QtDBus/QtDBus>
Is it ok? And what about the Q_DECLARE_METATYPE() that I don't use? Should I use it here? If yes, how?
I don't know how to inspect the reply with the debugger so I prefer to give you the result of
qDebug() << args;which is:
(QVariant(, )). Should I set Qt or my header in a certain way?
The order of the includes should not be relevant.
According to your debug output your program does not get the same answer as
qdbusviewergets. But I don't think that it's a bug in Qt because
qdbusvieweruses the same logic and should be affected by that bug, too.
Could you check for
reply->errorMessage();and double check if there is any typo in the specified DBus interface / object / method in your code?
Could you check for
reply->errorMessage();
There sorry but I am not sure what you are talking about, did you mean to do this instead:
qDebug() << replyClementine.errroMessage();? If yes, the output is still an empty string (
""). For the typo in the DBus interface, objet and method in my code I don't think there is one as
qDebug() << replyClementinegives me a correct response. But why this response is not "translated" into a list of variants and strings? What may "corrupt" the process here?
@amonR2
Yes, I wanted to see that there is no error message provided (or: I was hoping to see a message that gave us a hint ;-) ). I'm a bit confused that there is no error AND no usable reply.
Another idea: If you don't known how to debug the arguments, can you print out
args.count()? Perhaps there is just one entry in the list and this entry might be a new list (or hash map) - and you're accessing the (not existing) second argument.... (the output
(QVariant(, ))might indicate that there is just one argument...).
The output of
qDebug << args.count();gives me
1. There is something I have changed in my code's environment: previously it was in release mode and I have just thought to put it in Debug one's which gives more info. With this code:
QString a1 = args.at(1).toString(); qDebug() << a1;
my app crashes and the IDE gives me this message:
ASSERT failure in QList<T>::at: "index out of range", file /usr/include/qt4/QtCore/qlist.h, line 469. But when I use this one:
QString a1 = args.at(0).toString(); qDebug() << a1;
my app doesn't crash and
qDebug()gives me an empty string.
@amonR2
Ok, the argument list contains just one argument - that's why you get the "index out of range" error because you try to access the second argument. Please add the line
qDebug() << args.at(0).typeName();to see the type of the first argument. Maybe it's a list where you find all your expected data in...
The reply from
qDebug() << args.at(0).typeName();gives me:
QDBusArgument.
@amonR2
Huh, a nested
QDBusArgument? Well check the inner argument if it serves the expected information:
args.at(0).value<QDBusArgument>().args().
(You should really try to inspect the reply using the debugger and iterate through the object tree to see what's encapsulated in the arguments. That's a lot easier than poking with
qDebug()- perhaps this link will help you getting started?)
Huh, a nested
QDBusArgument?
sorry I don't understand what you mean there but that's all qDebug displays.
Then when I try to qDebug this
args.at(0).value<QDBusArgument>().args();, the compiler says:
'class QDBusArgument' has no member named 'args'. So I tried with this:
args.at(0).value<QDBusArgument>()and received the following from the compiler:
no match for 'operator<<' in 'qDebug()() << QVariant::value() const [with T = QDBusArgument]()' [...] no known conversion for argument 1 from 'QDebug' to 'QDBusArgument&'.
Finally, by puting a breakpoint (thank you for the link) at
QList<QVariant> args = replyClementine.arguments();in my code and launching the debug-mode, the compiler stops and shows me this through the "Locals and Expressions" window:
args <inaccessible> QList<QVariant> this @0x83f87d8 FenetreNNS QWidget QWidget Deviselabel 0x0 QLabel * MainLabel @0x844d210 QLabel Nextm @0x8451f28 QPushButton NoTrackLabel @0x8454628 QLabel Play @0x844f628 QPushButton Previous @0x8451088 QPushButton TemoinSousTitre false bool Volume @0x8453a48 QSlider VolumeLabel @0x844d160 QLabel boutonDeviseBaht @0x8439978 QPushButton boutonDeviseDenar @0x843cce0 QPushButton boutonDeviseDollar @0x843f968 QPushButton boutonDeviseEuro @0x8440418 QPushButton boutonDeviseLira @0x843e608 QPushButton boutonDevisePeso @0x8442098 QPushButton boutonDevisePound @0x8444e58 QPushButton boutonDeviseRuble @0x843b078 QPushButton boutonDeviseRupee @0x844a4b0 QPushButton boutonDeviseWon @0x844b2e0 QPushButton boutonDeviseYen @0x8448a28 QPushButton boutonDeviseYuan @0x8449e70 QPushButton iface_Clementine @0x845e788 QDBusInterface machine @0x8455420 QStateMachine principalefen @0xbffff7b4 FenPrincipale replyClementine QDBusMessage d_ptr @0x845d098 QDBusMessagePrivate signMapper @0x8454ac0 QSignalMapper signMapper2 @0x84553e8 QSignalMapper sliderSousTitre @0x844d998 QSlider state1 @0x8451f60 QState state2 @0x8451f70 QState str1 "Currency" QString tempo @0x84505b0 QTimer
Should I show the content of some other windows?
Huh, a nested
QDBusArgument?
sorry I don't understand what you mean there but that's all qDebug displays.
Aye, sorry - I was wrong, read your answer too fast and got confused...
It's hard to find this error without reproducing it. If I find some time at the weekend I'll try to debug it. Can you tell me which software you installed? will say: what's that player your communicating with?
@micland
The name of the player is "Clementine" and I am using Qt 4.8 . No worries, thank you a lot for your help anyway. I will keep looking for a solution. If I can't find anything I will use one of the other APIs. Cheers again.
@amonR2
I'm a little bit closer, but did non really succeed...
First: Qt 4.8 is some days older, I used 5.6 for my experiements. Inspecting the DBus interface using
qdbusviewer(from Qt5.6) ended with the error message "Unable to find method GetMetadata on path /Player in interface org.freedesktop.MediaPlayer" (the method is listed but I can't call it) - but the cli client
qdbusprovided me the expected data. I can't say if that's a bug in Qt or a "conformity problem", but it shows that calling the clementine DBus interface from Qt works not straight forward.
I tried to access the interface using a simple Qt program (like the example code you posted) and got the same reply as you did. Inspecting the received
QDBusArgumentshowed that its
currentyType()is a MapType (4) which means that the arguments are organized as a key/value list (the same says the MPRIS spec: see "Metadata" is an array of dict entries in the form (string, variant) eg: {sv}.,)
I think you have to extract the values manually and play arround with
beginMap(),
beginMapEntry(),
endMapEntry(),
endMap()of
QDBusArgument. (I spent just a little time but did not succeed ...)
But I'm interested if that's the right way so if you find a solution please share it here! (If I find some time I will try it again, too...)
@amonR2
out of curiousity I was looking for a solution (I didn't know before that maps and structs are supported by DBus). And I found a way - using Qt 5.6, but should work with Qt 4.8 as well:
// first the same code as you did already QDBusInterface *iface_Clementine = new QDBusInterface("org.mpris.clementine", "/Player", "org.freedesktop.MediaPlayer", QDBusConnection::sessionBus()); if(iface_Clementine->isValid()) { QDBusMessage replyClementine = iface_Clementine->call("GetMetadata"); qDebug() << replyClementine; // there is just one argument and this argument is a map QList<QVariant> args = replyClementine.arguments(); const QDBusArgument &arg = args[0].value<QDBusArgument>(); // the map has to be extracted entry by entry which is a tuple of key and value // in this case, the key is a string but the type of the value depends on the key (string, int, ...) arg.beginMap(); while (!arg.atEnd()) { QString key; QDBusVariant value; arg.beginMapEntry(); arg >> key >> value; arg.endMapEntry(); qDebug() << "Key:" << key << "\t\tValue:" << value.variant(); } arg.endMap(); }
You still have to inspect the type of the value (depends on what the entry represents: album or track number, or whatever). And the argument parsing can be wrapped in a stream operator, see
Some further helpul information can be found here:
I have no clue why the qdbusviewer of Qt 5.6 is unable to call the DBus method but the code snippet above is working fine.
WOOW!!! Thank you sooo much micland!! It works!!!
I tried a code similar to yours but some data were missing. Just needed to do a very tiny modification on your code:
qDebug() << key << value.variant().toString();
or directly
qDebug() << value.variant().toString();
Except it, it is perfect! Thank you again.
@micland said:
I have no clue why the qdbusviewer of Qt 5.6 is unable to call the DBus method but the code snippet above is working fine.
I hope it is going to change as I intend to use it or the 5.7 version very soon.
Cheers again. | https://forum.qt.io/topic/68184/dbus-reply-data-accessed-via-qstring/22 | CC-MAIN-2019-35 | refinedweb | 1,914 | 57.87 |
I added a test because I thought it was useless to activate a pin if it's already active. But maybe I'm wrong? I don't know...
Anyway, thank you
Thank you Luni, this does exactly what I want. I added 2 simple functions to LED.h:
#ifndef LED_h
#define LED_h
class LED
{
public:
LED(unsigned _pin)
Thank you !
I chose to rename my variables ;-)
But I have another problem... TeensyTimerTool throws errors:
I have to figure out why...
Hi Luni,
I installed the latest version of TeensyTimerTool and use it with a project that already had the previous version working. In this project I use TeensyTimerTool to drive a chronometer with...
Thank you Luni,
I'll try this ASAP
Hi,
I have found the problem, but I'm having a hard time figuring out how to solve it.
In my setup function, I have this:
for (int i = 0; i < 5; i++)
{
analogWrite(PIN_LED, LED_BRIGHTNESS);...
The last test I've done is to place the button init at the very beginning of setup().
Doing this greatly reduces the pin pullup delay, down to 350 ms, but the result is still the same.
The last one for today :
[attachment 22325: scope capture]
Yellow: the main power (+12.5 V)
Magenta: the Teensy 3V3 regulator
Blue: the voltage at the digital pin on the Teensy side (pullup activated)
Green: the voltage at the...
Here are scope traces of what I think causes this:
[attachment 22319: scope capture]
Yellow: the main power (+12.5 V)
Magenta: the voltage at the button (0 V when not pressed)
Blue: the voltage at the digital pin on the Teensy side...
This is the code:
#include "Bounce2.h"
#define PIN_BOUTON1 2
#define PIN_BOUTON2 3
#define PIN_BOUTON3 4
#define DEBOUNCE_TIME 5
#define SHORT_PRESS 500
Looks like a code problem too.
If I try to invert the way I trigger the photocoupler (button before the LED, and resistor after the LED), it behaves the same.
Hello,
I'm currently testing hardware that uses digital inputs (3 inputs). I'm trying to use a photocoupler: a Renesas PS2801-4.
I think I have made a mistake in the way I use it, because if I set:...
Autodesk, which owns Eagle today, recommends NOT using the autorouter. Instead, they recommend using it to verify that your component placement is good:
They say that if the autorouter can't complete 85%...
Thank you.
Hi,
Added Teensy 4.0/4.1 to Chris O.'s fork.
[attachment 22123]
Edit: typo.
Luni, your PM box is full ;-)
This is what I do, and it requires some adjustments in vsTeensy.json AND c_cpp_properties.json ;-)
Also, I noticed in some vsTeensy.json files a "dummy" project after the real project. The dummy project...
Hello Luni,
I'd like VisualTeensy to include ALL libraries in the project. It would be better for us to keep a working project, even if/when a library changes and breaks compatibility with an older...
Sorry Luni, this was a cache problem
Cleaning the project and rebuilding it solved all the strange behaviour.
thank you
Hello Luni,
Today I updated the TeensyTimerTool library on my computer and since then, my project doesn't operate as it used to. Periodic timers don't start.
I tried to change some commands according to your...
Thank you guys,
what an ultra-constructive answer. In France, starting a post with "And?" means you consider the person who asked a pure idiot. Is it the same in Italy?
Excuse me for referring to ...
Tested OK today,
Thank you Luni.
Did you ever play with the Git integration in VS Code? I can't make it work with my Synology NAS Git server. Does it need SSH or only user/password?
Hello,
In Arduino, delay() is blocking. So one has to replace delay() with their own function?
Is this a good approach, or is there a more efficient one?
void wait(uint_least16_t timeToWait)
{
...
Will test it soon and report.
Thank you
It means that "Mes%20documents" needs to become "Mes documents" to work without a warning at compilation.
If I don't change this, the project can still compile, but complains with an error about...
Thank you Luni for the link to the cpp issues.
This is the one that solves the problem:
One has to change...
In VT, I notice that when you generate "c_cpp_properties.json" you use different formats for file paths.
One is (for example) :
"C:\\Program Files...
It's odd. If I compile the project it's OK, so I don't think it's a typo in the main.cpp file.
But if I format the main.cpp file everything goes wrong and then I can't compile anymore...
I'll wait...
Hello,
I noticed some strange behaviour in VSC today, not related to VT. Maybe someone has the solution:
When I use Shift-Alt-F to format code, it wipes the code out. Previously it worked OK...
Thank you.
TCK64 works great here.
I'm using a Teensy 3.2 so there are no GPTs available.
But it was simple with TCK. I just used a "callback execution counter" and a loop to execute the relevant code each time the counter is at the...
Hello Luni,
Is there a simple way to get a timer that fires each minute?
I understand that the TCK timer uses an (int) for the period, which is too small for a minute. I have other timers that use (millis).
I can...
Hello Luni,
Thank you for this addon to VSC. I like it and will never go back to the Arduino IDE!
To answer to myself :
tMot.mini = (int16_t)EEPROM.readInt(25);
Hello,
Is there a simple and effective way to convert a uint16 value to a int16 value ?
Currently I use this:
tMot.mini = EEPROM.readInt(25);
if (tMot.mini > 65000) tMot.mini =...
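The cast used in the later reply works because converting a 16-bit unsigned value to int16_t reinterprets the same bit pattern as two's complement, so values above 32767 become the intended negative numbers. A minimal standalone sketch (the helper name is illustrative):

```cpp
#include <cstdint>

// Reading 16 bits from EEPROM gives the raw bit pattern; casting to
// int16_t reinterprets it as signed two's complement, so e.g.
// 65535 -> -1 and 65436 -> -100, with no range check needed.
int16_t asSigned(uint16_t raw) {
    return static_cast<int16_t>(raw);
}
```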
Thank you to everyone.
@Luni : TeensyTimerTool problem was due to a space in the path to reach it ;-)
Thank you Luni.
Another one for you. Today I gave VSC and VisualTeensy a try. I use Notepad++ with success, but it lacks IntelliSense...
I created a new project that meets the requirements for an...
I finally ended up with this code. It works, but I think it can be better:
#define debounceScreen 200 // Millisecondes
#define longPress 3500
bool wasTouched = false;
bool wasReleased =...
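One way to structure a short/long press detector is a small state machine fed from loop() with the current time and touch state. This is a sketch under assumptions — the class, method names, and event handling are illustrative, only the two threshold values come from the post:

```cpp
#include <cstdint>

// Short/long press detector: update() is called from loop() with the
// current time (e.g. millis()) and the touch state; it reports an event
// when the finger lifts. Presses shorter than the debounce window are
// ignored as bounce.
enum class PressEvent { None, Short, Long };

constexpr uint32_t kDebounceMs  = 200;   // from the post's #defines
constexpr uint32_t kLongPressMs = 3500;

class PressDetector {
public:
    PressEvent update(uint32_t nowMs, bool touched) {
        if (touched && !wasTouched_) {          // finger just came down
            wasTouched_ = true;
            pressStart_ = nowMs;
        } else if (!touched && wasTouched_) {   // finger just lifted
            wasTouched_ = false;
            const uint32_t held = nowMs - pressStart_;
            if (held < kDebounceMs) return PressEvent::None;   // bounce
            return held >= kLongPressMs ? PressEvent::Long
                                        : PressEvent::Short;
        }
        return PressEvent::None;
    }
private:
    bool wasTouched_ = false;
    uint32_t pressStart_ = 0;
};
```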
I'm sorry, but it's absolutely unclear :-(
At least difficult to do (for me)
Hello Luni,
It looks like the callbacks for timers must be void functions without any parameters. Is that it?
#define RefreshADCPeriod 40
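With a plain `void (*)()` callback type, the usual workaround is to share state through a file-scope variable rather than a parameter. The sketch below illustrates the pattern; the names and the stand-in `fireTimer` are assumptions, and whether a given library also accepts lambdas or std::function depends on that library:

```cpp
#include <cstdint>

// Plain function-pointer callback: matches void(void), so it takes no
// parameters; state is shared through a file-scope variable. (In a real
// sketch the shared variable would be declared volatile, since it is
// written from a timer context.)
using TimerCallback = void (*)();

uint32_t adcTicks = 0;            // state the callback updates

void onAdcTick() {                // matches void(void): no parameters
    ++adcTicks;
}

// Stand-in for the timer firing: just invokes the registered callback.
void fireTimer(TimerCallback cb) {
    cb();
}
```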
Hello,
I'm looking for a convenient way to allow short presses and long presses on the display. Is there an easy way to do it?
Actually my code is this :
void loop() {
if...
Hi,
I'm sorry, it was a code error. I placed this definition in a "switch/case" case and it complained (and broke the switch/case choices).
I then moved the definition into a function that is called...
In fact it works, but the compiler complains ;-)
Is there another way to do this ?
The goal is to look up a string given a number.
Hello,
It looks like we can't do this with Arduino/Teensy?
char Nom[10][20]={"Paris","Marseille","Lyon","Rennes","Toulouse","Strasbourg","Grenoble","Lille","Nantes","Bordeaux"};
Or do I...
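The 2D char array itself is valid C++ (the thread's earlier follow-up explains the real error: declaring it inside a switch case). A standalone sketch of the lookup; the `nameFor` helper is illustrative, not from the post:

```cpp
#include <cstring>

// Array of fixed-width C strings, indexable by number. Declared at file
// scope (or inside a function body, but not bare inside a switch case).
const char Nom[10][20] = {"Paris", "Marseille", "Lyon", "Rennes",
                          "Toulouse", "Strasbourg", "Grenoble",
                          "Lille", "Nantes", "Bordeaux"};

// Bounds-checked lookup: returns "" for an out-of-range index.
const char* nameFor(int i) {
    return (i >= 0 && i < 10) ? Nom[i] : "";
}
```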
Thank you for this explanation.
I confirm that including TeensyTimerTool before the other libraries works.
However, I think there is a bug with FTM timers and the Teensy 3.2 (72 MHz - compiled as Faster).
When I use this as a periodic...
Thank you,
I'll try and let you know
teensy 3.2
Hello Luni
I found an incompatibility between TeensyTimerTool and Paul's ILI9341_t3.
When one wants to use both libraries in the same sketch, the compiler can't complete and throws errors.
Here is... | https://forum.pjrc.com/search.php?s=03ce10935ed08a9eb745adfbebfb463d&searchid=5829595 | CC-MAIN-2020-50 | refinedweb | 1,307 | 77.13 |
IRC log of prov on 2012-11-10
Timestamps are in UTC.
13:27:29 [RRSAgent]
RRSAgent has joined #prov
13:27:29 [RRSAgent]
logging to
13:27:31 [trackbot]
RRSAgent, make logs world
13:27:31 [Zakim]
Zakim has joined #prov
13:27:33 [trackbot]
Zakim, this will be PROV
13:27:33 [Zakim]
ok, trackbot; I see SW_(F2F)8:00AM scheduled to start 27 minutes ago
13:27:34 [trackbot]
Meeting: Provenance Working Group Teleconference
13:27:34 [trackbot]
Date: 10 November 2012
13:28:10 [pgroth]
Agenda:
13:28:25 [pgroth]
rrsagent, make logs public
13:34:14 [Dong]
Dong has joined #prov
13:34:15 [lebot]
lebot has joined #prov
13:35:59 [Luc]
Luc has joined #prov
13:36:01 [lebot]
Zakim, do you come in on Saturdays?
13:36:01 [Zakim]
I don't understand your question, lebot.
13:36:59 [hook]
hook has joined #prov
13:37:07 [Curt]
Curt has joined #prov
13:37:52 [Luc]
@Dong, we are waiting for Ivan to bring in the speakerphone
13:41:42 [SamCoppens]
SamCoppens has joined #prov
13:42:48 [CraigTrim]
CraigTrim has joined #PROV
13:43:04 [pgroth]
dong are you online?
13:43:23 [lebot]
Zakim, will the chairs be benevolent today?
13:43:23 [Zakim]
I don't understand your question, lebot.
13:43:32 [smiles]
smiles has joined #prov
13:43:56 [GK]
GK has joined #prov
13:44:06 [Zakim]
SW_(F2F)8:00AM has now started
13:44:09 [TomDN]
TomDN has joined #prov
13:44:13 [Zakim]
+??P0
13:44:31 [GK]
(Silence)
13:44:35 [smiles]
zakim, ??P0 is me
13:44:35 [Zakim]
+smiles; got it
13:45:39 [pgroth]
simon, dong can you get on skype
13:46:13 [pgroth]
we don't have a polycom right now
13:46:54 [pgroth]
Topic: Implementation Report
13:47:04 [GK]
Luc: this session will be about implementation report
13:47:13 [GK]
Things we'd like to do:
13:47:23 [GK]
1. update on where we are
13:47:35 [GK]
(Paul notices we're 15 minutes early)
13:48:26 [Zakim]
+Curt_Tilmes
13:48:39 [GK]
OK… we'll restart in 15 minutes… maybe we'll have a speakerphone
13:48:41 [Luc]
zakim, who is on the phone?
13:48:41 [Zakim]
On the phone I see smiles, Curt_Tilmes
13:48:55 [GK]
(Curt's experimenting with a mobile phone connected to Zakim)
13:48:57 [smiles]
yes
13:49:02 [Luc]
zakim, who is on the call?
13:49:02 [Zakim]
On the phone I see smiles, Curt_Tilmes
13:49:14 [Zakim]
+??P2
13:49:16 [GK]
I hear you!
13:49:42 [Dong]
??P2 is me
13:49:54 [Dong]
zakim, ??P2 is me
13:49:54 [Zakim]
+Dong; got it
13:51:07 [laurent]
laurent has joined #prov
13:51:40 [Zakim]
-Curt_Tilmes
13:53:29 [Luc]
scribe: GK
13:53:37 [Luc]
chair: Luc
13:56:17 [ivan]
ivan has joined #prov
13:56:39 [Zakim]
+ +1.617.715.aaaa
13:56:53 [ivan]
zakim, this is f2f
13:56:53 [Zakim]
ivan, this was already SW_(F2F)8:00AM
13:56:54 [ivan]
zakim, who is here?
13:56:55 [Zakim]
ok, ivan; that matches SW_(F2F)8:00AM
13:56:55 [Zakim]
On the phone I see smiles, Dong, +1.617.715.aaaa
13:56:55 [Zakim]
On IRC I see ivan, laurent, TomDN, GK, smiles, CraigTrim, SamCoppens, Curt, hook, Luc, lebot, Dong, Zakim, RRSAgent, trackbot, stain
13:57:17 [pgroth]
pgroth has joined #prov
13:57:17 [pgroth_]
pgroth_ has joined #prov
13:57:38 [pgroth]
pgroth has left #prov
13:58:05 [pgroth]
pgroth has joined #prov
13:58:30 [GK]
Restarting...
13:58:38 [GK]
Session about implementation report
13:58:43 [GK]
Would like to:
13:58:49 [GK]
1. update from Paul
13:59:42 [GK]
concerned about getting to end of implementation phase, then finding features are not implemented
14:00:00 [GK]
would like to have advance indication of what people will implement
14:00:36 [jcheney]
jcheney has joined #prov
14:00:38 [GK]
2. review what we'll do for constraints
14:00:43 [pgroth]
14:01:16 [GK]
Paul: talking about "gathering implementation evidence"
14:01:36 [GK]
3 parts (see page at link above)
14:01:57 [GK]
Overall happy with framework as described
14:02:16 [GK]
Ivan: what are the arrows on table 2?
14:02:59 [GK]
Paul: links to implementations; blue arrows consume, green arrows produce a term
14:03:05 [GK]
s/Paul?Dong/
14:03:35 [Luc]
action: Dong to describe blue and green arrows in implementation report document
14:03:35 [trackbot]
Created ACTION-138 - Describe blue and green arrows in implementation report document [on Trung Dong Huynh - due 2012-11-17].
14:03:38 [Luc]
q?
14:03:49 [GK]
Paul: more questions?
14:04:45 [GK]
Ivan: minor thing… use usual W3C editorial style - do we intend to publish as note? (Looks like it might be one) Clarify that implementation report does not need to be published as TR.
14:05:20 [Dong]
ok, I'll change it to a note
14:05:24 [Luc]
action: pgroth to change the respec style for implementation report
14:05:24 [trackbot]
Created ACTION-139 - Change the respec style for implementation report [on Paul Groth - due 2012-11-17].
14:05:31 [Luc]
q?
14:05:44 [pgroth]
14:06:11 [GK]
Paul: prov constraints process document… idea to outlines process for testing constraints
14:06:25 [GK]
format for test case files (sect 2.1)
14:06:56 [Luc]
q+
14:07:07 [pgroth]
ack Luc
14:07:14 [GK]
identifier… constraint identifiers are embodied in the test case identifier
14:07:34 [Zakim]
+[IPcaller]
14:07:41 [Luc]
q-
14:07:42 [zednik]
zednik has joined #prov
14:07:43 [GK]
Luc: some of the constraints will be renumbered following removal of mentionOf
14:07:49 [ivan]
zakim, who is here?
14:07:49 [Zakim]
On the phone I see smiles, Dong, +1.617.715.aaaa, [IPcaller]
14:07:50 :08:00 [pgroth]
action: dong check constraints are matching to the updated document
14:08:00 [trackbot]
Created ACTION-140 - Check constraints are matching to the updated document [on Trung Dong Huynh - due 2012-11-17].
14:08:30 [GK]
GK: are constraint numbers fragile for this?
14:08:59 [GK]
Paul: wanted automated reporting of test case coverage.
14:09:07 [Luc]
q?
14:09:12 [GK]
Ivan: change respec style for this document too
14:09:35 [Luc]
q+
14:10:09 [GK]
Paul: hasn't really been reviewed as yet. Need some early review.
14:10:13 [ivan]
zakim, aaaa has SamCoppens TomDN laurent hook Curt pgroth Luc jcheney ivan GK lebot CraigTrim
14:10:13 [Zakim]
+SamCoppens, TomDN, laurent, hook, Curt, pgroth, Luc, jcheney, ivan, GK, lebot, CraigTrim; got it
14:10:26 [ivan]
zakim, who is here?
14:10:26 [Zakim]
On the phone I see smiles, Dong, +1.617.715.aaaa, [IPcaller]
14:10:27 [Zakim]
+1.617.715.aaaa has SamCoppens, TomDN, laurent, hook, Curt, pgroth, Luc, jcheney, ivan, GK, lebot, CraigTrim
14:10:27 :10:54 [GK]
Luc: would like to identify reviewers; preferably developers; mostly not on call.
14:11:06 [Dong]
The sound on the phone line is broken, I have to rely on the scribe :(
14:11:08 [hook]
this one in respec.js? : var respecConfig = { specStatus: "ED", // specification status (e.g. WD, LCWD, NOTE, etc.).
14:11:37 [GK]
James: happy to look at this; biggest problem is managing data as number of test cases grows
14:12:10 [zednik]
zakim [IPcaller] is zednik
14:12:20 [GK]
Luc: need to be clear if test case is expected to succeed; currently in table, but should be in name for automated testing?
14:12:30 [GK]
Paul: I'm happy with that.
14:13:09 [ivan]
hook: 'unofficial' or 'base' could be used
14:13:22 [GK]
Dong: prefer using identifer to directory for different outcomes (pass/fail/etc.)
14:13:24 [ivan]
(per
)
14:14:05 [Luc]
action: Dong to update naming convention to include success/failure of test
14:14:05 [trackbot]
Created ACTION-141 - Update naming convention to include success/failure of test [on Trung Dong Huynh - due 2012-11-17].
14:14:06 [pgroth]
q?
14:14:09 [Luc]
q-
14:14:28 [GK]
Paul: last thing - questionnaire
14:15:09 [GK]
… idea was to ask implementers to fill out - what they support, and also other implementations with which they interoperate
14:15:21 [GK]
Stephan: questionnaire is complete, has been reviewed
14:15:35 [GK]
… want another round, get some more implementers to fill it out
14:15:42 [pgroth]
@stephan can you add a link
14:15:47 [ivan]
q+
14:15:57 [GK]
… discussion on mailing list about external vocabs using/extending prov
14:16:10 [GK]
… ask these groups to fill out the questionnaire
14:16:13 [pgroth]
q+
14:16:16 [Luc]
ack iv
14:16:48 [GK]
Ivan: if I am an implementer, do I see what's in the Google doc?
14:16:58 [GK]
Stephan: will add link to actual questionnaire
14:17:06 [pgroth]
14:17:23 [GK]
This is what implementers will see
14:18:06 [pgroth]
q?
14:18:09 [pgroth]
ack pgroth
14:18:15 [Dong]
I think we'll need a (wiki) page to explain the whole process of reporting an implementation (with links to all the relevant documents), which will be sent with the call for reports
14:18:55 [Dong]
Perhaps, the questionnaire can have include a link to the explanation as well
14:19:25 [smiles]
q+
14:19:31 [pgroth]
ack smiles
14:19:40 [GK]
Stephan: The first page collects information that controls information displayed on subsequent pages
14:19:42 [ivan]
q+
14:20:14 [GK]
Smiles: are tools like prov-python, ?, a framework of an application
14:20:16 [ivan]
zakim, who is here?
14:20:16 [Zakim]
On the phone I see smiles, Dong, +1.617.715.aaaa, [IPcaller]
14:20:17 [Zakim]
+1.617.715.aaaa has SamCoppens, TomDN, laurent, hook, Curt, pgroth, Luc, jcheney, ivan, GK, lebot, CraigTrim
14:20:17 :20:17 [Dong]
Prov python is a framework
14:20:47 [GK]
Stephan: they go down same path, so could combine these as single item.
14:20:54 [ivan]
q-
14:21:28 [ivan]
zakim, [IPcaller] is stain
14:21:28 [Zakim]
+stain; got it
14:22:12 [GK]
Paul: we have four divisions… is the distinction between libraries, services, applications clear?
14:22:24 [hook]
q+
14:22:46 [GK]
Stephan: distinction is not large - maybe not needed?
14:23:05 [smiles]
I think some people might unnecessarily worry about the distinction if there are multiple options
14:23:13 [Luc]
q+
14:23:18 [GK]
… also no sections for pure publishers of provenance. Or is that a service?
14:23:22 [Luc]
q?
14:23:41 [pgroth]
q+
14:24:09 [GK]
Hook: implementation type is single-choice, but some implementations may be more than one of these.
14:24:19 [Luc]
ack ho
14:24:44 [GK]
Stephan: currently have to fill form multiple times; may want to change the questionnaire to clarify this.
14:25:01 [GK]
… don't know if they can be handled in a single pass
14:25:14 [GK]
Luc: MentionOf should be removed from the questionnaire
14:25:22 [Luc]
ack lu
14:25:46 [GK]
Paul: people would like to be able to click on the questionnaire and see all the questions before filling it out.
14:26:39 [Luc]
q?
14:26:40 [GK]
… maybe have several different questionnaires for each kind of implementation. Click on link, see all questions, without having to branch within the form.
14:26:47 [lebot]
+1 to it's a barrier to "continue" in the survey.
14:26:47 [Luc]
ack pg
14:26:50 [hook]
q+
14:26:50 [GK]
Stephan: I think that's reasonable
14:27:11 [Luc]
q?
14:27:39 [GK]
q+ to ask if common questions across all questionnaire types can be auto-filled
14:27:49 [Luc]
action: zednik to create 3/4 questionnaires instead of a single branching one (+ remove mention)
14:27:49 [trackbot]
Created ACTION-142 - Create 3/4 questionnaires instead of a single branching one (+ remove mention) [on Stephan Zednik - due 2012-11-17].
14:27:51 [Luc]
q?
14:28:04 [Luc]
ack ho
14:28:19 [lebot]
q?
14:28:26 [GK]
hook: clarify what is meant by publisher(?) in this context
14:28:33 [lebot]
I added PROV-O to
14:28:50 [GK]
paul: anyone who creates provenance that appears somewhere on the web. (Following SKOS?)
14:29:02 [lebot]
q+ to ask if prov-o's prov-o is in "Publishers" like Curt
14:29:14 [GK]
ack gk
14:29:14 [Zakim]
GK, you wanted to ask if common questions across all questionnaire types can be auto-filled
14:29:22 [Dong]
q+ to ask about translating answers to the questionnaire to the exit criteria
14:29:54 [GK]
stephan: don't know how it can be done
14:29:59 [Luc]
ack lebo
14:29:59 [Zakim]
lebot, you wanted to ask if prov-o's prov-o is in "Publishers" like Curt
14:30:50 [Luc]
q?
14:30:53 [GK]
Tim: Does the provenance in PROV-O the document count as publishing
14:30:58 [lebot]
q-
14:31:45 [GK]
Ivan: possible add provenance statement in ReSpec … that would be an implementation, also every published spec
14:31:49 [Luc]
q?
14:32:24 [GK]
Dong: mapping answers from questionnaire to CR exit criteria
14:32:44 [pgroth]
q_
14:32:46 [pgroth]
q+
14:32:51 [Luc]
ack don
14:32:51 [Zakim]
Dong, you wanted to ask about translating answers to the questionnaire to the exit criteria
14:32:58 [GK]
… need two implementations each feature. Can they be vocabs, or apps that consume/produce ?
14:33:42 [GK]
Paul: we need *pairs* of impl; vocabs count toward coverage, but not really qualifying as a member of a pair
14:33:51 [Luc]
q?
14:34:01 [Luc]
ack pg
14:34:29 [Luc]
q+
14:34:52 [GK]
Paul: we need applications that generate/consume every construct in each serialization
14:35:05 [GK]
q+
14:35:37 [GK]
q+ to say that I think consume/produce pairs for vocab terms - ensures devs agree about how the modelling works
14:36:26 [GK]
Luc: hear something that bothers me - constraints don't need produce/consume pairs
14:36:32 [Luc]
ack lu
14:37:13 [Luc]
ack gk
14:37:13 [Zakim]
GK, you wanted to say that I think consume/produce pairs for vocab terms - ensures devs agree about how the modelling works
14:37:14 [Luc]
q?
14:37:39 [ivan]
q+
14:38:20 [GK]
Paul: my biggest concern. We need to get constraint test cases in order and ready to go. Would like these available before/as we go to CR, before facing the dragon/Director
14:38:31 [Luc]
ack iv
14:40:05 [GK]
Ivan: Director may ask: why did we not use W3C facilities to make the forms; data belongs to Google. Answer may be that the form has a branching structure (but we just got rid of that). But data ownership may be a concern.
14:40:32 [GK]
q+ to ask if it's enough to take a data dump and put it on W3C site
14:40:46 [GK]
Ivan: some companies may have concerns about giving data to another company
14:40:57 [GK]
q-
14:41:20 [zednik]
q+
14:41:47 [lebot]
q?
14:41:59 [GK]
Ivan: Once data is stored by Google, it will stay there, can't be removed. But companies (and company lawyers) will say "no way".
14:42:09 [Luc]
ack ze
14:42:11 [lebot]
but, won't google crawl the w3c-native results that we publish at w3.org?
14:42:46 [Paolo]
Paolo has joined #prov
14:42:50 [GK]
q+ to ask: can we have the alternative of submitting a spreadsheet based on a supplied template?
14:43:02 [Luc]
q?
14:43:22 [pgroth]
q+ to ask craig
14:43:23 [hook]
q+
14:43:30 [GK]
Ivan: Lawyers job is to be paranoid
14:43:51 [Luc]
ack pg
14:43:51 [Zakim]
pgroth, you wanted to ask craig
14:43:52 [Zakim]
+??P4
14:44:08 [Paolo]
zakim, ??P4 is me
14:44:08 [Zakim]
+Paolo; got it
14:44:18 [Luc]
q?
14:44:28 [GK]
Paul: suggest consider using WBS. If it's easy, that's preferable, if it's hard we can argue the toss.
14:44:54 [zednik]
zakim, [IPcaller] is me
14:44:54 [Zakim]
sorry, zednik, I do not recognize a party named '[IPcaller]'
14:45:14 [Luc]
q?
14:45:52 [GK]
Paul: I can help with WBS
14:45:55 [Luc]
action: zednik to look at wbs for the implementation questionnaire
14:45:55 [trackbot]
Created ACTION-143 - Look at wbs for the implementation questionnaire [on Stephan Zednik - due 2012-11-17].
14:46:11 [GK]
Stephan: I'll look. Questionnaire just got simpler.
14:46:15 [Luc]
q?
14:46:33 [GK]
Hook: concern may be w.r.t. public release of intellectual property.
14:46:38 [jcheney]
q+ to say what do sparql/xquery wgs do
14:46:42 [Luc]
q?
14:46:46 [Luc]
ack hoo
14:47:05 [GK]
q+ to ask if there should be an option for confidential submission
14:47:27 [Curt]
@gk -- results go into public implementation report
14:47:27 [Luc]
q?
14:48:12 [Luc]
q?
14:48:12 [GK]
q-
14:48:32 [Luc]
ack jc
14:48:32 [Zakim]
jcheney, you wanted to say what do sparql/xquery wgs do
14:48:36 [Luc]
q?
14:48:50 [GK]
Luc: moving on...
14:49:06 [GK]
Luc: want to get a feel for which features people will implement
14:49:26 [GK]
… have produced a Google doc to gather information (!)
14:49:34 [Luc]
14:51:25 [GK]
Form isn't editable yet...
14:51:34 [GK]
… it should be now
14:52:38 [Paolo]
q+
14:52:45 [lebot]
POI: tracedTo is now wasInfluencedBy
14:56:50 [Curt]
@zednik -- take a look at the GCIS line in the spreadsheet -- edit as needed
14:57:20 [pgroth]
q+
14:58:16 [GK]
(people are filling in the document)
14:58:25 [Luc]
q?
14:58:34 [pgroth]
ack Paolo
14:58:51 [zednik]
q+
14:59:07 [pgroth]
yes
14:59:24 [Luc]
ack pg
15:01:48 [pgroth]
ack zednik
15:02:01 [Luc]
action: Dong to remove reference of prov-json in implementation report, and allow entry for "other serialization"
15:02:01 [trackbot]
Created ACTION-144 - Remove reference of prov-json in implementation report, and allow entry for "other serialization" [on Trung Dong Huynh - due 2012-11-17].
15:02:04 [Luc]
q?
15:03:39 [Dong]
q+ to ask about the eligibility for PROV-JSON only implementations
15:07:07 [Luc]
q?
15:07:52 [Luc]
q?
15:08:04 [pgroth]
q+
15:08:16 [Luc]
ack dong
15:08:16 [Zakim]
Dong, you wanted to ask about the eligibility for PROV-JSON only implementations
15:08:37 [ivan]
q+
15:08:45 [GK]
General discussion as people look at spreadsheet...
15:08:50 [Luc]
ack pg
15:08:50 [ivan]
ack pgroth
15:09:18 [GK]
(question from phone): do we have to support one of the specific formats to be included in the report?
15:09:50 [Luc]
q?
15:09:53 [GK]
Paul: no, we can include "other" than core serializations as evidence of use or/support for prov
15:10:13 [pgroth]
q+
15:10:37 [Dong]
How about NASA?
15:10:39 [Luc]
ack iv
15:10:57 [GK]
Ivan: implementations listed so far are essentially from academic sources - not so many commercial implementations.
15:11:14 [Luc]
q?
15:11:15 [GK]
Paul: we have some
15:11:28 [GK]
q+
15:12:25 [Dong]
A few implementations from commercial companies are currently listed here
15:12:30 [Luc]
ack pg
15:13:10 [Luc]
ack gk
15:13:13 [Luc]
q?
15:14:32 [GK]
GK: would distinguish implementation for live service from for-academic-paper production
15:14:35 [Paolo]
q+
15:14:52 [zednik]
q+
15:15:12 [GK]
Ivan: this might be a useful topic for the questionnaire: is there an intention to support the provenance application beyond a current research project?
15:15:33 [GK]
Paul: this could be hard to formulate appropriately.
15:15:39 [Curt]
even the grad students developing a prototype always hope that their product will spin off and live on in the long term
15:15:50 [GK]
q+
15:15:53 [GK]
q-
15:16:43 [GK]
For demonstrating interoperable implementability, intended future deployment isn't necessarily an issue, IMO
15:16:56 [Luc]
q?
15:16:56 [pgroth]
Q+
15:16:58 [GK]
Paolo: how public is the list of intended implementations?
15:17:02 [GK]
Ivan: ity's public
15:17:20 [ivan]
s/ity's/it's/
15:17:22 [pgroth]
the thing that is public is this:
15:17:30 [Luc]
q?
15:17:48 [GK]
Luc: we are not collecting commitments here and now - this is for WG planning, not public.
15:17:49 [ivan]
ack Paolo
15:17:51 [Luc]
ack pao
15:17:53 [Dong]
It's useful to include such information (e.g. future support, live service, etc.) in the report, but what is the impact it has on the exit criteria, I'm wondering
15:17:54 [CraigTrim]
CraigTrim has joined #PROV
15:17:56 [pgroth]
ack pgroth
15:18:00 [GK]
Luc: see link above.
15:18:06 [Luc]
q?
15:18:31 [pgroth]
q+ to say I don't think it should be included
15:18:40 [Luc]
ack ze
15:18:55 [GK]
Stephan: we have a structure for the implementation report; are we happy putting this distinction between research/commercial in the report - don't want to ask things that don't go in the report
15:18:56 [Luc]
ack pg
15:18:57 [Zakim]
pgroth, you wanted to say I don't think it should be included
15:19:00 [Luc]
q?
15:19:01 [GK]
Paul: agree, shouldn't ask
15:19:08 [Luc]
q?
15:19:21 [pgroth]
@ivan we can battle :-)
15:19:27 [GK]
Luc: next sub-topic
15:19:41 [GK]
Constraints
15:19:41 [pgroth]
Topic: Constraints Implementation
15:19:42 [ivan]
pgroth: it is an information we should have if the question comes
15:19:59 [TomDN]
+q
15:20:08 [pgroth]
raises hand
15:20:08 [jcheney]
will try but may not have time
15:20:25 [GK]
Luc: Would be good to know who is planning to implement any of the constraints features. "show of hands" to IRC please
15:21:06 [Luc]
q?
15:21:11 [Luc]
ack to
15:21:13 [GK]
Luc: this could be intent to produce valid provenance, or to consume/assume/check it
15:21:14 [Paolo]
I am planning to pursue the Datalog-based implementation which I started this year, although the extent to which that is possible using that particular framework still needs to be clarified
15:21:38 [lebot]
implementing constraints: perhaps.
15:21:40 [Luc]
q?
15:21:50 [GK]
Paul: will implement, don't know if it will pass all tests, due to levels of inference needed.
15:21:50 [Luc]
ack pg
15:21:57 [Luc]
ack
15:22:15 [pgroth]
q?
15:22:16 [pgroth]
ack shows
15:22:17 [Luc]
q?
15:22:23 [zednik]
q+
15:22:35 [Luc]
ack ze
15:23:05 [Luc]
q?
15:23:06 [pgroth]
good question
15:23:18 [GK]
Stephan: is there a distinction between a validator and building a producer of valid prov? Had assumed implementation must be a validator. True or false?
15:23:32 [GK]
(Luc checks exit criteria)
15:23:33 [pgroth]
so it must be a validator
15:23:38 [pgroth]
or something similiar
15:23:39 [Luc]
For each of the test cases defined by the working group, at least two independent implementations pass the tests and claim to conform to the document.
15:23:44 [pgroth]
q+
15:24:17 [Luc]
ack pg
15:25:02 [Luc]
q?
15:25:03 [GK]
q+ to say that an important part of the constraints spec is that devs can understand it well enough to produce only valid prov
15:25:14 [Luc]
ack gk
15:25:14 [Zakim]
GK, you wanted to say that an important part of the constraints spec is that devs can understand it well enough to produce only valid prov
15:25:51 [zednik]
q+ does a implementation of the constraints require consumption + check vs. constraints
15:27:12 [zednik]
@GK audio is breaking up while you are talking
15:27:40 [Luc]
ack ze
15:28:34 [pgroth]
q+ to say we implement some constraints
15:28:43 [Luc]
q?
15:28:44 [jcheney]
q+ to say there are guidelines we don't / can't easily check
15:28:51 [Luc]
ac pg
15:28:58 [Luc]
ack pg
15:28:58 [Zakim]
pgroth, you wanted to say we implement some constraints
15:29:01 [GK].
15:29:29 [Luc]
q?
15:29:38 [TomDN]
+q
15:29:43 [GK]
^^s/or help/but still help/
15:29:47 [TomDN]
-q
15:30:00 [TomDN]
+1 for what Paul just said
15:30:08 [pgroth]
ack pgroth
15:30:11 [TomDN]
(the "one line" thing)
15:30:11 [Luc]
ack jch
15:30:11 [Zakim]
jcheney, you wanted to say there are guidelines we don't / can't easily check
15:30:42 [Luc]
q?
15:30:53 [GK]
Paul: we need to show we have two validators, but also some indication that there is prov being produced that satisfies the constraints
15:31:24 [GK]
jcheney: there is useful information we can collect that it may not be sensible to try and formalize
15:31:44 [TomDN]
How about: "For the features that you implement, do you support the PROV-CONSTRAINTS?"
15:32:22 [jcheney]
i will try but can't promise anything (maybe work with Paolo)
15:32:55 [jcheney]
Reza also said he thought orcal would implement (but caveat about oracle)
15:32:56 [pgroth]
action: zednik add a question to ask about use of constraints by applications (e.g. "or the features that you implement, do you support the PROV-CONSTRAINTS?")
15:32:57 [trackbot]
Created ACTION-145 - Add a question to ask about use of constraints by applications (e.g. "or the features that you implement, do you support the PROV-CONSTRAINTS?") [on Stephan Zednik - due 2012-11-17].
15:33:03 [jcheney]
s/orcal/oracle/
15:33:39 [Luc]
q?
15:35:50 [pgroth]
q+
15:35:51 [GK]
GK: expect to see implementations, producing and consuming, coming from the Wf4ever project. Also Jun is looking at further work to build and evaluate provenance data from other sources. Details not yet clear (to me), but expect something from this corner
15:36:10 [Luc]
ack pg
15:36:27 [GK]
Luc: how do we build the test cases? (?)
15:36:39 [GK]
Paul: I'd rather focus on implementation
15:37:25 [GK]
Luc: I'll volunteer (Dong?) and myself to convert validator tests to a general test suite.
15:37:45 [Dong]
Yes
15:38:13 [lebot]
how does this differ from
?
15:38:18 [Luc]
action: GK to talk to Jun about implementation of constraints and specifically test cases
15:38:18 [trackbot]
Created ACTION-146 - Talk to Jun about implementation of constraints and specifically test cases [on Graham Klyne - due 2012-11-17].
15:38:20 [pgroth]
q+
15:39:11 [Luc]
q+
15:39:16 [Curt]
You're also looking for examples both of success and failure
15:39:26 [pgroth]
ack pgroth
15:40:20 [pgroth]
q+
15:40:24 [Luc]
ack luc
15:40:30 [GK]
q+ to ask if implementers of validators if they can report which constraints are validated by their systems, as a way to get a view of coverage
15:40:33 [Curt]
separate "unit" tests from "integration" tests
15:40:53 [Luc]
it's about to review
15:40:54 [Curt]
some are focused on success/failure of a few particular tests
15:41:03 [jcheney]
q+ to advocate small test cases
15:41:05 [Curt]
some are more comprehensive
15:41:12 [Luc]
ack pg
15:42:01 [Luc]
ack gk
15:42:01 [Zakim]
GK, you wanted to ask if implementers of validators if they can report which constraints are validated by their systems, as a way to get a view of coverage
15:42:58 [Luc]
that's what I produced
15:43:03 [Curt]
edge cases
15:43:11 [Luc]
q+
15:43:15 [pgroth]
q+
15:43:32 [GK]
jcheney: small constraint-focused tests are probably more useful than big multi-constraint provenance data
15:43:49 [zednik]
@GK, yes, the constraint branch of the survey allows the user to specify constraint coverage
15:44:32 [Luc]
q?
15:44:50 [jcheney]
q-
15:44:58 [GK]
@zednik I was thinking about having the *validators* report the constraint tests invoked by test data presented
15:45:34 [zednik]
@GK that would be a nice feature of a validator
15:45:59 [Luc]
q?
15:46:01 [Luc]
ack pg
15:46:04 [jcheney]
@ivan agree we need realistic examples too (for scalability etc.) not just corner cases
15:46:04 [Dong]
@zednik, I think we're not going to ask people to fill the constraint questionnaire, but submit the results of the tests as per 1.2 in
15:46:38 [Curt]
use simple identifiers, and put a structured comment with a list of constraints exercised at the top of each test case, use a script to pull those comments into a matrix to embed in the report
15:46:39 [Luc]
q?
15:46:40 [zednik]
@Dong, but does submitting the results of tests give us an idea of supported coverage?
15:46:40 [GK]
Luc: useful to have tests marked with the constraints they are supposed to exercise, separately from examples that are additional data that can be used for testing/discussion
15:46:42 [Luc]
ack luc
15:47:42 [Dong]
@zednik, that's why we need to catalogue the test cases against specific constraints
15:48:14 [GK]
Luc: what do we need to prepare for the CR teleconference?
15:48:23 [Luc]
q?
15:49:33 [GK]
Luc: propose to bootstrap the process with a few examples, then ask for volunteers to bulk out
15:51:20 [GK]
… concern that as test case author and developer, test cases are not properly independent
15:52:17 [GK]
Ivan: would be concerned if you were the *only* implementer, but if other implementers do similar, and then merge test cases, then there's a reasonable level of cross-checking that takes place.
15:52:22 [Luc]
q?
15:52:27 [Paolo]
nothing substantial
15:52:39 [jcheney]
q+
15:52:46 [CraigTrim]
q+
15:52:55 [Paolo]
my focus is to explore the boundaries of what can be supported using a particular implementation model
15:53:44 [GK]
@paul: even if you just use Luc's test cases, that's effectively an independent review of those tests
15:53:50 [Luc]
ack jch
15:53:50 [Paolo]
(very hard to follow James BTW)
15:53:58 [Paolo]
yes
15:54:02 [Paolo]
thanks
15:54:31 [Paolo]
I was planning to start from Luc's test suite
15:54:37 [Paolo]
I would be happy to use that
15:54:39 [Luc]
q?
15:54:43 [Luc]
ack cr
15:55:38 [Luc]
q?
15:55:38 [GK]
q+
15:56:12 [pgroth]
ack GK
15:56:52 [Luc]
q?
15:56:54 [Curt]
That would help with example development too...
15:56:54 [Paolo]
I will prob skip the next session but this was useful thanks
15:57:29 [GK]
Session ends. Resume at 11:15, to discuss Primer
15:57:38 [Zakim]
-Paolo
15:57:43 [Dong]
bye all
15:58:33 [zednik]
15:58:49 [Zakim]
-stain
16:04:13 [Zakim]
-Dong
16:17:13 [pgroth]
pgroth has joined #prov
16:17:20 [pgroth]
Topic: Primer
16:17:35 [pgroth]
Scribe: CraigTrim
16:18:08 [CraigTrim]
pg: Primer - in particular the status and what we want to do about it
16:18:32 [pgroth]
16:18:33 [CraigTrim]
simon: big changes in draft; primarily to clarify/fix problems, but more extensive work on samples
16:18:49 [pgroth]
craig use tab :-)
16:18:53 [pgroth]
so pgroth
16:19:01 [pgroth]
or smiles
16:20:21 [ivan]
(there is a funny empty arrowhead on the figure right before section 3.6)
16:20:56 [CraigTrim]
smiles: simon made various corrections suggested by Ivan - what prov-n means for arguments,
16:21:09 [ivan]
(missing arrowhead on the figure right before 3.9, pointing at ex:compile)
16:21:36 [CraigTrim]
smiles: also at some point want to include something on collection - this would be useful in primer (show relationship between web page and image on web page)
16:21:49 [CraigTrim]
smiles: this will be moved to next working draft, but not on this one
16:22:36 [CraigTrim]
smiles: two issues raised on primer; implements and informedBy - this might go into the appendix and one issue (now resolved) but need stephan to close, about delegation
16:22:52 [pgroth]
q?
16:22:54 [pgroth]
q?
16:22:55 [Luc]
q+
16:23:01 [pgroth]
ack Luc
16:24:15 [CraigTrim]
pgroth: prov-dm should be normative
16:24:40 [pgroth]
q?
16:25:07 [CraigTrim]
pgroth: Is this ready for the CR doc as is?
16:25:09 [CraigTrim]
smiles: yes
16:25:11 [pgroth]
q?
16:25:32 [CraigTrim]
pgroth: let's vote on releasing as working draft now - as we did yesterday for CR
16:25:36 [CraigTrim]
pgroth: add editor's check
16:25:40 [Luc]
q+
16:25:53 [ivan]
q+
16:25:56 [pgroth]
action: smiles editor's check on the primer
16:25:56 [trackbot]
Created ACTION-147 - Editor's check on the primer [on Simon Miles - due 2012-11-17].
16:25:58 [ivan]
ack Luc
16:26:20 [CraigTrim]
Luc: as part of this editorial action, bibliography needs updating because it doesn't have the right editors for some specs
16:26:28 [CraigTrim]
Luc: do we need to use short URIs?
16:26:36 [ivan]
q-
16:26:37 [CraigTrim]
ivan: yes - it's more consistent
16:26:59 [pgroth]
q?
16:27:05 [CraigTrim]
Luc: I will produce a javascript file that has bibliographic entries - and we can share this across
16:27:28 [Luc]
action: Luc to produce js file with biblio entries for prov documents
16:27:28 [trackbot]
Created ACTION-148 - Produce js file with biblio entries for prov documents [on Luc Moreau - due 2012-11-17].
16:27:34 [CraigTrim]
smiles: do we want an ack. on public comments by robert prior to deployment?
16:27:40 [CraigTrim]
pgroth: not necessarily if we have sent out a reply
16:27:52 [CraigTrim]
pgroth: in particular if we've tried to address his comments somewhere
16:27:55 [CraigTrim]
pgroth: this is also a note
16:28:09 [CraigTrim]
smiles: can I set a deadline by which the WG can say they are happy with the responses?
16:28:21 [CraigTrim]
pgroth: WG will say that it's fine ...
16:28:37 [CraigTrim]
smiles: will send a reminder
16:29:01 [pgroth]
q?
16:29:33 [pgroth]
proposed: release primer as working draft synchronized with CR given that all editorial actions are complete
16:29:40 [ivan]
+1
16:29:41 [TomDN]
+1
16:29:42 [Curt]
+1
16:29:44 [jcheney]
+1
16:29:45 [lebot]
+1
16:29:46 [SamCoppens]
+1
16:29:47 [hook]
+1
16:29:50 [smiles]
+1
16:29:54 [CraigTrim]
+1
16:30:06 [pgroth]
accepted: release primer as working draft synchronized with CR given that all editorial actions are complete
16:30:34 [Zakim]
-smiles
16:30:50 [pgroth]
Topic: PROV-DC
16:31:00 [CraigTrim]
pgroth: this is important mapping
16:31:03 [Zakim]
+??P0
16:31:12 [smiles]
zakim, ??P0
16:31:12 [Zakim]
I don't understand '??P0', smiles
16:31:15 [CraigTrim]
pgroth: who has worked on this mapping? anyone?
16:31:15 [smiles]
zakim, ??P0 is me
16:31:15 [Zakim]
+smiles; got it
16:32:05 [CraigTrim]
pgroth: update - luc & I have read through it the other day - we think all content is there but the doc needs quite a bit of review and sculpting in terms of the text
16:32:22 [CraigTrim]
pgroth: lot of informal language ... there needs to be a check that lang is more like a spec - more precision
16:32:30 [CraigTrim]
pgroth: are all mappings in fact correct?
16:32:39 [CraigTrim]
pgroth: think most of them are, but need to check them through
16:32:53 [CraigTrim]
pgroth: so would like another round of review - a second round prior to working draft
16:33:10 [CraigTrim]
Luc: we want to check if mapping to prov is correct - we had identified a couple of issues
16:33:16 [CraigTrim]
Luc: then someone to help with some of the english
16:33:33 [CraigTrim]
pgroth: comments we had include ns for dc-prov not correctly entered, needs to be made clear that it's the prov ns
16:33:44 [CraigTrim]
pgroth: there is a graph inside the doc not compatible with our doc style
16:34:14 [CraigTrim]
pgroth: some naming is different - "publication activity" - activity is appended to the end of definitions
16:34:22 [CraigTrim]
pgroth: and again emphasizing informal use of lang
16:34:27 [pgroth]
16:34:33 [CraigTrim]
I can help
16:34:36 [smiles]
I can review and help edit for style (I should have before)
16:34:40 [Curt]
I'll review the language/expression, but I'm not a DC expert..
16:34:40 [lebot]
+1
16:35:19 [CraigTrim]
ivan: are we sure this URL is the latest version?
16:35:41 [CraigTrim]
ivan: I had similar comments, and had replies that things were changed - so let's make sure we have the right draft
16:36:00 [CraigTrim]
pgroth: will email and ask authors for most current version
16:36:30 [CraigTrim]
pgroth: I want this as working draft for candidate rec in time - and the version above is not ready
16:36:38 [CraigTrim]
ivan: in mercurial there's a later version
16:36:58 [ivan]
16:37:07 [CraigTrim]
ivan: this URL comes from mercurial
16:37:58 [pgroth]
action: pgroth check for the current version of dublin core mapping + then send email to tim and craig for review
16:37:58 [trackbot]
Created ACTION-149 - Check for the current version of dublin core mapping + then send email to tim and craig for review [on Paul Groth - due 2012-11-17].
16:38:18 [Curt]
Daniel changed the one on HG on Oct. 28
16:39:23 [pgroth]
accepted: short name for prov-dc is prov-dc and the namespace should be prov:
16:40:58 [pgroth]
q?
16:41:28 [CraigTrim]
pgroth: on agenda - next thing is time-tabling but I think in this primer (dc space) we should talk about FAQ
16:41:29 [pgroth]
Topic: FAQ
16:41:39 [pgroth]
16:42:56 [CraigTrim]
pgroth: lots of responses given to external reviewers were quite informal
16:43:00 [CraigTrim]
pgroth: lot of intuition about the design of prov, in addition to modeling (how do you use constructs, best ways, etc)
16:43:04 [CraigTrim]
pgroth: people want hints - best practices - about where to use constructs
16:43:07 [CraigTrim]
pgroth: and design decisions that underlie the entire spec (scruffy vs proper).
16:43:26 [CraigTrim]
pgroth: let's populate this FAQ with this info and it could evolve into best practices or another document ...
16:43:38 [CraigTrim]
pgroth: need contributions to updating/editing the FAQ with info - this is an easy task
16:43:42 [pgroth]
q?
16:43:47 [CraigTrim]
pgroth: we want to sign up people for this task
16:43:52 [smiles]
q+
16:44:12 [CraigTrim]
smiles: what is relation between FAQ and primer?
16:44:25 [CraigTrim]
smiles: we originally had a third section in primer for FAQ but was then removed
16:44:32 [CraigTrim]
smiles: is this a good place for it to be, or should it remain elsewhere?
16:44:47 [CraigTrim]
pgroth: idea is that FAQ can be updated after primer. The primer will eventually become static
16:44:55 [CraigTrim]
pgroth: so making FAQ separate is a good idea
16:45:12 [pgroth]
ack smiles
16:45:42 [CraigTrim]
ivan: just to clarify - semantic web wiki - there will be a separate page for prov, as there is today for RDF
16:45:47 [CraigTrim]
pgroth: already there
16:46:05 [CraigTrim]
ivan: link this page in from home
16:46:23 [CraigTrim]
ivan: it's a more generic space that will remain a wiki for this community to update FAQ etc
16:46:39 [CraigTrim]
ivan: when WG closes, WG wiki will become read only - so community work can still happen on semantic web wiki
16:46:59 [CraigTrim]
pgroth: any volunteers - just one FAQ entry?
16:47:10 [lebot]
q?
16:47:18 [smiles]
I can write one for the influenced/involved difference
16:47:23 [TomDN]
I'll do at least 1 entry :)
16:47:38 [TomDN]
(How do I refer to other PROV bundles?) ;)
16:47:44 [Curt]
I'll do at least 1..
16:47:47 [lebot]
+1 for why we didn't use FOAF
16:48:34 [Curt]
Hook will write one about ISO lineage vs. PROV
16:49:36 [pgroth]
accepted: Tim, Curt, Hook, Tom, Simon, Paul volunteer to create faq wiki entries
16:49:52 [ivan]
(b.t.w., when we go to CR, I will also ask for a prov 'button' like the ones n
)
16:50:28 [pgroth]
Topic: Messaging on document reading
16:50:36 [Luc]
@ivan, do you mean an official prov logo?
16:51:04 [CraigTrim]
pgroth: we have this issue where people read the constraints document first - before primer, before ontologies ... and they get scared
16:51:13 [TomDN]
Isn't that why we'll have PROV-OVERVIEW?
16:51:29 [CraigTrim]
pgroth: people go into wrong document - gives false impression
16:51:39 [CraigTrim]
pgroth: prov constraints is for people writing validators ...
16:51:46 [CraigTrim]
pgroth: how do we get people to go to the right document?
16:52:03 [CraigTrim]
pgroth: we have the purpose of each document in the header of each document
16:52:11 [pgroth]
q?
16:52:22 [Curt]
Link to the YouTube intro talk
16:52:24 [CraigTrim]
q+
16:52:49 [smiles]
"This is not the document to read first." :)
16:52:57 [pgroth]
color coding - for type of user
16:52:59 [lebot]
+1 @GK, easy to glaze over the top of every W3C doc b/c it's boilerplate.
16:53:02 [ivan]
q+
16:53:06 [pgroth]
ack CraigTrim
16:53:09 [ivan]
16:53:09 [pgroth]
ack ivan
16:53:16 [CraigTrim]
ivan: this URL has overview for OWL
16:53:18 [CraigTrim]
ivan: OWL has similar issue
16:53:44 [CraigTrim]
ivan: toward end of document there is table with color coding to give 1 sentence on what various docs are for
16:53:59 [CraigTrim]
ivan: having something like this will be important
16:54:15 [CraigTrim]
ivan: does not have to be identical or as complicated as the URL above, but use as guidance
16:54:24 [CraigTrim]
ivan: this is starting point in terms of references
16:54:45 [CraigTrim]
pgroth: has already taken this action
16:54:45 [pgroth]
Q?
16:55:10 [CraigTrim]
q+
16:55:17 [Luc]
q+
16:55:20 [pgroth]
ack CraigTrim
16:55:24 [pgroth]
ac Luc
16:55:26 [pgroth]
ack Luc
16:55:33 [ivan]
Another example:
16:55:42 [CraigTrim]
pgroth: we could have boilerplate, color coding, overview/table
16:55:46 [CraigTrim]
CraigTrim: not mutually exclusive
16:55:56 [pgroth]
q?
16:55:58 [CraigTrim]
ivan: this URL above - similar approach, but also different than OWL
16:56:03 [CraigTrim]
ivan: semi primer -
16:56:15 [pgroth]
q?
16:56:16 [CraigTrim]
Luc: what changes should we make in our existing docs?
16:56:20 [CraigTrim]
ivan: nothing ...
16:56:27 [CraigTrim]
Luc: do we need to edit current specs?
16:56:39 [CraigTrim]
pgroth: you can leave boilerplate that is good guidance (assuming it's read)
16:56:46 [CraigTrim]
pgroth: but additionally - what would we add - if any?
16:57:06 [CraigTrim]
pgroth: key is to add overview doc - and we can also add additional sentence/feature in each doc
16:57:33 [pgroth]
"The OWL 2 Document Overview describes the overall state of OWL 2, and should be read before other OWL 2 documents."
16:57:34 [CraigTrim]
ivan: for SPARQL and OWL ... they have at beginning boilerplate that lists docs
16:57:47 [CraigTrim]
ivan: in there they also list the reference to overview
16:57:50 [pgroth]
q?
16:58:00 [GK]
q+
16:58:05 [GK]
q-
16:58:06 [CraigTrim]
ivan: SPARQL had 11 docs, most were rec. Prov only has 4 rec, so somewhat simpler
16:58:06 [pgroth]
ack gk
16:58:18 [Luc]
q?
16:58:18 [CraigTrim]
GK: SPARQL docs are all hyperlinked, but we don't have this in the primer
16:58:26 [CraigTrim]
GK: hyperlinks will make nav simpler
16:59:18 [CraigTrim]
pgroth: in primer there is boilerplate for prov family specs ...
16:59:44 [CraigTrim]
smiles: are boilerplates centrally managed, or up to each editor to manage?
17:00:00 [CraigTrim]
Luc: maybe we should make this a common javascript addition?
17:00:09 [GK]
It's also a bug in PROV-AQ (no hyperlinks in the "family of specifications)
17:00:40 [CraigTrim]
ivan: this editorial check should be done by hand - javascript may just take more time and need debugging etc
17:00:45 [jcheney]
q+
17:01:05 [pgroth]
ack jcheney
17:01:14 [CraigTrim]
jcheney: suggest we make one clean copy we are all happy then copy+paste
17:01:27 [smiles]
+1 to jcheney's suggestion
17:02:28 [CraigTrim]
pgroth: first there is question - we need to update status to be correct and it must be consistent
17:02:47 [Luc]
17:02:54 [CraigTrim]
Luc: we have two sections in above URL
17:03:08 [CraigTrim]
Luc: (1) one that lists documents and (2) one that talks about how to read ... specs
17:03:25 [CraigTrim]
Luc: list must be updated ...
17:04:12 [CraigTrim]
Luc: how do we order? maintain existing order? or adjust ... ?
17:04:19 [CraigTrim]
ivan: starts with dm
17:04:26 [CraigTrim]
Luc: should start with recs
17:04:31 [pgroth]
q?
17:04:43 [CraigTrim]
pgroth: I think primer should be order of operations vs the recs
17:04:56 [CraigTrim]
pgroth: I would have notations first - primer, then maybe dm, then notations, constraints and then the notes
17:05:12 [pgroth]
q?
17:05:13 [CraigTrim]
Luc: that is how to read the family ...
17:05:56 [TomDN]
+q
17:06:00 [CraigTrim]
ivan: my instinct is similar to Paul's ... we want reader to start with primer or better yet overview then primer (assuming overview exists)
17:06:20 [CraigTrim]
ivan: "specifications are ... " - but neither primer nor overview are specs
17:06:34 [CraigTrim]
ivan: make it clear in each of those whether this is note or rec
17:06:42 [pgroth]
ack TomDN
17:06:55 [CraigTrim]
TomDN: I agree with Paul re: order - this is least confusing
17:07:13 [hook]
q+
17:07:13 [CraigTrim]
TomDN: but if you want to make sure recommendations stand out - do color coding, or specifically mention - or something like that
17:07:19 [TomDN]
-q
17:08:19 [pgroth]
q+
17:08:22 [pgroth]
ack hook
17:08:28 [CraigTrim]
hook: sounds like there are more facets to each description now
17:08:36 [CraigTrim]
hook: so maybe table format shows each doc name and intention, then color code rows
17:08:41 [CraigTrim]
ivan: that should go in overview
17:08:48 [CraigTrim]
ivan: but perhaps not in each rec
17:08:52 [jcheney]
17:08:54 [Curt]
q+
17:09:07 [CraigTrim]
ivan: in overview this is good entry point
17:09:08 [pgroth]
ack pgroth
17:09:21 [pgroth]
ack Curt
17:09:22 [CraigTrim]
Curt: in one of the presentations there is a diagram of one of the relationships - and that would really help on overview
17:09:31 [CraigTrim]
ivan: I will review overview
17:09:41 [pgroth]
"The OWL 2 Document Overview describes the overall state of OWL 2, and should be read before other OWL 2 documents."
17:10:00 [CraigTrim]
pgroth: we should add something like this to every abstract in every spec
17:10:07 [CraigTrim]
+1
17:10:08 [Curt]
+1
17:10:42 [ivan]
+1
17:11:24 [Curt]
With the link to PROV-OVERVIEW in the sentence
17:11:29 [ivan]
17:11:31 [CraigTrim]
pgroth: so do we refer to overall as ... ? "prov" .. ?
17:11:35 [ivan]
17:12:30 [CraigTrim]
pgroth: "prov family"
17:12:41 [pgroth]
approved: add sentence "The PROV Document Overview describes the overall state of PROV, and should be read before other PROV documents."
17:12:49 [pgroth]
q?
17:13:04 [CraigTrim]
Luc: is this something that can be used to say "this is prov compliant"
17:13:44 [pgroth]
approved: add sentence "The PROV Document Overview describes the overall state of PROV, and should be read before other PROV documents." in the last sentence of the abstract of each specification
17:14:08 [pgroth]
q?
17:14:26 [CraigTrim]
Luc: will commit changes for review
17:15:06 [pgroth]
action: pgroth remind simon what he's supposed to do
17:15:06 [trackbot]
Created ACTION-150 - Remind simon what he's supposed to do [on Paul Groth - due 2012-11-17].
17:15:31 [Luc]
17:15:52 [Curt]
That sentence should link to the PROV-OVERVIEW document.
17:16:39 [CraigTrim]
pgroth: "prov family of specifications" ... but some of these aren't specs - is that ok? or "prov family of documents"
17:16:49 [CraigTrim]
pgroth: so this latter phrase should be used everywhere
17:16:52 [CraigTrim]
ivan: only in status section
17:17:12 [CraigTrim]
ivan: how committed are we for notes will be published later?
17:17:22 [CraigTrim]
Luc: we have to be cautious
17:17:51 [CraigTrim]
ivan: I think dc ... for first public draft - we can trust it will be there - so ok to add to list
17:17:58 [CraigTrim]
ivan: pending is dictionary ... ?
17:18:09 [CraigTrim]
pgroth: only want to put things there that are first public working draft
17:18:16 [CraigTrim]
Luc: we hope dc will be there in time
17:18:29 [CraigTrim]
ivan: pending dictionary, semantics ...
17:18:40 [CraigTrim]
Luc: will see if I can get mention ready in time for CR
17:18:46 [pgroth]
q?
17:19:00 [TomDN]
PROV-LINKING !
17:19:01 [CraigTrim]
pgroth: can we use another ... the name prov-mention is ... can we use something else?
17:19:21 [pgroth]
q?
17:20:11 [CraigTrim]
pgroth: remaining is time-tabling and outreach - planning outreach
17:20:36 [CraigTrim]
Luc: have Ivan explain what's coming up ...
17:20:37 [pgroth]
Topic: Planning
17:20:45 [CraigTrim]
ivan: CR then PR ... these are the foremost steps
17:20:59 [CraigTrim]
ivan: this requires approval formally from director that everything is kosher and can be published
17:21:30 [CraigTrim]
ivan: prior to physically publishing doc ... we have to have call (2 chairs, Ivan and editors optional)
17:21:37 [CraigTrim]
ivan: and also on W3C side 2 or 3 ppl
17:21:48 [CraigTrim]
ivan: a transition call to defend our case that we did everything necessary
17:22:08 [CraigTrim]
ivan: we answered all comments and record of that .... a clean plan ... we have covered all outstanding issues etc
17:22:12 [pgroth]
q+ to ask about call for implementations?
17:22:13 [CraigTrim]
ivan: proves we are done - this must be well documented and presented
17:22:22 [CraigTrim]
Luc: is there an actual presentation?
17:22:33 [CraigTrim]
ivan: we have telco - on telco there is agenda - various points
17:22:46 [CraigTrim]
ivan: we list various links - in those links (eg to impl plan)
17:22:52 [CraigTrim]
ivan: so there is a pattern for that
17:23:06 [CraigTrim]
ivan: we have to find right time of about an hour .. 5 people ...
17:23:17 [CraigTrim]
ivan: means that timing this can be a challenge - so must prep
17:23:34 [CraigTrim]
ivan: to get to transition call there must be a call for all other working group chairs - tell them we declare ourselves ready
17:23:48 [CraigTrim]
ivan: tell them that we are going to impl and other working groups can object
17:24:15 [CraigTrim]
ivan: this is the declaration of intent call ... and between this call and the transition call - there must be 5 biz days
17:24:24 [CraigTrim]
ivan: this is how we calculate back our own timing
17:24:42 [CraigTrim]
ivan: this means if we say we want to publish on a given day in nov - then we have to come back ... a week or 2 weeks to be on safe side
17:24:48 [CraigTrim]
ivan: to account for all readiness on our side
17:25:01 [CraigTrim]
ivan: we have to try to get date - then set date with webmaster that date of pub is OK
17:25:26 [CraigTrim]
ivan: when we call out to other WG - here it is - the document should not change after that point
17:25:30 [CraigTrim]
ivan: that is point of readiness for docs
17:25:41 [CraigTrim]
ivan: only change is if we don't make it to proposed date, then things will change
17:25:48 [CraigTrim]
pgroth: question about call for impl ...
17:25:53 [CraigTrim]
ivan: this is official named CR
17:26:12 [CraigTrim]
ivan: you send out email to chairs - we intend to do CR - once the transition call happens and publication has happened
17:26:21 [CraigTrim]
ivan: then all members are told and it appears on home page
17:26:25 [pgroth]
ack pgroth
17:26:25 [Zakim]
pgroth, you wanted to ask about call for implementations?
17:26:30 [CraigTrim]
ivan: and we are looking for implementations
17:26:45 [CraigTrim]
ivan: that will be W3C-side announcement of this
17:27:18 [CraigTrim]
ivan: looking ahead for proposed rec - mechanism is set - proposed rec we will have same transition call to prove there has been an impl
17:27:38 [CraigTrim]
ivan: it is a similar mechanism - but at the end of PR, the team officially votes and members can agree yes or no to publish
17:27:41 [pgroth]
q?
17:27:45 [CraigTrim]
ivan: and we simply need enough votes
17:28:23 [CraigTrim]
pgroth: what kinds of changes we can do between CR and PR?
17:28:38 [CraigTrim]
ivan: minimal
17:28:42 [CraigTrim]
ivan: editorial can be done between PR and rec - even though this is stricter
17:28:57 [CraigTrim]
ivan: but beyond that guiding principle is that any change which would affect impl means we have to go back to last call
17:29:09 [CraigTrim]
ivan: if we make a change that invalidates a validation process - we need that last call round again
17:29:18 [CraigTrim]
ivan: editorial change is ok
17:30:35 [CraigTrim]
ivan: changes are a case by case basis - but basically, are impls changed? this is guiding principle
17:30:47 [jcheney]
q+
17:30:53 [pgroth]
ack jcheney
17:31:13 [CraigTrim]
jcheney: for example in constraints doc where I think what I've written is clear - so putting more detail is OK
17:31:26 [CraigTrim]
ivan: yes - clarification is always ok - it helps implementation
17:32:09 [CraigTrim]
ivan: let's set a date for the CR pub
17:33:18 [laurent]
laurent has joined #prov
17:35:31 [CraigTrim]
jcheney: suggest that doc list be consistent in ordering
17:35:39 [CraigTrim]
jcheney: eg read prov-n before constraints
17:36:02 [CraigTrim]
ivan: re-ordering is a good idea
17:40:40 [pgroth]
start back at 1:30
18:15:16 [pgroth]
pgroth has joined #prov
18:15:22 [Curt]
Curt has joined #prov
18:26:21 [jcheney]
jcheney has joined #prov
18:34:16 [pgroth]
Topic: Outreach & Planning
18:34:22 [laurent]
laurent has joined #prov
18:34:38 [CraigTrim]
CraigTrim has joined #PROV
18:34:45 [smiles]
yes
18:35:01 [hook]
pgroth: wrt to outreach, couple of things. need easier way/entry point for external implementors to know what we want them to do.
18:35:13 [lebot]
lebot has joined #prov
18:35:23 [hook]
... would be good to have text on guidance, why it is important, what they get in return.
18:36:13 [hook]
pgroth: I'll give it a go. could add separate section for request for implementations.
18:36:20 [pgroth]
action: pgroth to add a section on implementing prov and why and how
18:36:20 [trackbot]
Created ACTION-151 - Add a section on implementing prov and why and how [on Paul Groth - due 2012-11-17].
18:36:51 [CraigTrim]
q+
18:36:57 [hook]
pgroth: anything we can do to encourage more implementations of PROV. Any ideas?
18:37:31 [hook]
CraigTrim: businesses need to have use cases. want to target the enterprise. To help them in their line of business.
18:38:04 [hook]
... there are people in healthcare, auditing and compliance, risk management, military context for following rules of engagement
18:38:18 [hook]
... legal and police work, logistical supply chains.
18:38:33 [pgroth]
q?
18:38:35 [ivan]
q+
18:38:39 [hook]
... I can take this up. a paragraph of directed text of how it can help in this context.
18:38:40 [ivan]
ack CraigTrim
18:38:48 [hook]
pgroth: would also help to have a template.
18:39:41 [hook]
ivan: would also be great if use case also has 1-2 sentences of why provenance is important and how the model we have is useful this way.
18:39:49 [pgroth]
18:39:58 [pgroth]
18:40:22 [pgroth]
action: CraigTrim to write a paragraph motivating needs for provenance
18:40:22 [trackbot]
Sorry, couldn't find CraigTrim. You can review and register nicknames at <
>.
18:41:00 [pgroth]
action: CraigTrim to write a paragraph motivating needs for provenance
18:41:00 [trackbot]
Sorry, couldn't find CraigTrim. You can review and register nicknames at <
>.
18:41:14 [lebot]
18:41:19 [pgroth]
action: Craig Trim to write a paragraph motivating needs for provenance
18:41:19 [trackbot]
Created ACTION-152 - Trim to write a paragraph motivating needs for provenance [on Craig Trim - due 2012-11-17].
18:41:28 [hook]
GK: what time frame are we looking at for this outreach material?
18:42:11 [hook]
pgroth: ASAP, but we don't really have a deadline except for end of WG. But it would be useful to get this out to the implementors. To encourage adoption.
18:42:19 [hook]
... we are not at point where specs are stable.
18:42:43 [hook]
CraigTrim: has a blog post with 1500 hits, on an abridged prov primer.
18:42:59 [hook]
ivan: would it be possible to make a copy of that?
18:43:04 [lebot]
18:43:20 [hook]
... could give completed blog text to chairs.
18:43:28 [pgroth]
q?
18:43:32 [pgroth]
ack ivan
18:43:59 [hook]
pgroth: we had a question on is there a simple implementation that we could do?
18:44:59 [hook]
ivan: Christine would like to have a webpage where I can fill out provenance form and it would produce PROV RDF and/or Turtle output.
18:45:12 [hook]
lebot: like the FOAF generator.
18:45:14 [lebot]
18:45:32 [Curt]
q+
18:45:55 [pgroth]
ack Curt
18:46:02 [hook]
ivan: from my own experience, going back and forth to find the right terms. would be useful for this example.
18:46:47 [hook]
Curt: we had information modeling people working with scientists. would be useful to tie it all together.
18:46:56 [lebot]
q+
18:47:19 [hook]
ivan: for my use case, it's only me. but would still be a useful service.
18:47:46 [pgroth]
ack lebot
18:47:51 [hook]
lebot: could write web page with even 3 buttons to incrementally generate trace.
18:48:02 [Curt]
q+
18:48:06 [pgroth]
ack Curt
18:48:35 [hook]
Curt: we are working with Peter Fox and Marshall (Ma?), if lebot has ideas to help drive that, it would be useful.
18:49:04 [hook]
Luc: what can we advertise on implementation?
18:49:50 [hook]
ivan: some WGs do not really make good use of it. anything that is relevant is ok.
18:50:09 [pgroth]
q?
18:50:12 [hook]
pgroth: we can also do a blog post. i.e. a link to the tutorial material.
18:50:54 [hook]
ivan: regarding timelines, what is a reasonable time that we an expect all of the documents to be ready?
18:52:17 [hook]
Luc: my intent would be aiming for this week. complete the changes by 2012-11-21.
18:52:28 [hook]
jcheney: 2-weeks would probably be doable.
18:52:47 [hook]
ivan: we should take whatever is realistic.
18:53:37 [hook]
lebot: 2-weeks is during Thanksgiving holiday for US folks.
18:54:21 [hook]
pgroth: I have Overview document as well.
18:54:50 [hook]
ivan: Nov 27th is Tuesday. a good day to have the documents publication ready.
18:55:41 [hook]
pgroth: Overview currently does not exist. we also have DC, so have to check when Daniel is back. And XML is also new. Do these have more leeway?
18:57:19 [hook]
ivan: what we have to do then is in 1-2 weeks get a feeling of where we are, and contact Ralph and Thomas.
18:57:37 [hook]
... the other possibility is to put the publication date on 11th (Tuesday).
18:57:51 [hook]
pgroth: we should try to start getting the informal meeting already.
18:58:20 [hook]
ivan: are we ready? the meeting should be on the 5th. it needs 5 working days in advance.
18:58:38 [hook]
pgroth: need to start now since busy schedules for pgroth and Luc.
18:58:51 [hook]
ivan: we need to find time between 5th and 10th.
18:59:02 [hook]
pgroth: publication date on 11th is fine.
18:59:22 [hook]
Luc: Tim, is it possible to have documents complete before Thanksgiving holiday?
18:59:49 [hook]
lebot: will try to get things done sooner than later. have 3-4 day window before Thanksgiving. will work on DM and PROV-O.
19:00:04 [hook]
lebot: could push to get it done by the 20th.
19:00:42 [hook]
ivan: we should not push for tight restrictions. let's be realistic. Let's aim for the 11th, so as soon as the mail goes out to the chairs, we can contact Thomas and Ralph.
19:00:59 [hook]
pgroth: we need to schedule it now.
19:01:23 [hook]
ivan: we can write email. or simplest thing is setup a Doodle for that week.
19:02:06 [hook]
ivan: publication date is Tuesday 11th. setup Doodle for those 4 days prior.
19:02:59 [hook]
pgroth: with publication date and CR on 11th, what about Notes?
19:03:28 [hook]
pgroth: should we aim for Dec 4th for publication request for Notes?
19:03:41 [hook]
Luc: do we need to have group resolution that we go for publication?
19:04:30 [hook]
ivan: the DC exists, needs beautifying. for first public draft is ok as is. have no problem voting for it now.
19:04:42 [hook]
pgroth: we can do that on the upcoming telecon or by email.
19:05:22 [pgroth]
accepted: proposed publication date of cr dec 11
19:06:05 [pgroth]
accepted: request for publication of prov-dc, prov-primer, prov-overview dec 4 with pub date dec 11
19:06:17 [hook]
ivan: CR publication request goes out Nov 27th. pgroth to setup Doodle on Dec 5-10.
19:06:42 [pgroth]
accepted: announce cr on Nov 27
19:06:52 [hook]
Luc: I will produce a bibliographic file. should include URIs of all the documents.
19:06:57 [lebot]
20121211 is a good pile of digits
19:07:21 [hook]
ivan: will see with the web master if he is ok with the dates as well.
19:07:43 [hook]
pgroth: we are fine with dates.
19:08:22 [hook]
Luc: from yesterday, "mentions" will be a Note.
19:08:47 [hook]
ivan: for CR we have one more date to finalize. will be part of CR call.
19:09:33 [hook]
Luc: there are no Constraints. will look at all of the implementations and compile the implementation report. then go through same exercise for PR.
19:11:05 [hook]
ivan: will go through same exercise for PR, but people can work on it sooner. but consider Christmas holiday break. the period after CR could be shortened if we plan ahead. could shoot for Friday, Feb 1st
19:11:25 [pgroth]
accepted: Feb 1, 2013 end of CR
19:11:41 [pgroth]
q?
19:12:00 [hook]
Luc: what happens when we are there, and a feature X does not have two implementation.
19:12:10 [hook]
ivan: that means that feature is useless and we remove it.
19:13:06 [hook]
pgroth: we have a bigger issue with Constraints. bigger task to implement.
19:13:34 [hook]
... already have Provtoolbox, can throw provenance at it and visualize. then that's two implementations.
19:13:44 [hook]
Luc: consumer has to be generic.
19:14:10 [hook]
ivan: don't have to be overly generic. it's the intention that counts.
19:14:37 [hook]
... it forces us to think through all of the implementation issues.
19:15:25 [hook]
Luc: we need a resolution for DC for first public draft. we don't have it.
19:16:16 [hook]
pgroth: we said we will need first acceptance of public draft in telecon...Nov 29th. or can do by email.
19:16:34 [hook]
ivan: I will be on travel on Nov 29th.
19:17:35 [hook]
pgroth: we need to accept the "mentions" as a Note.
19:18:11 [hook]
... (1) voting for documents and (2) we would create a Note for mentionOf.
19:18:14 [Curt]
and what should you call the mention note?
19:18:39 [pgroth]
accepted: mentionOf will be put in a separate note as per action-135
19:19:55 [hook]
pgroth: smiles, you sent mail to working group list for public comments responses.
19:20:18 [hook]
smiles: was sending a reminder for responses.
19:20:46 [hook]
Luc: I thought it was for the external reviewers and not for the working group.
19:20:55 [Luc]
19:20:55 [hook]
smiles: as you like.
19:21:26 [smiles]
@lebot :)
19:23:08 [lebot]
are we proposing to accept those responses?
19:23:10 [pgroth]
proposed: the responses to public comments for primer ISSUE-561, ISSUE-562, ISSUE-563, ISSUE-564 are working group responses
19:23:13 [lebot]
+1
19:23:14 [ivan]
+1
19:23:14 [jcheney]
+1
19:23:19 [Curt]
+1
19:23:19 [SamCoppens]
+1
19:23:21 [CraigTrim]
+1
19:23:23 [GK]
+1
19:23:25 [hook]
+1
19:24:14 [pgroth]
accepted: the responses to public comments for primer ISSUE-561, ISSUE-562, ISSUE-563, ISSUE-564 are working group responses
19:24:27 [hook]
pgroth: smiles, you can make those changes.
19:24:35 [hook]
smiles: changes made. so we are done.
19:25:18 [hook]
Luc: looking at responses to public comment, can I invite the editors to check that everything is fine in terms of responses.
19:25:51 [hook]
... ISSUE-592. made resolution yesterday but need response.
19:26:04 [hook]
lebot: will update and send out request for group response.
19:26:42 [GK]
GK has left #prov
19:26:44 [smiles]
Bye, talk to you soon!
19:26:45 [hook]
pgroth: wrapping up, earlier than planned. thank you everyone.
19:27:04 [Curt]
Curt has left #prov
19:27:06 [pgroth]
rrsagent, set log public
19:27:12 [pgroth]
rrsagent, draft minutes
19:27:12 [RRSAgent]
I have made the request to generate
pgroth
19:27:21 [Zakim]
-smiles
19:27:24 [pgroth]
trackbot, end telcon
19:27:24 [trackbot]
Zakim, list attendees
19:27:24 [Zakim]
As of this point the attendees have been smiles, Curt_Tilmes, Dong, SamCoppens, TomDN, laurent, hook, Curt, pgroth, Luc, jcheney, ivan, GK, lebot, CraigTrim, stain, Paolo
19:27:32 [trackbot]
RRSAgent, please draft minutes
19:27:32 [RRSAgent]
I have made the request to generate
trackbot
19:27:33 [trackbot]
RRSAgent, bye
19:27:33 [RRSAgent]
I see 17 open action items saved in
:
19:27:33 [RRSAgent]
ACTION: Dong to describe blue and green arrows in implementation report document [1]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: pgroth to change the respec style for implementation report [2]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: dong check constraints are matching to the updated document [3]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: Dong to update naming convention to include success/failure of test [4]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: zednik to create 3/4 questionnaires instead of a single branching one (+ remove mention) [5]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: zednik to look at wbs for the implementation questionnaire [6]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: Dong to remove reference of prov-json in implementation report, and allow entry for "other serialization" [7]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: zednik add a question to ask about use of constraints by applications (e.g. "or the features that you implement, do you support the PROV-CONSTRAINTS?") [8]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: GK to talk to Jun about implementation of constraints and specifically test cases [9]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: smiles editor's check on the primer [10]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: Luc to produce js file with biblio entries for prov documents [11]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: pgroth check for the current version of dublin core mapping + then send email to tim and craig for review [12]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: pgroth remind simon what he's supposed to do [13]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: pgroth to add a section on implementing prov and why and how [14]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: CraigTrim to write a paragraph motivating needs for provenance [15]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: CraigTrim to write a paragraph motivating needs for provenance [16]
19:27:33 [RRSAgent]
recorded in
19:27:33 [RRSAgent]
ACTION: Craig Trim to write a paragraph motivating needs for provenance [17]
19:27:33 [RRSAgent]
recorded in | http://www.w3.org/2012/11/10-prov-irc | CC-MAIN-2015-27 | refinedweb | 12,204 | 73.71 |
Getting Test Doubles in Place, Page 3
Listing 8: Injection via subclass override.
public class Portfolio { // ... public int value() { StockLookupService service = createStockLookupService(); int total = 0; for (Map.Entry<String,Integer> entry: holdings.entrySet()) { String symbol = entry.getKey(); int shares = entry.getValue(); total += service.currentValue(symbol) * shares; } return total; } protected StockLookupService createStockLookupService() { return new NASDAQLookupService(); } // ... }
The downside of injecting via subclass override is that, once again, I violate encapsulation. My test knows more about the implementation details of Portfolio than it would otherwise. The trouble is that I can no longer change such details without impacting or possibly breaking tests. That's an acceptable cost because I value the ability to test far more than notions of design perfection. But, it's a reminder that there might be trouble if I take this concept too far, and let my tests know more significant amounts of detail about the targets they verify.
Conclusion
There are other interesting ways of injecting test doubles than I presented here. I might, for example, consider using aspects. But these three techniques—constructor/setter injection, factory injection, and subclass override injection—are the ones that I consistently use. Using these different injection techniques gives me a bit more flexibility when it comes to incorporating fakes into a system. But, an important thing I must remember is that the very introduction of these fakes implies that I now have a "hole" in my system—something that I will be unable to unit test. I can't neglect my integration tests!<< | http://www.developer.com/java/article.php/10922_3719336_3/Getting-Test-Doubles-in-Place.htm | CC-MAIN-2013-48 | refinedweb | 252 | 50.33 |
EPW/CI POLICY AND PROCEDURES

Subcourse Number MP2011
EDITION D
United States Army Military Police School
5 Credit Hours
Edition Date: October 1996
SUBCOURSE OVERVIEW

Enemy prisoner of war (EPW) and civilian internee (CI) operations is one of four missions that the military police (MP) are responsible for. History tells us that EPW and CI are as much a part of armed conflict as weapons and tactics. The next battlefield will be characterized by nonlinear operations (rear, close, and deep) and decentralized command and control. Mobility of tactical units is critical because of the increasing lethality of weapons systems. Prompt evacuation of EPW from tactical units will give our combat forces greater freedom to maneuver.

This subcourse provides an overview of MP doctrine for collecting, evacuating, and interning enemy prisoners of war and civilian internees. The doctrine in this subcourse may be applied to all levels of conflict, including mature theaters with forward deployed units, contingency operations, and operations other than war.

No prerequisites are required for this subcourse.

This subcourse reflects the doctrine that was current at the time of preparation. In your work situation, always refer to the latest publications and use the most current doctrine. Unless otherwise stated, the use of masculine pronouns includes both men and women.

TERMINAL LEARNING OBJECTIVE

ACTION: Plan EPW and CI operations in a theater of operations.
CONDITION: You have this subcourse, paper and pencil.
STANDARD: To demonstrate competency of this task, you must achieve a minimum score of 70 percent on the subcourse examination.
i
MP2011
TABLE OF CONTENTS

Section                                                          Page

Subcourse Overview ................................................ i

LESSON 1: US POLICY AND PROCEDURES ................................ 1-1
  Part A: US Policy ............................................... 1-1
  Part B: Handling Captured Materials ............................. 1-7
  Part C: Actions Required by Capturing Troops .................... 1-12
  Practice Exercise ............................................... 1-20
  Answer Key and Feedback ......................................... 1-22

LESSON 2: OPERATIONS AT BRIGADE, DIVISION, AND CORPS .............. 2-1
  Part A: Division Forward Collecting Point ....................... 2-1
  Part B: Division Central Collecting Point ....................... 2-11
  Part C: Corps Holding Area ...................................... 2-19
  Practice Exercise ............................................... 2-25
  Answer Key and Feedback ......................................... 2-26

LESSON 3: OPERATIONS AT ECHELONS ABOVE CORPS ...................... 3-1
  Part A: Theater Army EPW and CI Organizations ................... 3-1
  Part B: Processing US Captured EPW and CI ....................... 3-17
  Part C: Initial Disposition of EPW .............................. 3-41
  Part D: Accounting for EPW and CI ............................... 3-46
  Part E: Internment of EPW and CI ................................ 3-58
  Practice Exercise ............................................... 3-85
  Answer Key and Feedback ......................................... 3-86
LESSON 1
US POLICY AND PROCEDURES

MQS Manual Task: 01.3759.01.9002

OVERVIEW

LESSON DESCRIPTION: In this lesson you will learn the US policy for handling EPW and CI.

TERMINAL LEARNING OBJECTIVE:

ACTION: Understand US policy and procedures for handling EPW and CI.
CONDITION: You are a lieutenant or captain in the Military Police Corps and are involved in EPW or CI operations.
STANDARD: To demonstrate competency of this task, you must achieve a minimum score of 70 percent on the subcourse examination.
REFERENCES: The material contained in this lesson was derived from the following publications: AR 190-8, AR 190-40, AR 190-57, FM 34-52, STANAG 2044, and STANAG 2084.

INTRODUCTION

Violating international law or US policy concerning EPW and CI can not only lead to criminal charges against the violator but also increase manpower requirements for prisoner security. It is important that the Military Police consistently observe these policies and be prepared to train others in the correct procedures concerning EPW and CI.

PART A - US POLICY

Basic US policy toward EPW and CI is derived from the Geneva Conventions and is found in AR 190-8 and AR 190-57. All persons who are captured, interned, or held by US forces during a conflict must be given humanitarian care and treatment from the moment they are captured until they are finally released or repatriated. This requirement applies whenever the US is a party to a conflict, even if a state of war hasn't been declared. Any act or allegation of inhumane treatment or other violation of AR 190-8 and AR 190-57 must be reported to Headquarters, Department of the Army (HQDA) as a serious incident report (SIR) according to AR 190-40. The provisions in AR 190-8 and AR 190-57 for reporting inhumane treatment apply to enemy
prisoners of war, civilian internees, and persons who are known or suspected of committing serious offenses that could be characterized as war crimes.

PRISONER CATEGORIES

Captured persons may be classified into several categories: enemy prisoners of war (EPW), retained persons (RP), civilian internees (CI), and detainees.

Enemy Prisoners of War

Enemy prisoners of war include the following:

o Members of the enemy armed forces as well as members of the military or volunteer corps that form part of such armed forces.

o Members of militias and other volunteer corps. Included are organized resistance movements that belong to an enemy nation and operate in or outside of their territory, even if this territory is occupied. Such militias or volunteer corps, including such organized resistance movements, must--
  - be commanded by a person responsible for subordinates.
  - have a fixed, distinctive sign that can be recognized at a distance.
  - carry arms openly.
  - conduct their operations according to the laws and customs of war.

o Members of regular armed forces who profess allegiance to a government or authority not recognized by the detaining power.

o Persons who accompany an armed force without actually being members. They must have received authorization from the armed force they accompany. They must also have an identification card that was issued by the armed forces they are accompanying. Examples include the following:
  - Civilian members of military aircraft crews.
  - War correspondents.
  - Supply contractors.
  - Members of labor units.
  - Members of services responsible for the welfare of enemy armed forces.

o Members of crews who do not benefit by more favorable treatment under any other provisions of international law. Examples include crews of
civil aircraft of the enemy power and masters, pilots, and apprentices of the Merchant Marine. o Inhabitants of a nonoccupied territory who, on the approach of US armed forces, spontaneously take up arms to resist the invading forces. These persons musto have not had time to form themselves into regular armed units. carry arms openly and respect the laws and customs of war.
Persons belonging, or having belonged, to the armed forces of a country occupied by the US. The theater army (TA) commander must consider it necessary to intern these people because of their allegiance. Persons may fall into this category even though the US may have originally liberated them from prisoner of war status while hostilities were going on outside the occupied territory. This category includes persons who-have made an unsuccessful attempt to rejoin the armed forces to which they belong and which are engaged in combat. have failed to comply with a summons made to them because they fear internment.
Retained Persons

Retained persons fall under the broad classification of EPW; the term refers to persons taken into custody and held against their will. Retained persons include the following:

o Medical personnel who are members of the medical service of their armed forces.

o Medical personnel exclusively engaged in--
  - searching for, collecting, transporting, or treating the sick or wounded.
  - preventing disease.
  - administering medical units and establishments exclusively.

o Chaplains attached to enemy armed forces.

o Staff of national Red Cross societies and other voluntary aid societies duly recognized and authorized by their governments. The staff of such societies must be subject to military laws and regulations to qualify as retained persons.
Civilian Internees

A civilian internee is an enemy national protected by the Geneva Conventions who is found in occupied territory or in the territory of a party to the conflict and interned by the US Army. Internment may be ordered for imperative security reasons according to Article 78, Geneva Conventions. Additionally, a person convicted of an offense against the US can be interned in lieu of confinement in accordance with Article 86 of the Geneva Conventions. Nationals of a state not bound by the Geneva Conventions are not protected persons. Nationals of states that are cobelligerents of the US are not protected persons as long as their states have normal diplomatic representatives in the US. Nationals of neutral states bound by the Geneva Conventions are protected persons, as are stateless persons.

Detainees

The final category is the detainee. A detainee is any other person, including innocent civilians, displaced persons/refugees, suspect civilians, terrorists, espionage agents, and saboteurs. These individuals are held by the US until a definitive legal status can be established by competent authority and are treated as EPW until that time.

PRISONER OBLIGATIONS AND PROTECTIONS

The Geneva Conventions establish prisoner obligations. EPW and CI are obliged to provide their name, rank, service number, and date of birth. EPW and CI are also required--

o to observe the Code of Conduct, lawful camp regulations, and laws of the capturing state that apply.
o to perform nonmilitary labor or work that is not humiliating, dangerous, or unhealthy.
Punishment

Enemy prisoners of war may be punished under US Army laws, regulations, orders in force, and the Uniform Code of Military Justice (UCMJ). Regulations, orders, and notices that EPW are required to obey must be published in a language the prisoners understand. They must be posted where prisoners have access to them. Under normal circumstances this applies mainly to EPW operations at echelons above corps (EAC). It also applies where prisoners are interned on a permanent or semipermanent basis. Judicial proceedings against EPW may be by court-martial or civil court. The Manual for Courts-Martial (MCM) and the UCMJ are used when prisoners are court-martialed. EPW may be delivered to civil authorities when authorized by the Secretary of the Army. An EPW will not be delivered to civil authorities
for an offense unless a member of the US armed forces would be delivered for committing a similar offense.

Civilian internees may be tried by general court-martial. They may not be tried by summary or special court-martial. The internment facility commander may impose disciplinary punishment under the provisions of AR 190-57. Disciplinary measures should be used instead of judicial punishment whenever possible. Disciplinary measures include the following:

o Discontinuing privileges that are granted over and above the treatment provided for by the Geneva Conventions.
o Confinement.
o A fine not to exceed one-half of the advance pay and working pay that the EPW and CI would otherwise receive during a period of not more than 30 days.
o Fatigue duties not to exceed two hours per day. This punishment will not be applied to officers. Noncommissioned officers (NCOs) may only be required to do supervisory work.
o A restricted diet along with disciplinary segregation may be imposed upon a detainee in confinement.
Prisoners can't be punished under disciplinary proceedings until they are given precise information regarding the offense they are accused of and given a chance to explain their conduct and defend themselves. They will be permitted to call witnesses and use an interpreter if necessary.

Repatriation

EPW and CI who aren't sick or wounded are normally repatriated or released at the end of hostilities as directed by the State Department and Department of Defense (DOD). Some sick and wounded prisoners and internees are considered for repatriation during hostilities by a mixed medical commission established by HQDA. The mixed medical commission consists of two members appointed by the International Red Cross and approved by the parties to the conflict. The third member is a medical officer of the US Army selected by HQDA. Prisoners who have lost a hand or foot because of injury, paralysis, or disease, and those who suffer from a disease or injury that will probably result in death within one year despite treatment, are eligible for direct repatriation. Mixed medical commissions do not have to examine EPW in this category.

Protecting Powers

The Geneva Conventions set up an inspection system that works through "protecting powers." Any willing and able neutral country or impartial
organization, agreed upon by the parties in a conflict, may act as a protecting power. Basically, the duty of a protecting power is to check on proper application of the Conventions' rules. It also suggests corrective measures where necessary. For instance, a protecting power must periodically inspect prisoner of war camps. Prisoners have a right to ask for these inspections. Representatives of a protecting power may not be prohibited from visiting EPW and CI where they are interned or confined except for imperative military necessity.

Labor

Prisoners of war and civilian internees may be required to perform essential work where qualified civilian labor isn't available. Essential work is that which would have to be done whether or not EPW are available. Prisoners will not be employed in positions that require or permit them--

o access to classified defense information or records of other personnel.
o access to telephones or other communication systems.
o authority to command or instruct US personnel.
Prisoners may be employed in the following types of labor:

o Administration, construction, and maintenance of EPW camps.
o Commercial business, arts and crafts.
o Agriculture.
o Public works, public utilities, and construction that have no military character or purpose.
o Transporting and handling stores that are not military in character or purpose.
o Industries connected with the production or extraction of raw materials, and manufacturing industries, with the exception of metallurgical, machinery, and chemical industries.
o Domestic service.
Prohibited Labor

Officer EPW will not be required to work. The only time that officer EPW are allowed to work is when they have submitted a voluntary request in writing. NCO EPW will be required to do supervisory work only. Nonsupervisory work may be allowed if the EPW submits a request in writing. Enlisted EPW are required to do work that is not specifically prohibited. Unauthorized work includes
unhealthy or dangerous work, humiliating work, and other specifically prohibited labor such as work in chemical industries or removal of mines or similar devices. Unhealthy or dangerous work includes--

o exertion beyond physical capability.
o use of inherently dangerous mechanisms or material such as high speed cutting instruments or explosives, mechanisms that are dangerous because the person is unskilled in their use, and climbing to dangerous heights or exposure to risk of injury from falling objects in swift motion and not under full control.

Humiliating or degrading labor is labor that would be looked upon as degrading or humiliating if performed by a member of the US armed forces. This prohibition does not prevent EPW from doing ordinary and frequently unpleasant tasks such as digging a ditch or manual labor in agriculture.
Specifically prohibited labor includes work in an area where EPW are exposed to combat zone fire. It also includes tending bars or serving alcohol in officers' messes or similar establishments. Any labor near convicts or inside state prison walls is also prohibited.

PART B - HANDLING CAPTURED MATERIALS

Captured enemy documents (CED), captured enemy equipment (CEE), and associated technical documents (ATD) are reported and handled according--

o to military intelligence doctrine and STANAG 2084 for items not associated with EPW or CI.
o to military police doctrine and STANAG 2044 for items found in the possession of EPW and CI.
CAPTURED ENEMY DOCUMENTS

Captured enemy documents include any recorded information that was possessed by the enemy and comes into US possession. Captured enemy documents are normally found on dead enemy soldiers, on the battlefield, and on EPW. Captured enemy documents are categorized as--

o official documents such as maps, orders, field orders, overlays, and codes, including government and military items.
o personal documents such as letters, diaries, newspapers, and books, including private and commercial items.
Captured enemy documents are divided into four categories--A, B, C, and D.
Category A

These are documents that contain reportable information, are time sensitive, contain significant intelligence information, and may be critical to friendly courses of action.

Category B

Category B includes anything that pertains to enemy communications or cryptographic systems. Category B documents are given a tentative secret classification and handled accordingly. Category B documents are evacuated through secure channels to military intelligence.

Category C

These documents contain information that is of general intelligence value. They do not indicate a significant change in enemy capabilities or intentions.

Category D

These documents do not contain any information of intelligence value. Documents are classified Category D only after they have been examined by military intelligence personnel at EAC.

Processing and Evacuating Documents

Captured enemy documents not associated with EPW or CI are evacuated to the nearest military intelligence interrogating team as quickly as possible. Interrogating teams are normally located at or near EPW collecting points established by military police. These locations include--

o forward collecting points (brigade support area).
o central collecting points (division support area).
o corps holding areas.
Documents found on EPW and CI are tagged with part C of the standard capture tag. (See Figure 1-1.) Documents found in the possession of EPW by MP are evacuated with prisoners (in the possession of guards). Documents are examined by military intelligence personnel at the first opportunity. EPW and CI should never have direct access to CED. Identification (ID) cards are not considered CED. Confiscating an EPW's ID card violates the Geneva Convention and AR 190-8. Any CED that merits special attention, as specified in the intelligence annex of an operations order, may be evacuated ahead of EPW to the nearest military intelligence activity. Captured enemy documents must be accounted for at all times. A continuous receipt system should be used.
Captured enemy documents that are found on dead enemy soldiers or recovered from the battlefield are tagged by the capturing unit and evacuated to military intelligence personnel. Part C of the standard prisoner of war captive tag (Figure 1-1) is used for captured enemy documents that are found on the battlefield or on dead enemy soldiers. A plain piece of paper may be used to tag CED found on dead enemy soldiers or recovered from the battlefield when a standard capture tag (Figure 1-1) isn't available. Plain paper used instead of document tags must have the following information:

o Date and time captured.
o Place captured.
o Capturing unit.
o Identity of source.
o Circumstances of capture.
Markings of any kind should never be made on captured enemy documents. Captured enemy documents not associated with EPW are identified with the captured document tag (Figure 1-2) and evacuated by the capturing unit to the nearest military intelligence activity for exploitation. The first interrogating element that has the time and personnel available will screen captured enemy documents for information that answers priority intelligence requirements (PIR) or information requirements (IR). This element will assign each CED a category. The category assigned is not permanent and may be changed as necessary.

Security Classification

Captured enemy documents containing communications or cryptographic information are given a temporary classification of secret. They must be evacuated through secure channels to the tactical control and analysis element (TCAE).

Special Handling

Documents of any category that are captured from crashed enemy aircraft, particularly if they relate to enemy antiaircraft defense or enemy air control and reporting systems, are evacuated through command channels to the nearest Air Force headquarters. Documents taken from ships are evacuated through command channels to the nearest Navy headquarters. Captured maps and charts that have operational graphics are evacuated through command channels to the nearest all-source production section or branch (ASPS or ASPB).
CAPTURED ENEMY EQUIPMENT

Captured enemy equipment may provide technical intelligence information of immediate value for targeting purposes, order of battle intelligence, or to aid in determining enemy capabilities and weaknesses. Capturing units are responsible for guarding and reporting CEE through their command channels. Captured enemy equipment is tagged by the capturing unit using Part C of DA Form 5976. Captured enemy equipment is disposed of according to instructions from higher headquarters in coordination with military intelligence (G2). Under normal circumstances, CEE is evacuated to a military intelligence technical intelligence team in the corps area. If the possibility exists that CEE will be recaptured by the enemy--

o obtain information, photographs, and ATD pertaining to the CEE.
o destroy or disable the CEE to prevent the enemy from using it.
ASSOCIATED TECHNICAL DOCUMENTS

Associated technical documents (ATD) pertain to equipment of any type. They are evacuated with the equipment they pertain to. If this isn't possible, a cover sheet is attached to the document. The word "TECDOC" is written across the top of the cover sheet in large letters. A detailed description of the equipment is written on the cover sheet. Photographs that include a measurement guide should also be submitted if possible.
Figure 1-1. Standard Prisoner of War Captive Tag (STANAG 2044).
Figure 1-2. Captured Document Tag (FM 34-52).

PART C - ACTIONS REQUIRED BY CAPTURING TROOPS

Enemy prisoners of war are an inseparable part of armed conflict. Combat, combat support, and combat service support units may capture prisoners of war or accept the surrender of defeated enemy soldiers anywhere on the battlefield. Military police are responsible for collecting, evacuating, and interning prisoners of war. Other units have battlefield missions that are just as important. The effectiveness of these units is degraded when they are encumbered with prisoners of war. A delay caused by prisoners of war on yesterday's battlefield that had little or no effect on combat operations may prove to be fatal tomorrow. Combat forces are able to move greater distances in less time. Weapons are more accurate and deadly. Collecting, evacuating, and interning EPW is a critical element of the commander's mobility.
DOD OBJECTIVES

Everyone is responsible for treating prisoners of war according to the objectives established by DOD. These objectives are--

o to maximize intelligence information.
o to prevent escape or liberation.
o to promote proper treatment of captured US personnel by example.
o to weaken the will of the enemy to resist.
o to use EPW and CI as a source of labor.
Maximize Intelligence

Maximizing intelligence information means getting decisive information to the proper commander in time to be useful. Knowledge is power. Getting prisoners to intelligence personnel quickly may be the difference between succeeding and failing. All intelligence is perishable. Some is valuable for minutes or hours. Other information lasts for days or weeks. Military intelligence personnel are trained to know the difference. They are able to piece small bits of otherwise useless information into significant findings that can change the course of battle. Information needed by the commander for present and future combat operations may be identified in the intelligence annex of the operations plan or operations order as PIR and IR. Anyone who is responsible for prisoners should be familiar with the commander's PIR and IR. Examples include the following:

o Mission. Information that describes the present, future, or past missions of specific enemy units. Each unit for which mission information was obtained is identified.
o Composition. Information that identifies specific enemy units. This identification should include the type of unit (artillery, transportation, armor, and so forth) and a description of the unit's organizational chain of command.
o Strength. Information that describes the size of enemy units in terms of personnel, weapons, and equipment. A unit identification must accompany each description.
o Disposition. Information that establishes locations occupied by enemy units or activities. The information will identify the military significance of the disposition, other enemy units there, and any security measures.
o Tactics. Information that describes the tactics in use, or planned for use, by specific enemy units, and the doctrine governing the employment of these tactics. The description will provide unit identification and information about personnel and equipment losses and replacements, reinforcements, morale, and combat experiences of its members.
o Logistics. Information that describes the means by which the enemy moves and sustains its forces. This includes any information on the types and amounts of supply required, procured, stored, and distributed by enemy units in support of current and future operations.
o Electronic technical data. Information that describes the operating parameters of specific enemy electronic equipment. This includes both communications and noncommunications systems.
o Miscellaneous data. Information that supports the development of any of the other order of battle elements. Examples are passwords, unit histories, radio call signs, radio frequencies, unit or vehicle identification numbers, and psychological operations.
o
o o
Military intelligence interrogating teams may be located in each brigade support area (BSA) near the forward EPW collecting point, in the division support area (DSA) near the central EPW collecting point, and in the corps near EPW and CI holding areas. Units with prisoners of war can maximize intelligence information by evacuating prisoners without delay.

Prevent Escape or Liberation

Prisoners of war who escape or are liberated by the enemy or sympathizers can degrade combat operations by disrupting combat support and combat service support operations. Plans for guarding prisoners of war need to include measures taken--

o to prevent prisoners from escaping.
o to reduce the ability of enemy forces and enemy sympathizers to liberate captives.
Promote Proper Treatment

Promoting proper treatment of US soldiers held captive by the enemy is a major reason behind US policy towards EPW and CI. Young soldiers must be taught how
important it is to treat captives humanely. The trauma and stress of battle increase the likelihood of maltreatment. Leaders at every level must educate and supervise subordinates who handle captives.

Weaken Enemy Will

Enemy soldiers facing defeat by US forces will be more inclined to surrender if they believe that they will receive fair treatment. Enemy soldiers who are afraid of being tortured or maltreated have a reason for fighting to the death.

Provide Source of Labor

The final objective established by DOD for handling prisoners of war is to use them as a source of labor. This includes work needed to construct, administer, manage, and maintain prisoner of war camps.

INDIVIDUAL RESPONSIBILITIES

Soldiers who capture or accept the surrender of enemy prisoners of war have specific responsibilities. Remembering the key phrase "5Ss plus T" may help soldiers remember their responsibilities. The "5Ss plus T" stand for search, silence, segregate, safeguard, speed, and tag.

Search

Prisoners are searched and disarmed. Use common sense when searching prisoners. They should be searched when they are first captured and every time they are transferred. The capturing unit has primary responsibility for preparing a capture tag for each prisoner evacuated to an EPW collecting point.

Silence

Silencing prisoners reduces the opportunity for escape. Guards should be cautioned not to talk where prisoners can overhear their conversation. Communicating with prisoners should be restricted to issuing orders or instructions.

Segregate

Segregating prisoners is essential for control. Prisoners should be segregated into the following categories whenever possible:

o Officers.
o Noncommissioned officers.
o Enlisted soldiers.
o Males.
o Females.
o Civilians.
o Nationality.
o Ideology.
o Deserters.
Identifying prisoners at the capturing unit level may prove difficult because of a language barrier. Prisoners who cannot be readily classified into one of these categories should be segregated by themselves and watched more closely. Anyone captured by US forces who can't be properly identified is entitled to treatment as an EPW. Coercion of any kind may not be used to obtain information from prisoners. This includes getting the minimum data that prisoners are obliged to tell their captors (name, service number, rank, and date of birth). Prisoners who will not provide the information necessary to make a proper classification should be segregated from other prisoners until a determination can be made. EPW who cannot be readily identified by name, service number, rank, and date of birth may be accounted for by other means, such as the capture tag control number. Evacuating EPW should not be delayed to obtain information to complete capture tags.

Safeguard

Safeguarding prisoners supports the DOD objective of reducing the will of the enemy to resist, and promotes proper treatment of US soldiers held captive by the enemy. Safeguarding prisoners includes shielding them from friendly and enemy weapons fire. This is an important aspect to consider when units establish a collecting point for prisoners of war. Prisoners shouldn't be located next to obvious targets such as ammunition sites; petroleum, oil, and lubricant (POL) facilities; or communications equipment. Safeguarding prisoners also means allowing them to retain protective equipment such as gas masks, chemical protective suits, body armor, and helmets. If protective equipment is confiscated for intelligence purposes by military intelligence (a unique item, for example), a like item should be issued if possible. We are responsible for issuing protective gear to prisoners who do not already possess it.
Issuing protective equipment to prisoners isn't required when doing so will degrade the protective posture of our own forces. First aid and medical treatment are also provided to the same extent that we provide care to our own soldiers. Sick and wounded EPW are evacuated separately from, but in the same manner as, US and Allied forces. Accountability and security of prisoners and their equipment in medical facilities are the responsibility of the echelon commander.
Reporting prisoners who are evacuated from the battlefield into medical channels to the military police and military intelligence is important. Unit standing operating procedures (SOPs) should address how to report, guard, and evacuate wounded prisoners of war. Prisoners should not be given food by the capturing unit under normal circumstances when evacuation to a collecting point can be readily accomplished. Giving prisoners water depends on the situation--climate, the prisoners' condition, and the length of time until they will be evacuated.

Speed

Speed, or evacuating prisoners from the combat area quickly, is related to maximizing intelligence information and safeguarding the prisoners. Military intelligence interrogators are located at or near EPW collecting points and holding areas operated by the military police. Prisoners are evacuated by the capturing unit to the nearest prisoner of war collecting point or holding area as quickly as possible. The type of transportation available for evacuating prisoners of war will vary by theater and depend upon the situation. Units which capture prisoners are responsible for coordinating transportation. Specific methods used by military police to evacuate prisoners of war that can be adopted by other units who have to handle EPW are discussed in a subsequent lesson. A plan to evacuate prisoners to the nearest collecting point should be included in local operations orders or SOPs. Use of force, escape attempts, reacting to ambush and attack, and specific methods used for each type of transportation (air, vehicle, boat, foot march) should be covered.

Tag

Capture tags have preprinted control numbers on them; for example, 1234567A. This number is the same on all detachable parts of the tag. The control numbers are used in the same manner as control numbers on other blank forms, such as the Armed Forces Traffic Ticket.
Captive tag control numbers are not related to the internment serial number assigned to EPW and CI by a processing point in the communications zone (COMMZ). Captive tag control numbers are used so that detachable pieces and copies of captive tags can be matched with the original captive tag that is fastened to EPW. EPW may be accounted for by captive tag number when they are not willing or able to provide their name, rank, service number, and date of birth. Situations like this may arise when EPW are wounded or when EPW do not speak English and interpreters are not available. Evacuating EPW should not be delayed to obtain administrative data. When the name, rank, service number, and date of birth cannot be obtained, MP may use the captive tag control number to account for EPW.
Everything the prisoner has should be checked. Any property that is taken from prisoners must be tagged. It may become extremely important to link a particular prisoner with certain property. The most economical way of tagging a prisoner's property is to bundle it and attach an identification tag (Part C of the standard captive tag). Property found in the possession of enemy soldiers should be evacuated with them to the nearest prisoner of war collecting point. Identification cards and tags issued by the enemy soldier's government are not seized.

CLASSIFYING EPW AND CI PROPERTY

Personal effects found in the possession of prisoners of war are classified according to their disposition. Property may be retained, impounded, or confiscated.

Retained Property

Retained property includes personal effects that prisoners are allowed to keep. Retained property includes the following:

o Clothing.
o Mess equipment except knives and forks.
o Badges of rank and nationality.
o Decorations.
o Identification cards and tags.
o Religious literature.
o Articles that have a sentimental value to the prisoner or are for personal use.
Impounded Property

Impounded property is any article, including personal effects, that may make escape easier or is considered dangerous to US security interests. These articles include cameras, radios, and currency. In most cases, impounded property is eventually returned to the prisoner. Impounded property taken from prisoners must be accounted for.

Confiscated Property

Confiscated property includes items that are not normally returned to the prisoner. Examples of confiscated property are--

o weapons.
o ammunition.
o military equipment except mess equipment, helmets, and chemical protective gear.
o military documents such as codes, ciphers, pictures, maps, or sketches of military installations or implements of war.
o signal devices.
o contraband.
o illegally obtained money.
Impounded and confiscated property must be accounted for at all times. Items taken from prisoners must be tagged and identified so they can be associated with the prisoners from whom they were taken.
LESSON 1 PRACTICE EXERCISE

The following items will test your knowledge of the material covered in this lesson. There is only one correct answer for each item. When you have completed the exercise, check your answers with the answer key that follows.

1. What are three types of labor EPWs will NOT perform?

A. Dangerous or unhealthy labor, humiliating labor, or chemical industries.
B. Agricultural labor, humiliating labor, military-related labor.
C. Chemical industries, hard labor, agricultural labor.
D. Chemical industries, construction, military-related labor.
2. Detained persons include:

A. Displaced persons.
B. Terrorists.
C. Saboteurs.
D. All of the above.
3. NCOs who are EPW can be required to perform the following kinds of work:

A. Supervise a prisoner detail digging a ditch.
B. Clear mine fields if he holds the MOS.
C. Maintain prisoner records.
D. All of the above.
4. An EPW who assaults a guard while trying to escape:

A. Cannot be punished because it is his duty to try to escape.
B. Can be shot immediately by a firing squad.
C. Can be court-martialed under the UCMJ.
D. Must be turned over to the civilian authorities for trial.
5. While checking an enemy position that has just been overrun, you find a map that appears to show current enemy positions, units, and a scheme of maneuver. You should:

A. Put your name, rank, unit, and other key information on the map and turn it over to the nearest MI team.
B. Have one of the captured EPW keep it until you reach the division collection point.
C. Tag it and evacuate it to the nearest MI team as soon as possible.
D. Keep it as a war trophy.
6. Proper treatment of EPW is important in order to:

A. Maximize intelligence gathered from EPW.
B. Minimize escape and liberation attempts.
C. Weaken the will of the enemy to resist.
D. All of the above.
7. The "T" in "5Ss plus T" stands for:

A. Terminate.
B. Tag.
C. Transport.
D. Thorough.
LESSON 1 PRACTICE EXERCISE
ANSWER KEY AND FEEDBACK

1. A. Dangerous or unhealthy labor, humiliating labor, or chemical industries. Unauthorized work includes…(page 1-6, para 5).
2. D. All of the above. A detainee is all…(page 1-4, para 3).
3. A. Supervise a prisoner detail digging a ditch. NCO EPW will be required to…(page 1-6, para 5).
4. C. Can be court-martialed under the UCMJ. Enemy prisoners of war may be punished...(page 1-4, para 6).
5. C. Tag it and evacuate it to the nearest MI team as soon as possible. Captured enemy documents not associated with…(page 1-8, para 5).
6. D. All of the above. Everyone is responsible for treating (page 1-13, para 1).
7. B. Tag. The "5Ss plus T" stand for (page 1-15, para 4).
LESSON 2
OPERATIONS AT BRIGADE, DIVISION, AND CORPS

MQS Manual Tasks: 01.2759.01.9005
                  03.3370.00.0001

OVERVIEW

LESSON DESCRIPTION: In this lesson you will learn the EPW operations performed at brigade, division, and corps level.

TERMINAL LEARNING OBJECTIVE:

ACTION: Describe EPW operations that are conducted by MP at the brigade, division, and corps levels.

CONDITION: You are a lieutenant or captain in the MP Corps and are responsible for EPW operations at the brigade, division, or corps level.

STANDARD: To demonstrate competency of this task, you must achieve a minimum score of 70 percent on the subcourse examination.

REFERENCES: The material contained in this lesson was derived from the following publications: AR 190-8 and FM 5-34.
INTRODUCTION

The primary objective for the Military Police EPW mission in the forward area is to take custody of and evacuate EPW/CI as quickly as possible in order to free combat soldiers for battle. Evacuation of EPW/CI to rear areas also enhances security of the division area while increasing the safety of captured personnel. At the same time, Military Police must ensure that every effort is made to gather and quickly report any valuable intelligence information that the EPW/CI may have. All of these objectives must be accomplished without diverting any essential resources from other units or missions.

PART A - DIVISION FORWARD COLLECTING POINT

MP in direct support of a brigade establish a forward EPW collecting point. Brigades that do not have MP in direct support (brigades in light infantry divisions, for example) are responsible for establishing their own forward
collecting point. MP do not establish forward collecting points in a light infantry division. PURPOSE The forward EPW collecting point accepts prisoners captured by soldiers in brigade units. Capturing units evacuate EPW to the forward collecting point in the brigade area. MP in general support of the division rear area go forward to brigade collecting points (including the brigade collecting points in light infantry divisions) to evacuate EPW. Forward collecting points may be operated by MP from the platoon in direct support of the brigade. The number of MP needed is determined by the platoon leader based on the mission, enemy, terrain, troops available, and time (METT-T). Mission The priority of MP battlefield missions (EPW, battlefield circulation control, area security, and law and order) and the number of MP needed for each mission are constantly changing. The priority of EPW operations and the number of MP needed to operate a forward EPW collecting point depend on present and future operations in the brigade. Under normal circumstances, the EPW mission will be a higher priority and require more MP when the brigade is conducting offensive operations. The number of MP needed may be lower when the brigade is on the defense. It may be lowest when the brigade is in reserve status or being reconstituted. Enemy The enemy disposition and capability is a determining factor in the priority of the EPW mission and the number of MP needed to operate a forward collecting point. Factors such as enemy culture, morale, strength, supply status, and training are considered when planning EPW operations and committing resources. World War II offers an example of the impact that culture may have on how many EPW will be captured. Western armies in Europe captured one EPW for every four enemy soldiers that were killed. (A ratio of 4:1.) Western armies in the Pacific facing the Japanese captured only one EPW for every 120 enemy soldiers that were killed. (A ratio of 1:120.) 
Enemy morale, personnel strength, supply status, and training have an impact on EPW operations. Poorly trained enemy soldiers from understrength units that have exhausted their supplies and been demoralized by combat may be easier to defeat and capture than well-trained units at full strength with plenty of supplies and good morale. An example of this difference is illustrated by comparing the German coastal defenses that the Allies faced at Normandy in 1944 with the crumbling Berlin defense forces faced in 1945.
Terrain

Terrain affects EPW operations. Climate and the physical characteristics of terrain play an important role in determining where to locate EPW collecting points and the number of MP needed to sustain continuous operations. Shelter and warmth for EPW are considered when establishing collecting points in cold mountainous areas. Protection from the sun and heat is considered in desert terrain. More MP may be required to adequately guard EPW in dense jungle terrain.

Troops Available

Other missions, casualties, experience, and training affect the number of MP that are available and required at a forward collecting point. Preparing for EPW operations through effective training enhances flexibility. MP who are trained and experienced in EPW operations can establish a collecting point with less supervision and will be able to maintain operations with fewer people.

Time

Time is critical. MP must be able to establish, expand, and move EPW collecting points quickly. The fast tempo of the AirLand Battlefield requires flexibility. Initial planning may indicate the need for a collecting point capable of handling 30 or 40 EPW during an upcoming operation. The momentum of maneuver units cannot be stopped or slowed if more EPW are captured. A plan to handle the surrender of an entire enemy unit should also be considered. Being prepared and flexible enough to respond to contingencies increases freedom of maneuver for the supported commander. The number of MP teams needed for continuous operation of an EPW collecting point must also be considered.

FLOW OF EPW IN THE BRIGADE

Soldiers who capture or accept the surrender of EPW are responsible for taking them to the nearest EPW collecting point. MP in general support of the division will evacuate EPW from the forward collecting point to the central EPW collecting point in the division rear area.

LOGISTICS CONSIDERATIONS

Forward EPW collecting points are normally very austere.
Basic considerations are barrier material, food and water, shelter, communications, and transportation. Barrier material (concertina wire, anchors, stakes, sandbags, and lumber) may be needed to establish a collecting point and provide protection from friendly and enemy weapons. Using existing structures may reduce the amount of barrier material, time, and effort needed to establish a collecting point. Using an existing structure may also reduce the number of MP needed for security.
Water for hygiene and drinking is provided at forward EPW collecting points. Under normal circumstances, EPW are evacuated from the forward collecting point without delay and will not require food. Shelter from the elements may be needed for EPW at collecting points. This is particularly important in hot, wet, and cold climates. Use of existing structures reduces the need for fabricating shelter. Tents, blankets, and sleeping bags may be required, depending upon the climate.

SELECTING THE SITE

The platoon leader is given a general area in which to establish a forward EPW collecting point. MP conduct a reconnaissance of the designated area before determining the exact location. Forward EPW collecting points are established near or within the BSA. MP consider the following characteristics of a forward EPW collecting point before establishing operations. It should be located--

o in defilade. The location should be far enough from the fighting to avoid minor fluctuations in the forward edge of the battle area (FEBA). Normally, this is five to ten kilometers behind the FEBA.
o in or near the BSA.
o close to a main supply route (MSR).
Forward collecting points provide protection from friendly and enemy weapons. Locating the collecting point in terrain features that provide defilade may reduce the amount of effort needed to provide cover. A series of trenches, holes, earthen berms, and sandbags may have to be provided when it is not possible to use terrain features to provide protection. Overhead cover should also be provided to protect EPW from secondary missiles and fragments. Obtaining supplies, transportation, and medical support is easier when collecting points are near resources in the rear. MP coordinate with the commander responsible for the BSA before selecting a specific site. The collecting point should never be located where EPW can observe activities in the BSA. MP report the exact location of the collecting point according to local SOP. Normally, this includes notifying the BSA tactical operations center (TOC), the supported brigade TOC, the provost marshal (PM) operations section, and the MP company commander.

FIELD PROCESSING

MP perform specific functions at a forward collecting point. An EPW and CI battalion (Table of Organization and Equipment (TOE) 19-646L) in the COMMZ processes EPW according to AR 190-8. This includes the personnel record, fingerprints, photograph, height and weight, and assignment of an internment serial number. Complete processing of EPW by contingency forces may be performed by
corps MP. This lesson is concerned with how EPW are processed by MP in a theater that is supported by an EPW and CI battalion. EPW are evacuated to the forward collecting point by the capturing unit. MP perform search, tag, report, evacuate, silence, and safeguard (STRESS) functions at a forward collecting point.

Search

EPW are searched by MP as they arrive at the forward collecting point. Capturing units are given a receipt for prisoners. Figure 2-1 shows how DD Form 629 (Receipt for Prisoner or Detained Person) may be used to account for two or more EPW. Figure 2-2 shows how DD Form 629 may be used to account for one EPW. DD Form 629 is used by MP whenever EPW are transferred to someone else's custody. This includes when EPW are released to military intelligence (MI) for interrogation, to medical treatment facilities, and to other collecting points, holding areas, or processing points operated by MP. Using DD Form 629 establishes positive accountability for EPW before the DA Form 4237-R (Detainee Personnel Record) is completed. It shows the time, date, and general physical condition of EPW as they are evacuated. EPW are searched at the forward collecting point even if they were searched by the capturing unit. EPW are searched by MP every time they are received from someone else's custody. An example of this is when EPW are taken by MI for interrogation and returned to the collecting point. EPW are always searched before being put back into the collecting point.

Accounting for EPW Property. Everything the EPW has is examined. Retained property is checked and returned to the EPW. Retained property includes personal effects, clothing, mess equipment (except knives and forks), badges of rank and nationality, decorations, identification cards and tags, religious literature, and articles that have sentimental value. Impounded property is examined and taken by MP for safekeeping.
Impounded property includes any article, such as personal effects, that makes escape easier or is considered dangerous to US security interests (cameras, radios, and currency). Foreign currency is impounded only when ordered by a commissioned officer. US currency found in the possession of EPW is confiscated as contraband and accounted for. A receipt (DA Form 4137, Evidence/Property Custody Document) is prepared by MP at the forward collecting point to account for all property that is taken from EPW. The original remains with the seized property and should be included in the EPW personnel file. Figure 2-3 shows how a DA Form 4137 can be used to account for property that is taken from EPW. Copies of DD Form 629 and DA Form 4137 should be maintained at the forward collecting point or forwarded to the provost marshal operations section. These forms establish positive accountability of EPW and their property. They can be used to substantiate proper care and treatment at a later time.
Disposition of EPW Property. MP at the forward collecting point ensure that the MI interrogating team in support of the brigade is notified and has the opportunity to examine property taken from EPW. Impounded and confiscated property is evacuated with EPW and accounted for. The property is transferred by using the DA Forms 4137 that were prepared at the forward collecting point. Impounded and confiscated property is handled by the guards that evacuate EPW.
CAPTIVE TAG #    NAME                   SERVICE #    RANK       DOB
AB 99999         YAVONOVICY, KRESKI B   1234819      Captain    17Jul60
AB 99998         IVAN, VLADIMAR         8213987      Private    23Sep64

NOTE: Information listed in columns may be written on the back of DD Form 629. Or, when necessary because of a large number of EPW, it may be written on separate sheets.
Figure 2-1. DD Form 629 Used to Account for Two or More EPW.
NOTE: Evacuation of EPW is not delayed to obtain name, service number, rank, date of birth, or organization. EPW may be accounted for by the capture tag control number (see example in "offense" block above). Complete information establishes better accountability for EPW and should be recorded on DD Form 629 when possible. But gathering the information does not justify holding EPW at a collecting point.
Figure 2-2. DD Form 629 Used to Account for One EPW.
Figure 2-3. DA Form 4137 Used to Account for EPW Property.
Impounded and confiscated property is evacuated with EPW in the possession of US guards. It is disposed of according to instructions issued by the highest level of command responsible for EPW. DA Form 4137 is used to account for property that is released before final disposition is ordered. An example of this is when MI wants to keep a weapon or document for closer examination that requires evacuation in MI channels. Figure 2-4 shows how DA Form 4137 may be used to account for property released to MI or other official agencies before final disposition of EPW property is ordered. The reverse side of the original DA Form 4137 may be used to account for EPW property if final disposition is ordered before a DA Form 4237-R or DA Form 1132 (Prisoner's Personal Property List) is prepared at the EPW camp (EPW battalion) in the COMMZ. Records of final disposition of EPW property are evacuated along with the EPW for inclusion in the EPW personnel record.

Tag

MP ensure that the capturing unit tags EPW. A standard EPW captive tag is prescribed in STANAG 2044. Captive tags are prepared for EPW who are not tagged when they arrive at a forward collecting point. Personnel from the capturing unit are responsible for preparing capture tags. Part "C" of the standard captive tag is attached to confiscated and impounded EPW property. This ensures that EPW can be associated with their property at a later time.

Report

All EPW who arrive at a forward collecting point are reported according to local SOP. This normally includes notifying the provost marshal operations section and the supported brigade TOC.

Evacuate

MP at the forward collecting point are responsible for requesting transportation and guards to evacuate EPW according to local SOP. Personnel to guard EPW during evacuation are provided by the division MP company. They normally come from a platoon in general support of the division.
MP at the forward collecting point request transportation to evacuate EPW from the S4 of the supported brigade. The provost marshal tasks the division MP company for guards. Maximum use of backhaul transportation is desirable. The most likely type of transportation available will be vehicles delivering cargo and ammunition to the BSA.
Figure 2-4. DA Form 4137 Used to Account for EPW Property Released to Military Intelligence.
Silence

EPW are segregated into the same categories used by capturing units (officer, NCO, enlisted, male, female, civilian, ideology, nationality, and deserters). Segregating should prevent EPW from communicating by voice or visual means. Preventing EPW from talking keeps them from discussing security and planning escapes. Guards should communicate with EPW only to give commands and instructions. Only trained MI interrogators should be allowed to interrogate EPW at a forward collecting point.

Safeguard

MP protect EPW from being harmed by other EPW, US soldiers, and local civilians. Medical care is coordinated at the collecting point for sick and wounded EPW who do not require evacuation in medical channels. Basic sanitation is provided to EPW. Soap and water for washing and hygiene are provided, if possible, to prevent or reduce the spread of infection and communicable diseases. MP prevent EPW from escaping and being liberated.

PART B - DIVISION CENTRAL COLLECTING POINT

MP in general support (GS) of a division establish a central EPW collecting point. Prisoners are evacuated from forward collecting points in the brigade areas by MP in general support of the division rear area to the central collecting point near the division support area.

PURPOSE

The central collecting point accepts prisoners evacuated from forward collecting points and from units that capture EPW in the division rear area. Prisoners are held at the central collecting point until evacuation to a corps holding area or a processing and reception station in the COMMZ can be coordinated. The central collecting point is operated by MP in general support of the division. This may include MP that are assigned or attached to the division from corps units. The number of MP needed is determined by the platoon leader tasked to establish the collecting point. The number of EPW being evacuated from forward collecting points and units in the division rear is considered when determining how many MP are needed.
The projected capture rate for planned operations and the factors of METT-T (discussed previously) that may influence the number of MP required at a forward collecting point should also be considered.

FLOW OF EPW IN THE DIVISION

EPW are evacuated from forward collecting points to the central collecting point by MP in GS of the division. MP do not have organic transportation to evacuate EPW from forward collecting points. MP tasked to evacuate EPW from
the forward collecting point use their vehicles for security only. EPW are not evacuated in MP vehicles. The forward collecting point notifies the PM operations section when EPW are ready to be evacuated. The forward collecting point also provides information to the PM operations section concerning transportation arrangements for EPW. The PM operations section notifies the MP company commander, who tasks a GS platoon to provide guards to evacuate the EPW.

LOGISTICS CONSIDERATIONS

Central EPW collecting points are larger and more developed than forward collecting points. Basic considerations for establishing a central collecting point include barrier material, food and water, shelter, transportation, communication, and field sanitation. Barrier material (concertina wire, barbed wire, pickets, sandbags, and lumber) is needed to establish a collecting point and provide protection from friendly and enemy weapons. Using existing structures may reduce the amount of barrier material, time, and effort needed to establish a collecting point. Water for hygiene and drinking is provided at the central collecting point. EPW may have to remain at the central collecting point long enough to require food. MP at the central collecting point should be prepared to give EPW their first meal. Shelter from the elements may be needed. This is particularly important in hot, wet, and cold climates. Using existing structures may reduce the need for constructing shelter. Tents, blankets, and sleeping bags may be required, depending on the climate.

SELECTING THE SITE

The MP company commander tasks a platoon leader to establish a central EPW collecting point. The platoon leader is given a general location that is usually in or near the DSA. MP conduct a reconnaissance of the designated area before an exact location is determined. The central collecting point is established in or near the DSA where EPW cannot observe combat support and combat service support activities.
MP consider the following characteristics of a central EPW collecting point before selecting a specific location. Characteristics of a central EPW collecting point include location-

o in the DSA near the division support command (DISCOM).
o out of sight of DISCOM activities.
o close to MSR.
o near a reliable source of water.
The division central collecting point provides protection from friendly and enemy weapons. Locating the collecting point in terrain that provides defilade may reduce the amount of effort needed to protect prisoners. A
series of trenches, holes, earthen berms, and sandbags may have to be established when it is not possible to use terrain features to provide protection. Overhead cover should also be provided to protect EPW from secondary missiles and fragments. Obtaining supplies, transportation, and medical support is easier when collecting points are near sources in the DSA. MP coordinate with the DISCOM, normally the S3, before selecting a specific site. MP report the exact location of the collecting point according to local SOP. Normally, this includes notifying the DISCOM TOC, the PM operations section, and the MP company commander.

ESTABLISHING THE CENTRAL COLLECTING POINT

The central collecting point must be in an enclosed area. Using an existing structure may reduce the amount of time and quantity of material needed. A secure perimeter should be established around the collecting point. Triple standard concertina wire fences are used if possible. Details and procedures for constructing triple standard concertina wire fences are found in FM 5-34. The size of the collecting point and the type of construction will vary depending on METT-T. In urban terrain, a building or other structure is well suited for an EPW collecting point. Engineer tape may be used to temporarily mark the limits of the collecting point when an enclosed area or existing structure is not readily available. A secure perimeter is established as soon as possible. When an existing structure is not available, MP request assistance from engineers. If engineer support is not available, MP may have to construct the collecting point. An example of how a collecting point may be constructed is shown in Figure 2-5; it illustrates one way a collecting point may be built, not how one must be built. Three fighting positions are shown in Figure 2-5. Each position is occupied by two MP.
A position on one corner of the enclosure provides a primary direction of fire on two sides of the enclosure. A position on the opposite corner provides the same coverage of the remaining two sides. A third fighting position is shown near the entrance to the collecting point. MP that occupy this position may be used to search EPW before they are put inside a compound. Also, these MP may be used for escort and for security outside the enclosure when necessary. This is one method of establishing a collecting point. Some collecting points may be more developed or more austere, depending upon the situation. Latrine facilities and drinking water may be provided inside each compound, or at a single location for the entire collecting point. Providing facilities inside each compound reduces the number of MP needed to escort prisoners.
Figure 2-5. EPW Collecting Point (Division Central).
MP are responsible for preventive medicine countermeasures against communicable diseases, nonbattle injuries, and heat and cold injuries at the central collecting point. Preventive medicine countermeasures include-

o disinfecting or monitoring disinfection of drinking water.
o controlling animals and insects that may carry disease.
o monitoring preventive countermeasures practiced by EPW.
Training and support may be provided to collecting points by the division preventive medicine section. MP at collecting points request assistance when necessary.

FIELD PROCESSING

EPW are field processed when they arrive at the central collecting point before they are put inside the facility. Field processing consists of the same procedures (STRESS) used by MP when EPW are evacuated from the forward collecting point. After prisoners have been field processed they are placed inside the collecting point. When possible, MI interrogators are notified before the EPW arrive.

EPW EVACUATION

The number of MP needed to evacuate EPW depends on the mission (number of EPW, distance and means of evacuation, and physical condition and morale of EPW); enemy (probability of attack or ambush during evacuation); terrain (climate and geography); troops available (number of MP available for the mission after meeting other commitments); and time (when the mission must be completed).

Planning

MP use the troop leading process to plan evacuation of EPW. After receiving the mission, a warning order is issued by the MP in charge. This allows subordinates time to make necessary preparations. The warning order should include where and when prisoners are being moved, how many prisoners are being moved, and what kind of transportation will be used to transport the prisoners. The MP in charge makes an estimate of the situation. The mission, situation, and courses of action are considered. Each course of action is analyzed and compared to the other courses of action.
The last step in estimating the situation is to decide on a concept for the operation and make a tentative plan. Movement of subordinates is started, if necessary, while a reconnaissance of the evacuation route is done. The reconnaissance should include locating other units along the route that may be able to provide assistance. Fire support and indirect fire support are planned along the route, if possible, where enemy ambush is likely. A final plan is made after the reconnaissance is completed. The plan should include specific instructions concerning use of force and reaction to escape attempts and air attacks. Food, water, medical care, and rest stops are also covered in the plan. When the evacuation plan is completed, an operations order (OPORD) is issued.

Coordination

MP at the central collecting point coordinate with the DISCOM movement control officer (MCO) for transportation to evacuate EPW into the corps. Prisoners are evacuated to a US corps holding area when the division is assigned to a US corps. When US divisions are assigned to an allied corps, US captured prisoners may be evacuated to a US EPW processing and reception station in the COMMZ.

Actions Taken by Guards

MP who are evacuating EPW to the central collecting point check to see that prisoners are ready for movement. Prisoners are searched for weapons, contraband, and items of intelligence value. Confiscated and impounded property is accounted for on DA Form 4137. The capture tag on each new EPW is checked. Tags should state the date and time of capture, name of capturing unit, location of capture, and circumstances of capture. Confiscated and impounded property is also tagged. Part "C" of the standard capture tag is used by MP in the NATO theater. Equipment and document tags have a preprinted control number on all detachable pieces so they can be matched with the prisoner's capture tag at a later time. MP sign DD Form 629 to accept custody of EPW from the forward collecting point.
If food and water are needed for EPW during the evacuation, they are provided by MP at the forward collecting point.

Special Considerations

Communication with EPW is limited to issuing instructions and orders. An MI interrogator at the forward collecting point should instruct prisoners on march discipline, actions during an emergency, and the meaning of the word "halt" before prisoners are moved. When possible, EPW are segregated during movement in the same manner that they are at collecting points. If an MP sees a prisoner escaping, he shouts "halt!" If the prisoner fails to halt immediately, the MP shouts "halt!" a
second time, and, if necessary, he shouts it a third time. Thereafter, he may open fire if he has no other way of stopping the EPW from escaping. In any case, if guards have to open fire on an escaping prisoner, they should first aim to avoid inflicting fatal wounds. The remaining guards halt and secure the other EPW.

Evacuating EPW by Vehicle

MP load prisoners on vehicles, keeping them segregated as best as possible. Several methods may be used to guard EPW in vehicles. One method is to have an MP ride in the cab and watch prisoners in the vehicle ahead. Another way is to place an MP vehicle behind the truck to watch prisoners. A combination of these methods may be used when there is a large column of trucks. A minimum of one MP is used for each vehicle. MP do not ride with prisoners. MP ensure the vehicle drivers are briefed on the route, the march schedule, and the rate of movement. Each driver is also briefed on what to do in case of ambush or air attack. In an ambush, drivers move their vehicles out of the kill zone. Drivers of vehicles not yet in the kill zone stop. MP designated in the evacuation plan and OPORD engage the enemy. Remaining MP prevent EPW from escaping. If needed, the MP call for indirect fire support or request help from a nearby unit or other MP operating in the area. For air attacks, the vehicles move off the road to the best covered and concealed location. MP designated in the evacuation plan, OPORD, or unit SOP engage the aircraft. Remaining MP prevent EPW from escaping. Prisoners attempting to escape must be recaptured by the closest guard using the least force necessary. During an escape attempt, the convoy stops and security on EPW is increased. As soon as the prisoner is recaptured, the MP increase security around him and continue the march. Passenger capacities for selected vehicles are listed below to aid in planning:

o 1-1/4-ton cargo truck: 8 EPW.
o 2-1/2-ton truck: 16 EPW.
o 5-ton cargo truck: 18 EPW.
o 12-ton stake semitrailer: 50 EPW.
Evacuating EPW by Helicopter

Helicopters may be available to transport EPW from the collecting points. MP supervise the loading of prisoners on the aircraft. They use the MI interpreter at the forward collecting point to help explain how each prisoner boards and leaves the aircraft. Guards are placed on each aircraft according to instructions issued by a member of the aircraft crew to keep the prisoners under control. Load limits for helicopters may change with the weather and the expected altitude.
Prisoner capacities and guard requirements for selected helicopters are listed below:

o UH-1H Iroquois: 6 EPW, 2 guards.
o UH-60 Blackhawk: 10 EPW, 2 guards.
o CH-47D Chinook: 35 EPW, 4 guards.
Evacuating EPW by Rail

If rail lines are operating in the area, MP may evacuate EPW by train. MP supervise the loading of prisoners on the rail cars. They keep the prisoners segregated by categories as much as possible. Some passenger cars can carry up to 120 EPW, depending on the type of car. A minimum of four guards are required, two at each end of the car. Box cars can carry about 40 EPW. Two guards are required in each car. When they can, MP build a barbed-wire lane across the box car from door to door to isolate prisoners at each end of the car. The guards ride in the middle of the car. Box cars must be adequately ventilated when carrying EPW. The person in charge of the evacuation mission coordinates with the train's personnel to learn the number of planned stops and expected danger areas. He briefs subordinates to be especially alert at unscheduled stops.

Evacuating EPW by Foot

Evacuating small numbers of EPW by foot is the least desirable method. It requires more time, personnel, food, water, and coordination. Units evacuating a large number of EPW may have to resort to this method as a first option. Security is provided at the front, rear, and sides of EPW formations during evacuation.

Evacuating Sick or Wounded EPW

Medical personnel determine whether sick and wounded EPW are evacuated in MP channels or medical channels. The same criteria used to decide if US and allied soldiers are returned to duty or medically evacuated (MEDEVACed) apply. US soldiers, allied soldiers, and EPW are evacuated in the same manner. They receive the same medical treatment without regard to status. In the brigade rear, MEDEVAC channels consist of company aid posts, battalion aid stations, and forward medical companies. In the division rear area, MEDEVAC channels are the medical support companies. Prisoners may be evacuated to or between any of these medical facilities by manual litter and ground or air ambulance.

Company aid posts and battalion aid stations are the forwardmost medical treatment facilities where emergency treatment, sorting, and disposition of
sick and wounded EPW may occur. Sick and wounded prisoners may be brought to one of these facilities directly from the battlefield. EPW with minor illnesses or wounds may be treated and declared fit for evacuation in MP channels. Prisoners who are seriously wounded or ill may be stabilized and MEDEVACed to a forward medical company in the brigade area. Prisoners are treated and returned to MP channels or MEDEVACed to a medical support company in the division rear area. Prisoners who can be returned to MP channels for evacuation within the current holding policy may be admitted to the medical support company clearing station for treatment. Current holding policy is established by the echelon commander. Prisoners who require hospital medical or surgical attention are provided initial resuscitative care to permit immediate evacuation.

Medical Liaison

Forward and central collecting points should establish procedures to check medical facilities in their areas of responsibility to determine if EPW are being evacuated in medical channels.

PART C - CORPS HOLDING AREA

Combat support MP in the corps area establish holding areas. The corps holding area accepts prisoners evacuated from central collecting points and from units that capture EPW in the corps area. Prisoners are interrogated and detained until they can be evacuated to the COMMZ. The number of corps holding areas depends on the size of the corps area, the terrain, the length of MSRs, and the number of EPW captured in supported divisions. The corps MP brigade commander (provost marshal) is responsible for establishing corps holding areas and providing MP to guard EPW during movement from division central collecting points. This is accomplished by tasking an MP battalion to establish the holding area and provide guards to move forward into the division to accept custody of EPW. The number of MP needed to establish a holding area is based, in part, on the actual or projected number of EPW needing evacuation.

A combat support company can guard up to 2,000 EPW at a holding area; a platoon can guard up to 500 EPW. Evacuation of EPW in the corps area includes arranging and coordinating transportation and medical care. If required, it may include providing rations and water for prisoners.

TRANSPORTATION

MP in the corps do not have organic transportation to evacuate EPW from division central collecting points. MP use their vehicles for security purposes only.
The division PM, normally the PM operations section, reports the number of EPW at the central collecting point who are ready for evacuation to the corps MP responsible for the corps EPW holding area. The division PM requests transportation to evacuate EPW through the DISCOM MCO. The MCO processes requests and arranges transportation to move EPW from the division central collecting points to the corps EPW holding area. Information concerning transportation arrangements to evacuate EPW from the division to the corps is provided by the division PM to the MP responsible for evacuation.

FLOW OF EPW IN THE CORPS

MP from a combat support company in the corps evacuate EPW from division central collecting points. Security for EPW being evacuated to a processing and reception station in the COMMZ is provided by TA MP. The number of MP needed to evacuate EPW depends on the distance, type of transportation, number and condition of prisoners, and other missions that must also be accomplished. A combat support MP company is capable of evacuating the following numbers of EPW:

o Foot ..................................1,900
o Vehicle ..............................2,500
o Rail ...................................3,000
MEDICAL CARE, RATIONS, AND WATER

Sick and wounded EPW are afforded the same medical treatment as US and allied soldiers. MP administer first aid and, if necessary, evacuate wounded prisoners to the nearest medical treatment facility. Medical treatment facilities along the evacuation route from division to corps should be identified in the planning process. Medical personnel will determine whether or not sick and wounded prisoners can be treated and returned to MP, or if they require evacuation for definitive treatment. Although sick and wounded prisoners should have already been afforded medical treatment by the capturing unit, it is possible that MP may have to cope with the problem. Medical personnel in the division who determine that EPW must be evacuated in medical channels will request disposition instructions from the corps medical regulating officer (MRO). The MRO coordinates transportation and identifies which treatment facility in the corps the wounded prisoners are evacuated to. MP responsible for the corps holding areas should maintain contact with the corps MRO so that EPW in medical channels can be accounted for. Under normal circumstances, wounded EPW will not remain in the corps for more than 96 hours, if at all. Sick and wounded prisoners may be evacuated from corps medical facilities (combat support hospitals (CSH), mobile Army surgical hospitals (MASH), and evacuation hospitals) to field hospitals in the COMMZ for long-term treatment.
Sick and wounded prisoners at corps medical treatment facilities should not present a security problem. Prisoners who are healthy enough to be a security risk will be returned to MP channels as quickly as possible by medical personnel. It may, however, be possible that medical personnel request security for EPW at a medical treatment facility in the corps area. The corps commander is responsible for security of EPW in medical treatment facilities. If tasked by the corps commander, the provost marshal uses combat support MP on a task, rather than permanent, basis. MP do not have the assets to provide long-term security at corps medical facilities. Sick and wounded prisoners who require out-patient medical care should be evacuated to EPW facilities in the COMMZ as quickly as possible.

LOGISTICS CONSIDERATIONS

Corps EPW holding areas are more developed than collecting points in the division. However, they are still temporary facilities capable of being moved when necessary. Basic considerations for establishing a corps EPW holding area are barrier material, food and water, shelter, transportation, limited medical care, communications, and field sanitation. Barrier material (concertina and barbed wire, pickets, sandbags, and lumber) is used to establish a secure enclosure and provide protection to EPW from friendly and enemy weapons fire. MP use existing structures when possible to reduce the amount of barrier material and manpower needed to establish a holding area. Food and water are provided to EPW at the corps holding area. Rations are provided to EPW by MP responsible for the holding area. Shower and delousing facilities are provided, if possible, as part of the preventive medicine effort. MP operating the enclosure coordinate with the Preventive Medicine Command and Control Team assigned to the corps. Preventive medicine support available in the corps includes-

o survey and control of disease-carrying insects and animals.
o sanitary engineering (water treatment and waste disposal).
The number of EPW evacuated into the corps and the availability of transportation affect the prisoner population. They also affect how long prisoners remain at corps facilities. Field sanitation must be carefully planned and monitored by MP at corps holding areas.

SELECTING THE SITE

The size of the corps area, terrain factors, MSRs, transportation available, and the tactical situation affect where corps holding areas are established. The situation may require more than one holding area. The corps PM determines
the general location and number of holding areas needed to support the corps commander's mission. The specific site for a holding area is chosen by the platoon leader or company commander, depending upon the size of the operation. MP establish holding areas that are mutually supporting as an economy of force. They are careful not to overburden the guard force or put too many EPW at one location. MP responsible for establishing the holding area conduct a reconnaissance before an exact location is determined. The holding area is established near transportation on MSRs to facilitate movement of prisoners. Other considerations for locating a holding area include proximity to medical facilities, a reliable source of water, and rations. Locating the holding area in terrain features that provide defilade may reduce the amount of effort needed to protect EPW from friendly and enemy weapons fire. MP coordinate with the commander responsible for the area where a holding area is going to be established before selecting a specific site. MP report the exact location of the holding area according to local SOP and OPORD. Normally, this includes notifying the next higher MP headquarters and local commanders responsible for the area where the holding area is operating.

ESTABLISHING A CORPS HOLDING AREA

A secure perimeter is established around the corps holding area. MP request assistance from engineers to establish the enclosure. Corps holding areas may be considerably larger and more developed than collecting points in the division. Prisoner population and the number of guards available must be considered. The layout of corps holding areas depends upon several things including the terrain and the size of the enclosure. A single enclosure operated by one company should not be used to hold more than 2,000 EPW. Each compound within the enclosure should not hold more than 500 prisoners. Figure 2-6 illustrates how a temporary holding compound may be established for up to 500 EPW.
The facilities shown in Figure 2-6 can be expanded or reduced as needed, based on the situation.

Evacuating EPW to the Corps Holding Areas

MP evacuating EPW to corps holding areas follow the same procedures used by division MP to evacuate prisoners from forward collecting points to division central collecting points. DD Form 629 is used to account for prisoners. DA Form 4137 is used to account for property taken from EPW.

Processing EPW

MP use the same procedures (STRESS) to handle EPW at a corps holding area that division MP use at collecting points. Prisoners are searched when they arrive.
Property taken from EPW is accounted for on DA Form 4137. Previously seized property that is evacuated in the possession of guards is inventoried against the DA Form 4137 prepared at the division collecting point. When all seized property is accounted for, it is stored separately from EPW in a secured area with limited access. Positive control of seized property must be maintained at all times. MP use DD Form 629 to account for prisoners received. Copies of previous DD Forms 629 and DA Forms 4137 that establish accountability for EPW and their property are maintained as the prisoners are evacuated to higher echelons. MP at the corps holding area notify MI interrogators before EPW arrive, if possible.

Evacuating EPW From the Corps Holding Area

MP at the corps holding area submit EPW status reports through their chain of command according to local SOP or OPORD. MP responsible for a corps holding area request transportation to move EPW to a processing and reception station in the COMMZ. This is accomplished by contacting a movement control team (MCT). MCT are assigned to the corps movement control center (MCC) as needed. The corps MCC coordinates movement into the COMMZ with the Theater Army Movement Control Agency (TAMCA). The TAMCA handles requests for transportation that exceed the capability of the corps. TA MP go forward and guard EPW during evacuation from corps holding areas to processing and reception stations in the COMMZ. MP plan and use the same procedures to evacuate EPW from the corps holding area that they did to evacuate prisoners from division collecting points.

Evacuating Sick and Wounded EPW

Medical facilities in the corps include CSH, evacuation hospitals, and MASH. Sick and wounded EPW who are MEDEVACed from the division are stabilized at corps medical facilities and returned to MP channels. Or, they are MEDEVACed to facilities in the COMMZ.
Figure 2-6. Temporary Holding Compound (Corps) for up to 500 Prisoners.
LESSON 2 PRACTICE EXERCISE

The following items will test your knowledge of the material covered in this lesson. There is only one correct answer for each item. When you have completed the exercise, check your answers with the answer key that follows:

1. In reference to handling enemy prisoners of war, what does the last S in the key word "STRESS" stand for?
A. Silence.
B. Speed.
C. Safeguard.

2. What are the features of a forward collecting point?
A. In defilade, near division support command, out of sight of support command activities, close to MSR, near a water source, and in the brigade trains area.
B. In defilade, close to MSR, near water source, far enough back to avoid fluctuations in the FEBA, and near division support command.
C. In defilade, far enough back to avoid minor fluctuations in the FEBA, in brigade trains area, close to MSR.

3. The three types of PW property identified during processing are:
A. confiscated, returned, retained.
B. confiscated, impounded, receipted.
C. confiscated, impounded, retained.
LESSON 2 PRACTICAL EXERCISE ANSWER KEY AND FEEDBACK

Item 1. C. Safeguard. EPW are evacuated . . . (page 2-5, para 2).
Item 2. C. In defilade, far enough back to avoid minor fluctuations in the FEBA, in brigade trains area, close to MSR. The platoon leader is . . . (page 2-4, para 2).
Item 3. C. Confiscated, impounded, retained. Accounting for EPW property . . . (page 2-5, para 5 & 6).
LESSON 3 OPERATIONS AT ECHELONS ABOVE CORPS

MQS Manual Tasks: 01.3759.01-9003
                  01.3759.01-9004
                  01.3753.01-3001
OVERVIEW

LESSON DESCRIPTION: In this lesson you will learn EPW and CI operations at EAC.

TERMINAL LEARNING OBJECTIVE:

ACTION: Describe how EPW and CI operations are conducted by MP at EAC.

CONDITION: You are a lieutenant or captain in the MP Corps and are involved in EPW and CI operations at EAC. MP EPW and CI organizations are based on the "L" series TOE.

STANDARD: To demonstrate competency of this task, you must achieve a minimum score of 70 percent on the subcourse examination.

REFERENCES: The material in this lesson was derived from the following publications: AR 37-36, Pay, AR 40-5, AR 190-8, AR 190-57, and JCS Pub 3(C).
INTRODUCTION

According to international law, the capturing force is responsible for the care, security, and repatriation of all EPW/CI that it captures. The U.S. Army has established organizations, accounting systems, and procedures in order to ensure that these responsibilities are met for all prisoners captured or controlled by U.S. forces.

PART A - THEATER ARMY EPW AND CI ORGANIZATIONS

The TA commander is responsible for evacuating EPW from supported corps. This responsibility includes providing guards. It also includes establishing and operating transient facilities when overnight stops are required. The TA commander is also responsible for EPW and CI facilities in the COMMZ. Permanent internment camps are not normally established in an overseas theater if EPW are evacuated to CONUS.
Processing and initial disposition of EPW and CI may vary by theater. MP units that accomplish the EPW and CI mission at TA level are organized and assigned based on the internment plan and the size of the theater.

MP EPW BRIGADE (TOE 19-762L)

The highest command and control element for MP EPW and CI units at TA level is the MP EPW brigade. The MP EPW brigade is assigned to the TA Personnel Command (PERSCOM) when a PERSCOM exists. When a TA does not have a PERSCOM assigned, the MP EPW brigade is assigned to the Theater Army Area Command (TAACOM). At level one, the MP EPW brigade-

o commands and controls two to seven EPW and CI battalions (TOE 19-646L).
o plans and supervises TA EPW and CI collection and evacuation operations.
o coordinates with host-nation authorities on matters pertaining to EPW and CI.

The MP EPW brigade depends upon TAACOM units for health services; finance, personnel, and administrative services; transportation; interpreters; and maintenance. An organizational diagram of the MP EPW brigade is shown at Figure 3-1. The EPW brigade is authorized at one per TAACOM or one per contingency command, as required.

COMMAND AND CONTROL DETACHMENT (TOE 19-543LH)

The highest command and control element for MP EPW and CI units at TA level when an MP EPW brigade is not authorized is the command and control detachment. The command and control detachment is assigned to the TA MP brigade (TOE 19-172L). At level one, the command and control detachment-

o commands and controls two or three EPW and CI battalions (TOE 19-646L) processing and interning less than 12,000 EPW and CI.
o plans and supervises TA EPW and CI collection, evacuation, and internment operations.

The command and control detachment depends upon TAACOM units for health services; finance, personnel, and administrative services; transportation; interpreters; and maintenance. An organizational diagram of the command and control detachment is shown at Figure 3-2. The command and control detachment is not authorized in a theater when less than two EPW and CI battalions are assigned.
MP EPW AND CI BATTALION (TOE 19-646L)

The EPW and CI battalion accomplishes the same mission as and replaces the MP EPW processing company (TOE 19-237H) and the MP EPW camp (TOE 19-256H). The EPW and CI battalion is assigned to the MP EPW brigade, the command and control detachment, or the TA MP brigade.
Figure 3-1. MP EPW Brigade (TOE 19-762L).
Figure 3-2. Command and Control Detachment (TOE 19-543LH).
The EPW and CI battalion is assigned to the MP EPW brigade in large, mature theaters when more than 12,000 EPW and CI are processed and evacuated. It is assigned to the command and control detachment when two or three battalions process, evacuate, and intern less than 12,000 EPW and CI. It is assigned to the TA MP brigade when only one EPW and CI battalion is assigned to process and evacuate prisoners. The MP EPW and CI battalion is structured at four levels: TOE 19-646L100, L200, L300, and L400. Each TOE is capable of handling a different number of EPW and CI. The L100 battalion is capable of handling 1,000 EPW or CI. The L200 battalion can handle 2,000 EPW or CI. The L300 can handle 3,000 EPW and CI. The L400 battalion can handle 4,000 EPW or CI. At level one, the EPW and CI battalion provides the following: o o o o o o o o o o o Command and control, and administrative and logistic support for one enclosure holding 1,000 to 4,000 EPW and CI. Command, staff planning, and supervision of assigned and attached units. Food, clothing, religious, and recreational support to EPW and CI. Utilities including heat, light, water, cooking, and sanitation facilities. Supervision of EPW and CI work projects. Provides limited health services support and preventive medicine services to EPW and CI. This includes supervising qualified retained personnel in providing medical care and preventive medicine for EPW and CI. Supervision of the subordinate unit's organization supply, communications, and unit maintenance. Supervision and assistance to subordinate units in training and administration. Staff supervision of EPW collection and evacuation operations. Internal frequency modulated (FM) and wire communications net. Personnel and finance sections capable of in-processing eight EPW and CI records per hour and maintaining up to 4,000 records per month.
The MP EPW and CI battalion depends on TAACOM units for--

o health service support and medical equipment maintenance.
o engineer, finance, and personnel and administrative services.
o transportation of EPW and CI.
o water purification services and supply classes I through IX.
The organization and structure of MP EPW and CI battalions is shown in Figures 3-3 through 3-6.

MP GUARD COMPANY (TOE 19-667L)

The MP guard company provides guards for EPW and CI, or for U.S. military prisoners, installations, and facilities. The MP guard company is assigned to an EPW camp, MP confinement battalion, MP battalion, or railway operations battalion depending on operational requirements. At level one, the MP guard company can provide guards for enclosures with up to 2,000 EPW and CI. The MP guard company assists in the defense of the EPW and CI battalion area. It provides maintenance on organic equipment (except power generators). The MP guard company depends on the EPW and CI battalion and other TAACOM units for--

o health services.
o religious, legal, finance, and personnel and administrative services.
o maintenance of power generators.
The organization and structure of the MP guard company is shown in Figure 3-7.

MP ESCORT GUARD COMPANY (TOE 19-647L)

The MP escort guard company provides personnel to supervise and secure EPW and CI during movement in the COMMZ. The MP escort guard company is assigned to the EPW and CI brigade (when a brigade is authorized), to an EPW and CI battalion in theaters where an EPW and CI brigade is not authorized, or to the PW command. At level one, the MP escort guard company provides security for the following number of EPW and CI during movement:

o Foot .................................1,000 to 1,500
o Vehicle.............................1,500 to 2,000
o Rail...................................2,000 to 3,000
Figure 3-3. EPW and CI Battalion (TOE 19-646L100).
Figure 3-4. EPW and CI Battalion (TOE 19-646L200).
Figure 3-5. EPW and CI Battalion (TOE 19-646L300).
Figure 3-6. EPW and CI Battalion (TOE 19-646L400).
Figure 3-7. MP Guard Company (TOE 19-667L).
The MP escort guard company does not have organic transportation for guard squads or teams. Transportation for movement of MP for evacuation of EPW and CI must be arranged as it is for prisoners. The unit assists in defense of the unit area and is capable of maintaining organic equipment. The MP escort guard company depends upon the unit that it is assigned to and other TAACOM units for health, religious, finance, food, and personnel and administrative services. It also depends on the other units for transportation to move guards and EPW and CI. The organization and structure of the MP escort guard company is shown in Figure 3-8.

PRISONER OF WAR INFORMATION CENTER (TOE 19-643L)

The branch Prisoner of War Information Center (PWIC) accounts for US captured EPW and CI. It maintains information about US soldiers and civilians who are captured by the enemy or are missing in action. The branch PWIC is assigned to the MP EPW brigade or the TA MP brigade (when an EPW brigade is not authorized). At level one, the branch PWIC--

o collects, processes, and reports information received from US EPW and CI facilities. This includes information about US captured EPW who were transferred to the host nation (HN) or allied forces for internment.
o receives, documents, and reports information about US soldiers and civilians who are missing or held by the enemy.
o receives, stores, and disposes of personal property belonging to EPW and CI who have died, escaped, or been repatriated. This includes property belonging to enemy soldiers killed in action that was not disposed of in graves registration channels.
o maintains a central location system for US captured EPW and CI who were transferred to the HN or allied forces for internment.
The branch PWIC depends on the unit it is assigned to and other TA units for--

o food services and water.
o health, legal, religious, finance, and personnel and administrative services.
o supply support (classes I through IX).
o transportation of organic equipment and assigned and attached personnel.
Figure 3-8. MP Escort Guard Company (TOE 19-647L).
o maintenance of automatic data processing equipment, power generators, and communications-electronic equipment.
The organization and structure of a branch PWIC is shown in Figure 3-9.

EPW AND CI COMMAND LIAISON DETACHMENT (TOE 19-543LD)

The EPW and CI command liaison detachment provides command, control, and supervision of MP EPW and CI liaison teams. It provides staff planning and a coordinating link from the TA commander to the HN or allied prisoner of war command and staff. The EPW and CI command liaison detachment is assigned to the TA MP brigade.

EPW AND CI CAMP LIAISON TEAM (TOE 19-543LE)

The EPW and CI camp liaison team provides a coordinating link between the TA commander and HN or allied EPW facilities holding US captured EPW. The branch camp liaison teams ensure that US captured EPW transferred to HN or allied forces for internment are treated according to US policy and the Geneva Conventions and protocols. One branch camp liaison team is authorized for each HN or allied camp holding US captured EPW. The branch camp liaison team is assigned to the EPW and CI command and control detachment.

EPW AND CI INFORMATION CENTER LIAISON TEAM (TOE 19-543LF)

The information center liaison team provides a coordinating link between the TA commander and a HN or allied force national prisoner of war information center. The information center liaison team monitors the records of US captured EPW and CI interned by HN or allied forces. The information center liaison team is assigned to the EPW and CI command liaison detachment.

EPW AND CI PROCESSING AND LIAISON TEAM (TOE 19-543LG)

The processing liaison team provides a coordinating link between the TA commander and a HN or allied force processing point. One processing point liaison team is authorized for every HN or allied force processing point designated to accept custody of US captured EPW and CI for internment. Processing point liaison teams are assigned to the EPW and CI command liaison detachment.
SUPPORT REQUIRED BY LIAISON TEAMS AND DETACHMENTS

Liaison detachments and teams depend on other TAACOM units for--

o health, legal, religious, finance, and personnel and administrative services.
o supply support (classes I through IX).
Figure 3-9. Branch PWIC (TOE 19-643L).
o transportation of personnel and equipment.
o maintenance of organic equipment.
o food service support.
The organization of the liaison detachment and teams is shown in Figure 3-10.

PART B - PROCESSING US CAPTURED EPW AND CI

EPW evacuated from corps holding areas or captured in the COMMZ are evacuated to the nearest EPW and CI battalion in the COMMZ for processing. Depending on the theater and the EPW internment plan, prisoners may be--

o processed and interned by US forces within the theater.
o processed and evacuated out of the theater for internment.
o processed and evacuated to HN or allied forces for internment.
CI who are ordered into confinement for imperative reasons of security to US forces in the territory, or for conviction of an offense in violation of penal provisions issued by the US forces occupying the territory, are--

o processed and interned by US forces in the theater, or
o processed and transferred to HN or allied forces for internment in the theater.
CI may not be evacuated out of the territory in which they were ordered into confinement.

PROCESSING GUIDELINES

The extent of processing done for EPW may vary by theater. EPW are processed completely in theaters where they are transferred to HN or allied forces. When EPW are evacuated out of theater, only a limited amount of processing may be done before they are evacuated. Processing of EPW in theaters where they are interned by US forces may consist of initial processing before internment, with complete processing done after EPW are placed into permanent camps. The TA commander or provost marshal may specify to what extent EPW are processed based on the situation. Guidance on the extent of processing done in the COMMZ may also be provided by the MP commander. CI are processed in the same manner as EPW, but are not evacuated out of theater.
Figure 3-10. EPW and CI Liaison Detachments.
Uncooperative EPW may be placed into a maximum security area until they become cooperative. No more than one EPW should be processed by any one team at a time. A minimum of two armed guards should be inside the processing structure at all times. Specific requirements and recommended procedures for processing EPW, CI, and RP are discussed in the following paragraphs.

PROCESSING

Processing EPW and CI by the EPW and CI battalion in the COMMZ may be accomplished by teams. Other methods may be used as long as processing requirements established by US policy and local commands are met. A suggested method for organizing processing operations is shown in Figure 3-11. It consists of eight stations. Each of the following stations is operated by a team of US personnel:

Receiving
1. Search.
2. Clean up.
3. Medical Evaluation.
4. Issue Personal Items.

Processing
1. Administrative Accountability.
2. Photography.
3. Property Inventory.
4. Records review.

Receiving Station 1: Search

All EPW are searched before they enter the processing area. They are searched by persons of the same sex when possible. Assign a temporary control number to each captive to link them to their property until an ISN is assigned. The search team uses minimum force to control EPW. They must show respect for EPW and their property. The search team removes all of the EPW's property and places it in a container. Once the search has been completed, a guard escorts the EPW and his or her property to Station 2.

Station 2: Clean up

The EPW are allowed to shower, shave, and get haircuts for hygiene.

Station 3: Medical Evaluation

The EPW are inspected for signs of illness or injury. Problems which are identified are handled by the processing facility if possible; if not, a determination is made on evacuation and which facility the EPW will be taken to.
Treatment and immunization records are initiated and EPW are disinfected and immunized.

Station 4: Issue Personal Items

The EPW are provided with personal comfort items (toilet paper, soap, toothbrush, and toothpaste). They are then issued clean clothing, which may be obtained from the captives at station one, procured through normal supply channels, or taken from captured enemy supplies. This clothing will be marked with PW, CI, or RP and should be distinctive if possible.

Processing Station 1: Administrative Accountability

The administrative accountability station involves the most paperwork and can create the biggest slowdown and backlog. The personnel handling this station should be proficient in the EPW's language or have an interpreter present in order to interview the EPW and prepare the paperwork.
Figure 3-11. EPW Processing Operation.
Figure 3-11. EPW Processing Operation (continued).
EPW are required to give their name, rank, service number, and date of birth. Before questioning begins, an interpreter informs the EPW of the following statement in the EPW's native language: "You are a prisoner of war. In accordance with the Geneva Conventions relative to the treatment of prisoners of war, you are required to give your full name, rank, service number, and date of birth. If you refuse to give this information, you may not receive the privileges due your rank. Are you willing to answer my questions?"

If the EPW says "yes," complete the forms and continue processing. If the EPW replies "no," complete the forms as much as possible with the known information. EPW who refuse to give the required information are liable to a restriction of the privileges accorded to their rank or status. Information required to complete the forms will come from the EPW capture tag, ID card, interviews, or other source documents.

Internment Serial Number. The first stage of the processing done by the receiving station is the assignment of an internment serial number (ISN) to every EPW or CI. The ISN consists of two components separated by a dash, as described below.

The first component contains three symbols. The first symbol will be the letters "US." The second will be a letter or number standing for the command under which the EPW came into US custody. This symbol will be assigned by the branch PWIC or the TA command. The third symbol will be two letters standing for the enemy country in whose armed force the EPW served. Codes for the third symbol, extracted from AR 18-12-10, are shown in Figure 3-12.

The second component will be a number followed by the letters "EPW" (if the person being processed qualifies for status as an enemy prisoner of war), "RP" (if the person being processed qualifies for status as a retained person), or "CI" (if the person being processed qualifies for status as a civilian internee). Numbers will occupy five positions.
These numbers will be assigned consecutively to all EPW, RP, and CI processed in a command. Blocks of numbers are assigned by the branch PWIC or TA command. For example, the first EPW processed by the US Army in a theater designated "9," whose country was designated "AB," will be assigned the following ISN: US9AB-00001EPW. The tenth such EPW processed by the same command will be assigned US9AB-00010EPW.

Guidance for handling captured personnel whose category is not readily apparent is found in AR 190-8: "Questions may arise as to whether enemy personnel captured by the US Armed Forces belong to any of the categories. If so, such persons will receive the same treatment to which EPW are entitled until such time as their status has been decided by a military tribunal convened by competent military authority."
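Because the ISN is a fixed-layout string, its construction can be sketched in a few lines of code. The following Python fragment is an illustration only, not part of the lesson; the function and argument names are hypothetical, and in practice the command symbols, country codes, and blocks of serial numbers are assigned by the branch PWIC or TA command:

```python
def format_isn(command_code: str, country_code: str, serial: int, category: str) -> str:
    """Build an ISN from the scheme described above (illustrative sketch).

    First component:  "US" + command symbol + two-letter country code.
    Second component: serial number padded to five positions + category
    suffix ("EPW", "RP", or "CI"), joined to the first by a dash.
    """
    if category not in ("EPW", "RP", "CI"):
        raise ValueError("category must be EPW, RP, or CI")
    return f"US{command_code}{country_code}-{serial:05d}{category}"

# The first EPW processed in theater "9", country "AB":
print(format_isn("9", "AB", 1, "EPW"))    # US9AB-00001EPW
# A civilian internee example matching the lesson's US5AB-00325CI:
print(format_isn("5", "AB", 325, "CI"))   # US5AB-00325CI
```

The `{serial:05d}` format specifier enforces the "numbers will occupy five positions" rule by zero-padding the consecutive serial number.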
Figure 3-12. Country Codes for Internment Serial Number.
Figure 3-12. (Continued).
Figure 3-12. (Continued).
If an EPW has been transferred from an allied power, the ISN assigned by that power will be retained. If the EPW has not been assigned an ISN, a number will be assigned. Adding a prefix or suffix to an ISN assigned by an allied power is not authorized (AR 190-8). If, for any reason, an EPW is assigned two ISNs, the last assigned number will be used. The letters "RP" will follow the second component of an ISN issued for a retained person; for example: US4CN-00001RP. The letters "CI" will follow the second component of an ISN issued for a civilian internee; for example: US5AB-00325CI.

Detainee Personnel Record. The receiving station continues processing EPW and CI after an ISN is issued. An original plus at least one copy of DA Form 4237-R (Detainee Personnel Record) is prepared for every EPW, CI, and RP. (See Figure 3-13.) The form may be typed or handwritten. DA Form 4237-R is reproduced locally on 8-1/2-inch by 11-inch paper. A copy for reproduction purposes is located in AR 190-8. The minimum information that should be entered on DA Form 4237-R includes the following:

o Block 1: The EPW's assigned ISN.
o Block 2: The EPW's name.
o Block 3: The EPW's rank.
o Block 4: The EPW's service number.
o Block 6: Date of capture.
o Block 7: The EPW's date of birth.
o Block 8: The EPW's nationality.
o Block 13: The date the EPW is being processed.
o Block 14: The EPW's sex.
o Block 15: The EPW's language(s).
o Block 17: The EPW's physical condition.
o Block 23: The unit identification code (UIC) of the capturing unit.
o Block 25: Place of capture.
o Block 26: Power served by the EPW.
o Block 33: Other particulars from the ID card.
Block 35 of DA Form 4237-R must be completed with the EPW's impounded property. The items are entered with the same description and quantities as they appear on the DA Form 1132. Only impounded property is entered on DA Form 4237-R. Inventoried property is placed into secure storage and transferred with the prisoner (in possession of the guards). When processing at Station 3 is completed, the EPW is escorted to Station 4.
Figure 3-13. DA Form 4237-R (Detainee Personnel Record).
Figure 3-13. (Continued).
o Block 37: Photograph (front view only).
o Blocks 38, 39, 40, and 41.
Officer and enlisted EPW who claim status as RP must also complete a classification questionnaire. Officer RP complete DA Form 2672-R (Classification Questionnaire for Officer Retained Personnel); enlisted RP complete DA Form 2673-R (Classification Questionnaire for Enlisted Retained Personnel). See Figures 3-14 and 3-15. DA Forms 2672-R and 2673-R are locally reproduced on 8 1/2-inch by 11-inch paper. Copies for reproduction purposes are located in AR 190-8.

DA Form 4237-R is prepared for every EPW, CI, and RP processed. The original form accompanies the EPW, CI, or RP throughout internment. A copy of the form is provided by the EPW and CI battalion that processed the prisoner to the branch PWIC. Other copies of the form may be used to satisfy local SOPs. Sources of information used to complete the DA Form 4237-R include the EPW capture tag, identification documents found on the EPW, CI, and RP, or interviews by the interpreters.

Wrist Bands. After completing DA Form 4237-R, the receiving station team prepares a wrist band for the EPW that lists the EPW's ISN and last name. Wrist bands come in a kit that may be requisitioned (NSN: 8465-00-143-0928, LIN: N83206, SOURCE: Chapter 2, SB 700-20). The wrist band is fastened on one of the EPW's wrists (left or right according to local SOP). If possible, the EPW should be instructed not to tamper with or remove the wrist band. The band may be color coded to identify EPW, CI, and RP by category:

o Blue: Officers.
o Red: Noncommissioned officers.
o Yellow: Enlisted soldiers.
o Black: Retained persons.
o Green: Civilian internees.
o White (uncolored): Other.
When all tasks have been completed, have the guard escort the EPW to Station 2.
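The color coding amounts to a simple category-to-color lookup. A minimal Python sketch, assuming only the categories and colors listed in this lesson (the table and function names are hypothetical; the coding actually used is set by local SOP):

```python
# Hypothetical lookup table for the wrist band color coding described
# above; illustrative only, not an official assignment.
WRIST_BAND_COLORS = {
    "officer": "blue",
    "noncommissioned officer": "red",
    "enlisted soldier": "yellow",
    "retained person": "black",
    "civilian internee": "green",
    "other": "white",
}

def band_color(category: str) -> str:
    """Return the band color for a category, defaulting to white (uncolored)."""
    return WRIST_BAND_COLORS.get(category.lower(), "white")

print(band_color("Officer"))            # blue
print(band_color("Civilian Internee"))  # green
```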
Figure 3-14. DA Form 2672-R (Classification Questionnaire for Officer Retained Personnel).
Figure 3-14. (Continued).
Figure 3-15. DA Form 2673-R (Classification Questionnaire for Enlisted Retained Personnel).
Station 2: Photography

Photograph

The photograph section prepares a name board listing the ISN, last name, first name, and middle initial of the prisoner. The photographer will position the EPW in front of the camera and position the name board under the prisoner's chin. Two photographs are taken. One photograph is attached to the DA Form 4237-R. The second photograph is attached to the identification card (if necessary).

After the photographs are taken, the EPW is taken to a weight scale. The prisoner's height and weight is recorded on DA Form 2664-R (EPW Weight Register). A sample of DA Form 2664-R is shown at Figure 3-16. One copy of DA Form 2664-R is required for each prisoner. The weight register contains the EPW's name (last, first, middle initial), ISN, height in inches, weight in pounds, and date. DA Form 2664-R is locally reproduced on 8-inch by 5-inch cards. A copy for reproduction purposes is located in AR 190-8.

The photograph team is responsible for maintaining accountability of all prisoners as they pass through the photograph section. A method of controlling prisoners is to log each EPW by name and ISN chronologically as they are processed. For example--

NAME                        ISN
BOGDANOV, NIKI YURI         USTUR-13675EPW
ANDROPOV, VLADAMIR ALEX     USTUR-13676EPW

Process

If the EPW does not have an ID card issued by his or her government, a DA Form 2662-R (EPW Identity Card) will be issued. A sample of DA Form 2662-R is shown in Figure 3-17. Retained persons are issued DA Form 2662-R when necessary. If the EPW is identified as a civilian internee, a CI Identity Card, DA Form 2677-R, will be issued. A sample of DA Form 2677-R is shown at Figure 3-18. DA Forms 2662-R and 2677-R are reproduced locally on 5-inch by 3-inch cards, printed head to foot. Copies for reproduction purposes are located in AR 190-8 and AR 190-57. When completing DA Form 2662-R and DA Form 2677-R, the ISN is entered on the rear of the identification cards in the box marked "Other Marks of Identification."

A notation that an identification card has been issued to an EPW or CI is made in block 36 (Remarks) of DA Form 4237-R.
Figure 3-16. DA Form 2664-R (EPW Weight Register).
Figure 3-17. DA Form 2662-R (EPW Identity Card).
Figure 3-18. DA Form 2677-R (CI Identity Card).
If known, the blood type of the EPW or CI is listed on identification cards and in block 33 of DA Form 4237-R. When processing at Station 4 is completed, the EPW is escorted to Station 5.

Fingerprint

Two copies of DA Form 2663-R (EPW Fingerprint Card) are prepared for each EPW and CI. The following entries must be made:

o Internment serial number.
o Name (last, first).
o Grade.
o Power served.
o Nationality.
o Sex.
o Age.
o Color of eyes.
o Color of hair.
A sample DA Form 2663-R is shown at Figure 3-19. If possible, all entries are typed. The fingerprint section is responsible for ensuring that entries on the DA 2663-R are legible and complete. DA Form 2663-R is reproduced locally on 8-inch by 8-inch cards. A copy for reproduction is located in AR 190-8. Fingerprints are taken from each EPW. The prints are made by rolling each finger from fingernail edge to fingernail edge. The fingerprint section personnel will check both cards upon completion to ensure that all impressions are clear. If any print is not clear, the prints will have to be retaken and the cards completely redone, without exception. EPW who are uncooperative and will not allow their fingerprints to be taken are placed in maximum security isolation. Processing paperwork is held until the prisoner becomes cooperative. Force cannot be used to induce cooperation from EPW. When legible fingerprints have been taken, a member of the fingerprint team signs DA Form 2663-R, then has the EPW sign the card directly below his or her signature. When all tasks have been completed, the EPW is escorted to Station 3.
Station 3: Property Inventory

The property station team completes two copies of DA Form 1132 (Prisoner's Personal Property List - Personal Deposit Fund). An example of DA Form 1132 is shown in Figure 3-20. DA Form 1132 may be typewritten or printed legibly.
Figure 3-19. DA Form 2663-R (Fingerprint Card).
Figure 3-20. DA Form 1132 (Prisoner's Personal Property List - Personal Deposit Fund).
The property station enters the following information in the block marked "Register of Service Number/SSN": date, last name, first name, middle initial of the EPW, and the ISN. Property taken from EPW is inspected and inventoried against DA Forms 4137 (when used). Impounded and retained property is listed on DA Form 1132. Confiscated property is not listed on DA Form 1132. The following items are segregated from retained and impounded property and disposed of in accordance with command directives:

o Weapons and knives, signal devices, ammunition, military documents, codes, ciphers, pictures, and map sketches of military installations.
o Any EPW found to have large quantities of US or foreign currency will be segregated for interrogation by MI and the currency will be confiscated until the investigation is completed and the disposition of currency is approved by higher authorities.
o Tag any item of intelligence value with the EPW's name and ISN.
Prepare two copies of DA Form 1132 (Receipt for Impounded Property). Describe the impounded property. For example, write "A ring of gold metal with a clear white faceted inset," not "A gold ring with diamond inset." When listing currency, put the denomination, country origin, and serial number for each bill. If more than one bill of the same denomination is received, list all serial numbers in one entry. Be sure to describe appearance rather than list items. Write "Last Entry" and draw lines from the words to the left and right margins. List the quantity of each impounded item in the disposition column marked "Stored, Container Valuable." Have the EPW initial each column used and sign the DA Form 1132. If the EPW refuses to sign, print "REFUSED TO SIGN" in the signature block. Have a witness initial each column used, sign in the spaces provided, and enter the witness' organization on DA Form 1132. The person preparing the DA Form 1132 places initials in the appropriate columns and signs the form. Place impounded items in a container (envelope or cardboard box), making sure the EPW sees each item being placed in the container.

o Place the original copy of the completed DA Form 1132 in the container.
o Seal the property container.
o Print the EPW's last name, first name, middle initial, and ISN on the front of the container.
o Give the EPW the duplicate copy of DA Form 1132 as a receipt.
A member of the photograph section enters the height and weight of EPW on identification cards (if required) and fingerprint cards.

Station 4: Records Review

The records section is the last step in the administrative processing before the EPW is placed in an enclosure. The records team is responsible for ensuring all paperwork is complete and readable. The team ensures that all copies of DA Form 4237 and DA Form 1132 have been signed by the EPW and US personnel responsible for the property. The records team laminates the identification cards (when required). Prisoners are instructed to carry identification cards at all times. The records team assembles completed records as follows:

o Original copy of DA Form 4237-R with photograph and capture tag attached. Copies of DA Form 4137 and DD Form 629 used to account for prisoners and their property are also attached if available.
o Two copies of DA Form 2663-R.
o One copy of DA Form 2664-R.
EPW and RP complete DA Form 2665-R (Capture Card for Prisoner of War) and DA Form 2666-R (Prisoner of War - Notification of Address) after processing has been completed. The records team forwards DA Form 2665-R (Figure 3-21) to the Central Prisoners of War Agency. The prisoner is allowed to send DA Form 2666-R (Figure 3-22) to a relative or next of kin. DA Forms 2665-R and 2666-R are locally reproduced on 6-inch by 4-inch cards. Copies for reproduction purposes are located in AR 190-8.

CI complete two copies of DA Form 2678-R (Civilian Internee - Notification of Address) (Figure 3-23) after processing has been completed. The records team sends one copy to the Central Prisoners of War Agency. The CI is allowed to send the second DA Form 2678-R to a relative or next of kin. DA Form 2678-R is locally reproduced on 6-inch by 4-inch cards. A copy for reproduction purposes is located in AR 190-57.

PART C - INITIAL DISPOSITION OF EPW

EPW may be evacuated out of theater for internment by US forces, transferred to HN or allied forces for internment, or interned in theater by US forces.

EVACUATION OF US CAPTURED EPW TO CONUS

US captured EPW may be evacuated to the continental US (CONUS). After EPW have been processed, they are held in US EPW enclosures until transportation is available to evacuate them out of theater. EPW may be evacuated by aircraft or ship.
Figure 3-21. DA Form 2665-R (Capture Card for Prisoner of War).
Figure 3-22. DA Form 2666-R (Prisoner of War - Notification of Address).
Figure 3-23. DA Form 2678-R (Civilian Internee - Notification of Address).
The commander, Forces Command (FORSCOM), has responsibility for evacuating US captured EPW to CONUS and for operating EPW facilities. MP units that accomplish this mission are assigned or attached to the MP Prisoner of War (PW) Command. The PW Command provides command and control over MP EPW battalions, MP guard companies, MP escort guard companies, and branch PWICs, as assigned. The organization and function of MP units assigned or attached to the CONUS PW Command are the same as or similar to EPW and CI units at TA level.

TA MP responsible for evacuating EPW out of theater request transportation through a Regional Movement Control Team (RMCT). RMCTs are assigned to the TAMCA as needed. The TAMCA coordinates transportation to CONUS through a representative from the Military Traffic Management Command (MTMC) in theater. The MTMC representative arranges movement with a representative from the US Air Force Military Airlift Command (MAC) or the US Navy Military Sealift Command (MSC) in theater. The most probable means of evacuating EPW out of theater to CONUS will be by air. The MP manifest EPW for evacuation to CONUS based on the transportation allocated.

The CONUS PW Command maintains liaison with the TA MP responsible for EPW. The PW Command liaison officer in theater coordinates movement of escort guards from CONUS to evacuate EPW. Transportation to move EPW and escort guards from the point of arrival in CONUS to internment camps is arranged by the CONUS PW Command through MTMC. Escort guards from CONUS go forward to theater and accept custody of EPW from TA escort guards. EPW are evacuated from theater to a designated point in CONUS and further evacuated by CONUS escort guards to the final destination. Movement of EPW in CONUS is planned and coordinated by the CONUS PW Command. Internment facilities in CONUS are operated by the same type of EPW battalions and MP guard companies operating in theater.
Processing that was not accomplished by TA MP is done when EPW arrive in CONUS.

INTERNATIONAL TRANSFER

US captured EPW may be transferred to HN or allied forces for internment. EPW are held at a US processing and reception camp operated by an EPW and CI battalion until processing is complete and transfer to HN or allied forces can be arranged. Specific procedures for transferring US captured EPW to HN or allied forces are governed by treaty or agreement between the US and the HN or allied forces. US policy requires that accountability be maintained for US captured EPW who are transferred to HN or allied forces for internment. The EPW and CI processing point liaison teams locate at HN and allied processing points. US captured EPW and CI are evacuated from the US processing points by MP from an escort guard company assigned to the TA EPW command and control headquarters (EPW brigade or battalion, depending on the theater).
HN or allied forces accept custody of US captured EPW and CI according to treaties or agreements in force. The processing point liaison team ensures US interests are maintained at the point of transfer and documents the transfer of custody. US captured EPW and CI may be transferred only to another party to the Geneva Conventions. Prisoners are transferred after the US is satisfied that the HN or allied force is willing and able to apply the provisions of the Geneva Conventions. The HN or allied force accepting US captured EPW and CI is responsible for applying the provisions of the Geneva Conventions while the EPW and CI are in their custody. If the accepting HN or allied force fails to carry out the provisions of the Conventions in any important respect, the US is responsible for taking effective measures to correct the situation.

INTERNMENT IN THEATER BY US FORCES

EPW may be interned in theater by US forces. The procedures and policies for internment in theater will be covered in the same part of this lesson as internment operations in CONUS.

PART D - ACCOUNTING FOR EPW AND CI

EPW and CI for whom a DA Form 4237-R has been completed must be accounted for on DA Form 2674-R (Enemy Prisoner of War/Civilian Internee Strength Report). DA Form 2674-R (Figure 3-24) is the basic record of the official status of the reporting organization and each EPW and CI assigned to a camp or hospital. EPW and CI are considered assigned if they have been processed (DA Form 4237-R has been prepared).

STRENGTH REPORTS

DA Form 2674-R is the official source of data for EPW and CI. It meets the requirements for reporting and accounting for EPW and CI under the Geneva Conventions. DA Form 2674-R is locally reproduced on 8-1/2-inch by 11-inch paper. A copy for reproduction purposes is found in AR 190-8 and AR 190-57. Commanders of EPW camps (battalions) and hospitals to which EPW and CI are assigned are responsible for the accuracy and preparation of strength reports.
The Commanding General, Forces Command (CGFORSCOM) or the TA commander may authorize branch camp commanders to prepare and submit separate reports. Strength reports may be prepared by typewriter if available. Entries may be printed legibly in block capital letters using indelible pencil or blue-black ink. Copy 1 will be dark enough to permit microfilming. The preparation of strength reports in ordinary script handwriting is not authorized. Carbon copies will be clear and distinct. Ensure that entries are not obscured by stamping, folding, perforation, or other markings. Abbreviations authorized by AR 310-50 may be used.
Figure 3-24. DA Form 2674-R (Enemy Prisoner of War/Civilian Internee Strength Report).
Figure 3-24. (Continued).
Accuracy is extremely important when preparing strength reports. When there is not enough space in "Section B - Gains/Losses/Changes" to complete required entries, use additional DA Forms 2674-R. Erasures may not be made on a strength report. Correct errors or incorrect entries on the strength report or any attachments as follows:

o Prepare an entirely new report or attachment (destroy the incorrect report).

o Delete the incorrect entry by typing or drawing a line through the entry. The person authenticating the report must initial changes and deletions.
Once a strength report has been forwarded to the branch PWIC, make corrections by a corrective remark on a later report. Under no circumstances is the submission of a corrected copy of a strength report authorized to replace a report previously submitted. The branch PWIC may return strength reports to preparing agencies when obvious errors or omissions are discovered in the heading or authentication sections of the report. Errors will be corrected and the report will be returned immediately. Such reports will not be considered "corrected copies." The reporting organization may have to prepare true copies of strength reports for those reports submitted with faint type, with obliterated data, or with mutilated forms that prevent microfilming. True copies of strength reports may also be prepared for reports that are lost en route to the branch PWIC. If a printed heading of the strength report does not apply, leave the space blank. The word "no" or "none" will not be used except as prescribed in Appendix B, AR 190-8.

Reporting Periods

Reports are prepared not later than 0900 after the close of each strength report day. A strength report day is the 24-hour period beginning at 0001 hours and ending at 2400 hours of any given day. Exceptions may be granted if it is absolutely impossible to meet this deadline. Reports are submitted to the branch PWIC on the same day they are prepared. When a branch PWIC is not assigned or operational (low intensity conflict and contingency force operations, for example) reports are submitted through command channels to the Personnel Contingency Cell, HQDA.

Classification

Strength reports are not normally classified. When classification is necessary, AR 380-5 applies. Reference to a classified order as the authority for a remark does not require the strength report to be classified.
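The reporting-period rule above (a report day runs 0001 to 2400 hours, with the report due not later than 0900 the next day) can be sketched as a small calculation. The function name and the use of Python's datetime are illustrative only, not part of the regulation:

```python
from datetime import datetime, timedelta

def report_deadline(strength_report_day: datetime) -> datetime:
    # A strength report day covers 0001-2400 hours; the report for that
    # day is prepared not later than 0900 on the following day.
    next_day = strength_report_day.date() + timedelta(days=1)
    return datetime(next_day.year, next_day.month, next_day.day, 9, 0)

# Hypothetical example: report day of 15 March 1981.
print(report_deadline(datetime(1981, 3, 15)))  # 1981-03-16 09:00:00
```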
Types of Reports

An initial strength report is prepared for each organization as of the date detainees are assigned or attached, whether or not they have arrived. Final strength reports are prepared when an EPW internment facility is inactivated, disbanded, discontinued, or redesignated. Final strength reports for internment facilities that are being redesignated are prepared with an "as of strength" for the day preceding redesignation. All other final strength reports are prepared with an "as of strength" for the day of inactivation or disbandment.

PREPARATION OF STRENGTH REPORTS

Section A

Section A reflects the assigned and attached EPW and CI as of 2400 hours for the strength report day. It also shows gains and losses during the report period. Totals in Section A are based on additions and deletions supported by entries in "Section B - Gains/Losses/Changes." If additional pages are required for Section B, enter strength figures on page 1. Record the strength of EPW and CI when assigned to a reporting organization by proper Army orders under the proper columns.

Previous strength (line 1): Enter the total from line 19 of the report for the previous 24-hour period.

Gains (lines 2 - 6):

o "Initial" gains are those EPW and CI who are assigned to the camp for the first time.

o "Return from escape" gains are those detainees who have been previously accounted for under "initial" and are returned to the camp after escape.

o "Assigned from another power EPW camp" gains are those EPW and CI who have been transferred from another power under a transfer agreement between the power and the US.

o "Other" gains are any other EPW and CI who arrive at a camp and are not covered by one of the categories listed above.
Losses (lines 7 - 14):

o "Transferred to another power EPW camp" losses are those EPW and CI who are transferred under a transfer agreement to HN or allied forces.

o "Escape" losses are those EPW and CI who are missing from the reporting organization because of escape.

o "Repatriation" losses are those EPW and CI who have departed the reporting organization because they have been repatriated to their native power.

o "International transfer" losses are those EPW and CI who are transferred to an international organization (International Committee of the Red Cross (ICRC), neutral country, etc.) for care. These EPW and CI are not those transferred under a host-nation support agreement or those transferred to a country allied with the US in the ongoing hostilities.

o "Release in place" losses are those EPW and CI who are released from the custody of the camp but who will remain within a US-controlled territory. An example of release in place losses is EPW and CI who refuse repatriation.

o "Transferred to another US EPW camp" losses are those EPW and CI who are transferred to another US camp.

o "Death" losses are those EPW and CI who die while in the custody of the reporting organization.

o "Other" losses are all other EPW and CI who are lost from the reporting organization and are not covered by one of the categories above.

Accountable, not present (lines 15 - 18):

o "Transfer to hospital" are EPW and CI who are ill or injured and who have been removed from the custody of the camp to the hospital.

o "In transit" are EPW and CI who are in a transit status to join a reporting organization to which assigned if departing before the effective date of change of strength accountability (EDCSA). The EDCSA is the date upon which a losing organization drops a detainee from its strength accountability. On the EDCSA date, a gaining organization assumes strength accountability for the same prisoner. EPW and CI who depart before EDCSA or on EDCSA are reported as "assigned - not joined" by the gaining organization, and will be shown as being "in transit."

o "Unprocessed" are EPW and CI who depart after or arrive before their EDCSA.

o "Other" are EPW and CI who are accounted for but are absent from the reporting organization.
Total (line 19): The total is the sum of the gains (lines 2 - 6) and the accountable, not present (lines 15 - 18), minus the losses (lines 7 - 14).

Section B

Section B of the strength report is the section in which a concise remark describing the change is reported. The section is compiled from data shown in
personnel records, orders, and other sources that affect the status of the new EPW and CI. Entries made in Section B form a permanent record of change in the status of EPW and CI. Entries will be made according to the instructions in AR 190-8 and AR 190-57. Status changes that affect the overall strength of the reporting organization or distribution of strength will be substantiated by corresponding entries in the strength section of the report. When reference is made to "basic personnel data," such reference will mean that the entry being described will reflect the name and internment serial number. Each time a remark is entered on the strength report that will require a change in status, the present or absent status that existed before the change being reported will be indicated. Any additional data required in the entry or any exceptions to the foregoing will be explained in the text pertinent to that particular entry.

Personnel Data. Basic personnel data entered in Section B depends upon the type of status change being made. Basic personnel data consists of--

o name (last name, first name(s), and grade), entered in that order.

o ISN with the proper prefix and suffix, if appropriate.
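The Section A arithmetic described earlier (line 19) can be sketched as a small check. The function name and the sample figures below are illustrative, not the form's official captions, and the formula follows the lesson text exactly (gains plus accountable, not present, minus losses):

```python
def line_19_total(gains, losses, accountable_not_present):
    # Line 19 as described in the text: the sum of the gains (lines 2-6)
    # and the accountable, not present (lines 15-18), minus the losses
    # (lines 7-14).
    return sum(gains) + sum(accountable_not_present) - sum(losses)

# Hypothetical figures for one strength report day:
gains = [12, 1, 0, 3]              # initial, return from escape, from another power, other
losses = [0, 2, 0, 0, 0, 4, 1, 0]  # the eight loss categories, lines 7-14
accountable = [5, 3, 0, 1]         # hospital, in transit, unprocessed, other
print(line_19_total(gains, losses, accountable))  # 18
```

The next day's report would carry this total forward as its "previous strength" entry on line 1.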
Disposition. The disposition is the statement, comment, or text used in reporting a change in the status of a detainee or a group of detainees. It should be clear and concise and should leave no doubt as to its intent. For certain dispositions, the authority for the change is required to be indicated. In order to avoid confusion with any other text, the date of the authority (special order or general order) will be indicated only when the year of the date of the authority is different from the current year. For example, if any given authority dated in 1980 is indicated in Section B of a strength report dated in 1981, the date of the authority will be indicated. Other directives, such as unnumbered orders, require the full date. Dispositions will be reported in three separate groups. Each grouping will be identified, in capital letters, as follows:

o "Gains." Include all gains to the assigned strength of the reporting organization.

o "Losses." Include all losses to the assigned strength of the reporting organization.

o "Accountable, not present." Include all dispositions other than those prescribed for reporting by gains and losses that pertain to detainees assigned or attached to the reporting organization.
Normally, the group will be entered in the sequence of gains, losses, and accountable not present. When a disposition has been inadvertently omitted from the proper group, such remark will be entered between the last remark and the record of events section, preceded by the proper group designation. The
designation of a group will not be entered on the strength report when no entries are reported pertinent to that group. All gains are supported by narratives immediately under the names reported (indented two spaces). Disposition will follow the basic personnel data of detainees for other individual changes. One space is left between groups of gains, losses, and accountable, not present categories.

Record of Events Section

The record of events section contains instructions for recording historical data by the reporting organization, changes affecting the status of the reporting organization, certain strength and personnel data, and other miscellaneous data. The record of events section is not a preprinted part of the form. It is created by typing or printing, in capital letters, the words "RECORD OF EVENTS SECTION" after the last entry made in Section B of the strength report. This section is used to compile the history of the reporting organization. It is also used to record data not reported elsewhere on the strength report. Changes, events, and other required information will be reported as they occur. An entry is made on the strength report prepared for the 10th, 20th, and last day of each month, even though no change or event occurs on such date. The entry will indicate that "usual organizational duties" were performed during the period covered since the last "record of events" entry. For activations of camps and hospitals, an entry is made on the initial strength report prepared for the camp or hospital that is activated. The entry will reflect the effective date of activation and designation of the organization, a statement that the internment facility is activated and the authority, the authorized prisoner capacity as of the date of activation, and the words "initial strength report." For inactivations of camps and hospitals, an entry is made on the final strength report prepared for the camp or hospital.
The entry will reflect the effective date of inactivation, a statement that the internment facility being reported is inactivated and the authority, the words "no detainees assigned", and the words "final strength report." Activations and inactivations of branch camps are reported by the parent camp. When the authorized capacity of an EPW and CI facility is revised, an entry is made indicating the new capacity and the authority. An entry is also made when an internment facility is temporarily depleted of prisoners but continues to remain on record for reporting. This applies to an internment facility that has not been inactivated, discontinued, or disbanded. The initial entry is made on the strength report prepared for the day on which all prisoners actually depart from the facility. Thereafter, no strength reports are submitted until prisoners are reassigned to the facility or until the facility is inactivated, discontinued, or disbanded.
Section C

The authentication section of the daily strength report is used to record the name, grade, branch, and signature of the authenticating officer. The commander of the facility normally signs the strength report. An officer or warrant officer authorized by the commander may also sign the report.

INSTRUCTIONS FOR ENTRIES

Entries for Initial Reports

EPW and CI are picked up as assigned and initially reported on the strength report of the camp or hospital at which they are processed. The entries are made on the date the DA Form 4237-R is prepared for the EPW or CI as indicated in block 41 of the form. The data required in the entry includes basic personnel data, the notation "DA Form 4237-R," and the date the form was prepared (if other than the date of the strength report).

Entries for Return from Escape

These entries are made on the strength report prepared for the date the EPW or CI is returned to military control or when notice of return to military control is received. The data required in the entry includes basic personnel data, a statement that the detainee is returned, the hour and circumstances of return and whether surrendered or apprehended, and the place and date of return (if at a station other than the station where the escape was initiated). When reporting detainees as assigned from an escape status, take care to ensure that the person is not being reported as assigned by any other reporting organization.

Entries for Assigned from Another Organization

These entries are made on the strength report prepared for the EDCSA specified in orders directing the reassignment whether or not the individual has joined. The data required in the EDCSA entry includes basic personnel data, a statement that the EPW or CI is assigned, whether the EPW or CI has joined or the reason for not joining, the organization and station from which assigned, the authority, and the EDCSA when different from the date of the strength report. When an EPW or CI joins either before the EDCSA or after the EDCSA, two entries are made.
If the prisoner reports before the EDCSA, one entry is made on the day of arrival indicating that the prisoner is "gains - other (pending EDCSA)." The other entry will be made on the EDCSA as described above. If the prisoner has not arrived on the EDCSA, the detainee will be reported on that date as "accountable - other (not yet joined)," according to instructions above. Another entry is made on the day the prisoner arrives as described below under "entries made for assignment and arrival after EDCSA."
Entries for Erroneous Losses

A corrective entry that voids a previous disposition reporting a prisoner as an assigned loss, or otherwise represents a gain to the assigned strength of the reporting organization, is reported in "gains - other" of the strength section.

Entries for Losses

A prisoner is reported as a loss to the assigned strength of the reporting organization when reassigned to another reporting organization, when in an escape status, when repatriated or released, or as the result of an international transfer or death.

Entries Made by Losing Organization

A prisoner is reported as relieved from assigned according to valid orders providing relief from such assignment. The entries are made on the strength report prepared for the EDCSA specified in the orders whether or not the prisoner has departed. The data required in the EDCSA entry includes basic personnel data, a statement that the detainee is relieved from assigned, the organization and station to which reassigned, the authority, a statement that the detainee has departed or the reason for not departing, and the date of departure (if applicable) or reference to the strength report if the detainee departed before the EDCSA. If the prisoner departs before the EDCSA, an entry is also made on the strength report for the day the prisoner physically departs from the losing organization. Prisoners that depart after the EDCSA are accounted for in the "accountable - other" line of the strength section.

Entries for Escapes

These entries are made on the strength report prepared for the date the prisoner is determined to be in escape status. The data required in the entry includes basic personnel data and a statement that the prisoner is in escape status.

Entries for Repatriation

A prisoner is reported as repatriated when movement for repatriation is directed by competent authority. These entries are made on the strength report prepared for the day on which the prisoner departs.
The data required in the entry includes basic personnel data, the name of the country to which repatriated, and a statement of the reason and authority for repatriation.

Entries for International Transfer

A prisoner is reported as relieved from assigned according to a valid order providing relief from such assignment and directing movement for transfer to
another power. These entries are made on the strength report prepared for the day on which the detainee departs. The data required in the entry includes basic personnel data, the name of the country to which transferred, and the authority for transfer.

Entries for Death of a Prisoner

These entries are made on the strength report prepared for the day on which the death occurred or when notice of death is received. The data required includes basic personnel data, a brief description of the cause and place of death, and the date of death if other than the date of the strength report.

Entries for Erroneous Gains

A corrective entry that voids a previous disposition reporting a prisoner as a gain, or otherwise represents a loss to the assigned strength of the reporting organization, will be reported in the "losses - other" portion of the strength section.

Entries for Arrival After EDCSA

These entries are made on the strength report prepared for the day the prisoner physically joins the reporting organization. The data required in the entry includes basic personnel data, a statement that the prisoner has joined, reference to the strength report reporting the prisoner as assigned, and the reason for delay in reporting.

Entries for Departure Before EDCSA

These entries are made on the strength report prepared for the day the prisoner physically departs from the losing organization. The data required in the entry includes basic personnel data, a statement that the status of the prisoner is changed from present to accountable (in transit) pending EDCSA, the organization and station to which assigned, and the EDCSA.

Entries for Other Changes

Prisoners who depart after the EDCSA or arrive before the EDCSA are accounted for in the "accountable - other" line of the strength section. Entries are made on the strength report prepared for the date of arrival of the prisoner and for the date of attaching the prisoner who departs after the EDCSA.
The data required includes basic personnel data, a statement that the prisoner is attached, and the purpose of attachment.

Entries for Sickness or Injury

A prisoner will be reported as sick or injured only if admitted to a hospital located outside of the confines of the internment facility being operated by the reporting organization. These entries are made on strength reports prepared by organizations other than hospitals.
The data required includes basic personnel data; a statement indicating sickness or injury; if an injury, a brief description and date of injury; the name and location of the hospital; and the date of departure from the reporting organization.

Entries for Prisoner Riots and Riot Casualties

An entry is made for all prisoner riots which occur within any part of the facility. The entry includes the date, place or places, the hour the riot began, the hour the riot ceased, separate totals of the number of officer and enlisted prisoners killed or missing, the total number of rioters confined for disciplinary action (if any), and any other data of interest.

Entries for Correcting a Strength Report

A corrective entry will be made when an entry reported on a previous strength report is determined to be in error. The entry is made on the strength report prepared for the date on which it is determined that a prior entry was erroneous. Comments correcting an error, revoking a previous comment, or amending a disposition of a previous day will be preceded by the word "CORRECTION" in capital letters followed by the date of the strength report on which the erroneous text appeared. Basic personnel data and the part of the text that was erroneous will then be stated, followed by the correct information prefaced by the words "SHOULD BE" in capital letters. The incorrect and the corrected item or items will be underlined. Corrective strength report comments will be grouped as follows:

o Corrective comments that void previous texts reporting detainees as losses to assigned strength will be entered under the caption "GAINS."

o Corrective comments that void previous texts reporting detainees as gains to assigned strength will be entered under the caption "LOSSES."

o Corrective comments on miscellaneous changes of personnel data will be entered under the caption "MISCELLANEOUS CHANGES."

o Corrective comments on the strength section of the previous strength report will be preceded by the caption "STRENGTH" in capital letters.
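The correction format described above can be sketched as a small helper. The function name, the date style, and the sample ISN text are hypothetical illustrations, not prescribed by the regulation:

```python
def correction_remark(report_date, erroneous_text, corrected_text):
    # Builds a corrective comment per the convention in the text:
    # the word "CORRECTION" and the date of the report containing the
    # error, the erroneous text, then "SHOULD BE" and the correct text.
    # (Underlining of the incorrect and corrected items is a manual step
    # on the paper form and is not modeled here.)
    return f"CORRECTION {report_date}: {erroneous_text} SHOULD BE {corrected_text}"

# Hypothetical example correcting a mistyped serial number:
print(correction_remark("15 MAR 81", "ISN US-000123", "ISN US-000132"))
```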
DISPOSITION OF STRENGTH REPORTS

The reporting organization forwards the original copy of all strength reports to the branch PWIC using airmail or equally expeditious means. This copy is used as the source document for changes on DA Form 4237-R and any other personnel records or data maintained by the branch PWIC on prisoners. Reports are submitted through normal command channels to the Personnel Contingency Cell, HQDA, when a branch PWIC is not assigned or operational.
The personnel officer or other officer designated at the branch PWIC will initial the copy to show that the entries on the form have been accurately transcribed and, when applicable, posted in the proper personnel records. This copy is a record of the official status of the internment facility and of each prisoner assigned on any given date. It is used as a source document for changes to DA Form 4237-R. Reports are disposed of according to instructions issued by HQDA.

PART E - INTERNMENT OF EPW AND CI

The last three major conflicts that the US has been involved in resulted in more than 695,000 EPW and CI. Internment of EPW and CI is an important mission for MP on the AirLand Battlefield. More than 425,000 German, Italian, and Japanese EPW were interned in CONUS during World War II. There were more than 140 major German prisoner of war camps located in 40 states. More than 500 branch camps managed prisoner of war laborers across America. During the Korean War, more than 170,000 North Korean and Chinese Communist prisoners were held in the United Nations prisoner of war camps in the Republic of Korea. A slightly smaller number of prisoners of war and civilian internees, numbering more than 100,000, were transferred from US custody to the South Vietnamese for internment.

RESPONSIBILITIES

The Deputy Chief of Staff for Personnel (DCSPER) has the primary staff responsibility for the EPW and CI program. The DCSPER--

o develops and coordinates policies and programs on EPW and CI, including programs for reporting and investigating alleged violations of the law committed by or against EPW and CI.

o provides reports, coordination, technical advice, and staff assistance to the Office of the Secretary of Defense, the Joint Chiefs of Staff, the military departments, and other federal agencies.

o provides reports and coordination through the Department of State to the ICRC and the protecting power, if one is designated.
The Assistant Comptroller of the Army (Finance and Accounting) is responsible for the policies and procedures that govern pay, allowances, and personal funds of EPW and CI. The Commander, FORSCOM is responsible for--

o EPW camps and branch camps located in the US.
o training US forces in the proper administration and operation of EPW camps to include processing, accountability, internment, care, treatment, discipline, safeguarding, use, education, and repatriation.

o security matters connected with custody and use of EPW.
The MP Prisoner of War Command is responsible for accomplishing the CONUS EPW mission for the Commander, FORSCOM. EPW and CI battalion (camp) commanders are responsible for--

o employing assigned EPW (CONUS).

o operating, administering, and securing the camp (CONUS and TA).
EPW CAMPS

EPW camps may be established in the COMMZ of each theater of operations and in CONUS to receive, process, and intern EPW. US CI internment camps may be established in the theater where CI are ordered into confinement. CI cannot be evacuated out of the theater in which they were taken into custody. Administrative processing of EPW and CI is accomplished by the MP EPW and CI battalion (TOE 19-646L).

Location

The Commander, FORSCOM and TA commanders are responsible within their respective areas for the location of EPW and CI internment facilities. Details for construction and material requirements to establish EPW and CI facilities are found in TM 5-300, TM 5-301, TM 5-302, and TM 5-303. Support for construction of EPW and CI facilities may be provided upon request through the Engineering Command (TA) or the Corps of Engineers (CONUS). When military conditions permit, internment facilities will be marked with the letters EPW placed so as to be clearly visible from the air. Construction of EPW and CI camps will vary according to climate, anticipated permanency of the camp, number of camps to be established, availability of labor and materials, and the conditions under which the forces of the detaining power are billeted in the same area. Existing structures may be used if practicable to reduce construction and guard requirements. When possible, modification of existing facilities or construction should be accomplished by EPW and local sources of material under close supervision of qualified US construction personnel.

Minimum Construction Requirements

The minimum requirements for constructing EPW and CI facilities include--
o a double barbed wire or chain link fence around the perimeter of each enclosure.

o barbed wire top guards.

o a 12-foot clear zone, free of any vegetation, between the inner and outer fences.

o guard towers on the perimeter of each enclosure. Towers should be high enough to permit unobstructed observation of the enclosure. Guard towers may be placed outside the perimeter or in the free zone formed by the inner and outer fences. Towers must also be low enough to provide an adequate field of fire. Platforms on guard towers should be wide enough to mount crew-served weapons. Access to the tower platform should be retractable.

o enclosure lighting illuminating fences and walls. Protection against breakage for light bulbs and reflectors should be provided by wire mesh covers where needed. Lighting should not interfere with the vision of guards. Secondary emergency lighting and power should also be provided.

o roads constructed around the outer perimeter of enclosures to facilitate foot and vehicle patrols.
Communication, preferably telephone, should be established between towers, entrances to the enclosure, the enclosure operations (administrative) area, the battalion (camp) headquarters, and guard response forces. Response forces are located outside the enclosure. Enclosures should be separated as much as possible to reduce the ability of EPW and CI to communicate. Advantage should be taken of terrain features that isolate enclosures from each other. On level, flat terrain, distances up to one mile may be required between enclosures.

Facility Plan

An EPW or CI camp may consist of one or more enclosures. Each enclosure is subdivided into compounds (Figure 3-25). Facilities incorporated into each enclosure include--

o command post and administrative area.

o religious and education area.
Figure 3-25. Subdivisions of EPW and CI Camps.
o supply area.

o dispensary and infirmary for EPW and CI.

o latrine.

o bath house.

o barracks (living area).

o kitchen facilities (area).

o in-processing and records area.

o warehouse space.

o temporary holding facilities.

o supply distribution area.
An initial enclosure capable of holding 500 prisoners (Figure 3-26) requires 1,503,684 square feet (29 acres). This amounts to 4,920 feet of perimeter; 28 vehicle gates; 12 pedestrian gates; 31 40-foot utility poles; 76 GP medium tents; two GP large tents; 16,412 linear feet of interior fencing; and 4,223 linear feet of exterior fencing. For plans of a facility capable of holding 2,000 EPW or CI, see Figure 3-27. For plans of a facility capable of holding 4,000 EPW or CI, see Figure 3-28. For plans of a facility capable of holding 12,000 EPW or CI, see Figure 3-29. These plans reveal the magnitude and complexity of EPW and CI internment facilities. A 12,000 prisoner camp requires almost 264 acres of land (more than one grid square of a 1:50,000 scale map); has 204 vehicle gates; 261 personnel gates; 9.2 miles of paved roadway; over 214,000 linear feet of interior fencing; over 23,000 linear feet of exterior fencing; and 60 guard towers.

EPW BRANCH CAMPS

EPW branch camps are organized for 50 to 250 EPW or CI (TOE 19-543LA) or for 251 to 500 EPW or CI (TOE 19-543LB). Branch camps are established in response to specific requirements for EPW and CI labor at areas or locations removed beyond a reasonable daily travel distance from the nearest EPW or CI camp. Branch camps are located near or within the immediate vicinity of the supply or other facility being supported. Branch camps may not be established in theaters where EPW are evacuated to CONUS for internment, or in theaters where EPW and CI are transferred to HN or allied forces. The most common application of branch camps may be in CONUS and in theaters where EPW and CI are interned on a long-term basis by US forces. Branch camps do not accept
Figure 3-26. Enclosure for 500 Prisoners.
custody of EPW captured by units in the area. Branch camps do not have the resources or mission of processing EPW and CI. Branch camps are operated on an austere basis using existing facilities when possible. Otherwise, tents are used for shelter to permit rapid displacement to a new work site. Requirements for administrative and security personnel are minimal. EPW and CI assigned to branch camps are skilled in the work to be performed and are screened and selected on the basis of cooperative attitude and pro-US sympathy. Branch camps may be assigned to an EPW brigade (TOE 19-762L) or an EPW and CI battalion (TOE 19-646L). The organization of EPW and CI branch camp TOE 19-543LA and branch camp TOE 19-543LB is shown in Figure 3-10, page 3-68. Branch camps depend on the unit they are assigned to for--

o health, legal, religious, finance, and personnel and administrative services.

o supply support (class I through X).

o transportation of US personnel, equipment, and EPW and CI.

o maintenance of organic equipment.

o food service support.
Branch camp TOE 19-543LA depends on the EPW unit to which it is assigned for one platoon from an MP guard company for security at the branch camp site. Branch camp TOE 19-543LB requires two platoons from an MP guard company for security. The installation or facility commander using EPW or CI labor from a branch camp is responsible for--

o providing guards and technical supervision for work details. MP do not have resources to provide guards at work sites.
o providing logistical support (subsistence and transportation).
o providing medical and religious services.
o maintaining an "on-call" security alert force to respond to the branch camp or work site.
o controlling EPW and CI while they are on work details.
o providing material to construct and maintain the branch camp as specified by the parent EPW command and control headquarters (brigade or battalion).
Figure 3-27. Enclosure for 2,000 Prisoners.
Figure 3-28. Enclosure for 4,000 Prisoners.
Figure 3-29. Enclosure for 12,000 Prisoners.
GENERAL ADMINISTRATIVE POLICY

EPW and CI are interned in camps according to their nationality and language. EPW may not be separated from other prisoners belonging to the armed forces they were serving with at the time of their capture, except with their consent. Females will be separated from males and receive treatment as favorable as males. Officers are sheltered and messed in camps, or in enclosures or compounds of camps, separate from enlisted EPW. Officer EPW are provided quarters and facilities equal to their grade. CI may request compassionate internment of their dependent children who are at liberty without parental care in the occupied territory. Requests for compassionate internment of family members are normally approved when both parents or the only surviving parent is interned. The US is also responsible for financial support of dependents of CI who are at liberty in an occupied territory and are without adequate means of support, or are unable to earn a living. Regulations and other guidance concerning administration, employment, and compensation of EPW and CI are found in--

o AR 190-8.
o AR 190-57.
o AR 37-36.
o Joint Chiefs of Staff Publication 3 (C).
o Memorandums of Agreement or Understanding Between US Forces and HN and Allied Forces.
o Standardization Agreements (STANAGs).
Initial processing of EPW and CI (AR 190-8 and AR 190-57) is accomplished according to local SOP at a designated reception and processing station (EPW and CI battalion) before they are assigned to a camp for internment. General administrative principles for EPW and CI camps include the following:

o EPW and CI are used for internal administration and operation of camps as much as possible.
o Captured enemy supplies and equipment are used whenever possible.
o Camp commanders have authority to punish EPW and CI according to AR 190-8 and AR 190-57.
Figure 3-30. EPW and CI Branch Camp TOEs.
o Copies of the Geneva Conventions of 1949 written in a language that EPW and CI understand must be conspicuously posted in every camp where EPW and CI can read them.
o Regulations, orders, and notices relating to the conduct and activities of EPW and CI must be similarly posted in places where EPW and CI can read them.
To protect persons from acts of violence, bodily injury, and threats of reprisals at the hands of fellow prisoners, a copy of the following notice in the prisoner's language must be posted in every compound: "Detainees, despite faith or political belief, who fear that their lives are in danger or that they may suffer physical injury at the hands of other prisoners will immediately report the fact personally to any US Army officer of this camp without consulting the detainee's representative. From that time on, the camp commander will assure adequate protection to such detainees by segregation, transfer, or other means. Detainees who mistreat fellow detainees will be punished."

Administrative rights guaranteed EPW and CI under the Geneva Conventions include--

o protesting conditions of confinement.
o electing their own representatives.
o sending and receiving correspondence.
Representatives

Enlisted EPW may elect a representative in every camp except when an EPW officer is interned in the camp. Representatives are elected by secret ballot and serve for a term of six months. EPW are permitted to consult freely with their elected representative. In turn, their representative is allowed to represent them before--

o US military authorities.
o the protecting power.
o the ICRC.
o other relief or aid organizations authorized to represent EPW.
The senior EPW officer assigned to each camp is recognized as the EPW representative, unless incapacitated or incompetent (as determined by US authorities). In officer EPW camps, one or more advisors chosen by the officers assist the prisoner representative. In mixed camps (enlisted and officer EPW assigned), one or more advisors elected by and from the EPW assigned to the camp may assist the prisoner representative.
The US EPW and CI camp (battalion) commander approves prisoner representatives before they are allowed to perform their duties. If the camp (battalion) commander refuses to approve or dismisses an elected representative, a notice to that effect must be forwarded to HQDA, ODCSPER. Reasons for the refusal will be included. EPW are permitted to elect another representative.

Retained medical personnel and chaplains are not considered EPW and may not be represented by prisoner representatives. The senior EPW medical officer in each camp is responsible to US military authorities for the activities of retained medical personnel. EPW chaplains have direct access to camp (battalion) authorities.

Elected EPW representatives may appoint assistants. Assistants must be approved by the camp (battalion) commander. Elected and appointed representatives must have the same nationality and customs and speak the same language as the prisoners they represent. Groups of EPW interned in separate compounds or enclosures because of differences in nationality, language, customs, or ideology are permitted to have an elected representative.

EPW representatives are responsible for furthering the physical, spiritual, and intellectual well-being of the prisoners they represent. Representatives are not required to work if doing work makes their job more difficult. Representatives are given freedom of movement (consistent with security requirements) to accomplish their duties. These duties include inspecting labor detachments, receiving supplies, and communicating with US camp (battalion) authorities. Postal and telegraph facilities will be made available to prisoner representatives to communicate with US military authorities, protecting powers, ICRC and its delegates, mixed medical commissions, and other organizations authorized to assist EPW.

CI are elected by secret ballot to the internee committee at every CI camp (battalion) and branch camp.
The committee is allowed to represent the camp to the protecting powers, the ICRC or other authorized relief or aid organizations, and US military authorities. The internee committee consists of not less than two and not more than three members. Elections are held every six months or upon the existence of a vacancy. Committee representatives are eligible for reelection.

Correspondence

EPW and CI are allowed to send and receive letters and cards. There are no restrictions on the number or length of cards and letters received. Parcels cannot be sent by EPW and CI. If it is necessary to limit the number of
letters and cards EPW and CI send, the number will not be less than two letters and four cards monthly. This quota, if necessary, does not include the prisoner of war notification of address card (DA Form 2666-R); the capture card for prisoner of war (DA Form 2665-R); or the civilian internee notification of address card (DA Form 2678-R). EPW use DA Form 2668-R (Post Card), Figure 3-31, and DA Form 2667-R (Letter), Figure 3-32. CI use DA Form 2679-R (Letter), Figure 3-33, and DA Form 2680-R (Civilian Internee Post Card), Figure 3-34.

EPW and CI mail may be examined and read by the camp (battalion) commander. EPW and CI correspondence that contains obvious deviation from regulations is returned to the prisoner, uncensored. Obvious deviation from regulations includes letters or cards addressed to other than representatives of a protecting power or US military authority that--

o criticize or complain about any government official or agency.
o refer to events of capture.
o compare camps.
o contain quotations from books or other writings.
o contain numbers, cyphers, codes, music symbols, shorthand, marks, or signs other than those used for normal punctuation.
o contain military information on numbers of EPW and CI.
Letters and cards received by the camp commander that appear to comply with regulations are forwarded to a censorship center designated by the TA or FORSCOM commander. Prisoners may receive parcels. The parcels will be opened by a US officer at the camp in the presence of the addressee. When considered necessary, the camp commander may request that parcels be examined by censors.

Public Access

EPW and CI may not be photographed except in support of medical documentation and official identification. Interviews of EPW and CI by news media are not permitted. This includes still or motion pictures; telephone, radio, and television interviews; and mail material. Requests by the media for exceptions to this policy may be forwarded to HQDA for consideration.

Medical Care and Sanitation

EPW are furnished dental, surgical, and medical treatment as required. A medical examination is given to EPW and CI upon arrival at an internment
Figure 3-31. DA Form 2668-R (Post Card).
Figure 3-32. DA Form 2667-R (Letter).
Figure 3-32. (Continued).
Figure 3-33. DA Form 2679-R (Letter).
Figure 3-33. (Continued).
Figure 3-34. DA Form 2680-R (Civilian Internee Post Card).
camp. EPW and CI are also given a medical inspection once a month by a medical officer. Monthly medical inspections are conducted--

o to detect vermin infestation and communicable diseases, especially tuberculosis, malaria, and venereal diseases.
o to determine the general state of health, nutrition, and cleanliness of EPW and CI.
EPW and CI are also weighed. The results are posted to individual weight registers (DA Form 2664-R) maintained for each prisoner. EPW and CI are vaccinated against smallpox and inoculated against typhoid and paratyphoid fevers. Immunizations against other diseases are given as recommended by medical personnel. Vaccinations and inoculations are given as soon as possible after capture, or after evacuation to an EPW and CI battalion (camp).

EPW and CI are given a certificate upon request that shows illnesses, injuries, and treatment. Copies of certificates given to EPW and CI are forwarded to the branch PWIC. Medical records and forms used for treatment and hospitalization of EPW and CI are stamped "EPW," "RP," or "CI" at the top and bottom of each form. Medical and dental records accompany EPW and CI when they are transferred. EPW and CI are not hospitalized in the same wards as US or allied soldiers. The camp or hospital commander notifies the branch PWIC when EPW or CI are seriously ill because of injury or disease. Notification includes a brief diagnosis of the condition. Follow-up reports are submitted weekly until the prisoner is removed from the seriously ill list.

Hygiene and sanitation measures for EPW and CI are found in AR 40-5. EPW and CI battalion (camp) commanders conduct inspections to ensure compliance. Sanitary orders published in a language that EPW and CI understand are posted in each compound and explained to EPW and CI when they arrive. EPW and CI will have latrine facilities available day and night. Latrines for prisoners must conform to the sanitary rules governing latrines for US and allied soldiers. Females are provided separate, but equal, facilities.

Medical facilities are established by the EPW and CI battalion medical section (TOE 19-646L). Medical support for EPW and CI that is beyond the limited capability of the medical section is arranged through medical facilities in the area.
Other medical support for EPW and CI facilities that must be planned and coordinated includes--

o veterinary service.
o dental support.
o blood bank services.
o optometric and optical services.
o pharmaceutical services.
Death and Burial

The commander responsible for EPW and CI in US custody is notified immediately by a medical officer when a prisoner dies. The notification includes the prisoner's full name; ISN; date, place, and cause of death; and a statement that the prisoner's death was or was not the result of the prisoner's own misconduct. The commander responsible for custody of EPW and CI who die notifies the branch PWIC and provides the information received from medical personnel. The attending medical officer and camp (EPW and CI battalion) commander are responsible for preparing DA Form 2669-R (Certificate of Death). (See Figure 3-35.) DA Form 2669-R is reproduced locally on 8 1/2-inch by 11-inch paper. A copy for reproduction purposes is found in AR 190-8. DA Form 2669-R is prepared and distributed--

o to the national PWIC (original).
o to the branch PWIC (copy).
o to the Surgeon General (copy).
o to the EPW and CI personnel file (copy).
o to US civil authorities responsible for recording deaths if the death occurred in the United States.
The commander responsible for custody of EPW may appoint an officer to investigate and report--

o death or serious injury to prisoners that may have been caused by guards, another prisoner, or any other person.
o suicide and death resulting from unnatural or unknown causes.
A copy of the investigating officer's report is forwarded to HQDA (Office of the Deputy Chief of Staff for Personnel). Military and civilian law enforcement agencies may be notified when criminal conduct is suspected.
Figure 3-35. DA Form 2669-R (Certificate of Death).
EPW and CI are buried honorably in cemeteries established according to AR 638-30. Prisoners are buried according to the rites of their religion and customs of their military forces if possible. Prisoners are buried in separate graves unless unavoidable circumstances require mass graves. Graves registration services will record information on burials. A copy of DD Form 551 (Record of Interment), prepared by graves registration personnel, is forwarded to the branch PWIC. Prisoners may be cremated only because of imperative hygiene reasons, the prisoner's religion, or at the prisoner's request. The reason for cremation must be recorded on the death certificate (DA Form 2669-R). The personnel file of deceased prisoners is forwarded to the branch PWIC.

Repatriation of Sick and Wounded EPW

Sick and wounded EPW may be processed for repatriation or accommodation in a neutral country during hostilities. A mixed medical commission is established by HQDA to determine cases eligible for repatriation. Sick and wounded prisoners may not be repatriated against their will. The mixed medical commission is composed of three members. Two of the members, appointed by the ICRC and approved by the parties to the conflict, will be from a neutral country. When possible, one of the members is a physician and the other a surgeon. The third member of the mixed medical commission is a medical officer appointed by HQDA. One of the members appointed by the ICRC acts as chairman of the commission. The mixed medical commission acts upon applications for repatriation submitted by--

o camp, hospital, and retained medical personnel.
o elected prisoner representatives on behalf of EPW.
o authorized organizations that give assistance to EPW.
o individual EPW.
The commander responsible for EPW being considered by a mixed medical commission is notified where and when the commission will convene. The commission examines EPW before rendering a decision. The commander is responsible for preparing DA Form 2670-R (Mixed Medical Commission Certificate For EPW). DA Form 2670-R (Figure 3-36) is reproduced locally on 8 1/2-inch by 11-inch paper. A copy for reproduction purposes is located in AR 190-8. DA Form 2671-R (Certificate for Direct Repatriation for EPW) is submitted to HQDA, which is responsible for forwarding the recommendation to a mixed medical commission. (See Figure 3-37.) Mixed medical commissions may approve repatriation based on DA Form 2671-R without examining the EPW concerned.
Figure 3-36. DA Form 2670-R (Mixed Medical Commission Certificate for EPW).
Figure 3-37. DA Form 2671-R (Certificate for Direct Repatriation for EPW).
LESSON 3 PRACTICE EXERCISE

The following items will test your knowledge of the material covered in this lesson. There is only one correct answer for each item. When you have completed the exercise, check your answers with the answer key that follows:

1. If necessary, what mailing limit on letters and postcards can be placed on EPW/CIs per month?

A. Three letters and four postcards.
B. One letter and one postcard.
C. Two letters and four postcards.

2. Who is responsible for establishing permanent internment camps in an overseas theater of operation when EPW are evacuated to CONUS?

A. The theater commander.
B. The TAACOM MP brigade commander.
C. Permanent internment camps aren't normally established in-theater when EPW are evacuated to CONUS.

3. Who ensures that US captured EPW transferred to host nation forces receive treatment according to US policy and the Geneva Convention?

A. The theater Prisoner of War Information Center.
B. The Red Cross is responsible for how US captured EPW are treated in internment camps.
C. An EPW and CI branch camp liaison team.
D. A neutral third country appointed by the International Red Cross.
4. An enemy doctor was captured by US forces. The doctor was the 488th Afghanistan captive processed. Which of the following internment serial numbers should be used?

A. USAB-00488RP.
B. US9AF-00488EPW.
C. US-00488EPWRP.
D. 9USAB-00488RP.
5. How are civilian internees evacuated from an occupied territory to a permanent internment camp in the United States?

A. In the same manner that EPW are evacuated.
B. Evacuating civilian internees is prohibited.
C. Separate from EPW.
D. Civilian internees are evacuated to a neutral third country for permanent internment.
LESSON 3 PRACTICE EXERCISE ANSWER KEY AND FEEDBACK

Item — Correct Answer and Feedback

1. C. Two letters and four postcards. If necessary . . . (page 3-71, para 9).
2. C. Permanent internment camps aren't normally established in-theater when EPW are evacuated to CONUS. Permanent internment camps . . . (page 3-1, para 2).
3. C. An EPW and CI camp liaison team. The EPW and CI . . . (page 3-15, para 3).
4. B. US9AF-00488EPW. For example, the first . . . (page 3-22, para 5).
5. B. Evacuating civilian internees is prohibited. CI are processed in . . . (page 3-17, para 7).
In C programming, it is also possible to pass addresses as arguments to functions.
To accept these addresses in the function definition, we can use pointers, because pointers are used to store addresses. Let's take an example:
Example: Call by reference
#include <stdio.h>
void swap(int *n1, int *n2);

int main() {
    int num1 = 5, num2 = 10;

    // address of num1 and num2 is passed
    swap(&num1, &num2);

    printf("num1 = %d\n", num1);
    printf("num2 = %d", num2);
    return 0;
}

void swap(int* n1, int* n2) {
    int temp;
    temp = *n1;
    *n1 = *n2;
    *n2 = temp;
}
When you run the program, the output will be:
num1 = 10 num2 = 5
The addresses of num1 and num2 are passed to the swap() function using swap(&num1, &num2);.
Pointers n1 and n2 accept these arguments in the function definition.

void swap(int* n1, int* n2) {
    ...
}
When *n1 and *n2 are changed inside the swap() function, num1 and num2 inside the main() function are also changed.
Inside the swap() function, *n1 and *n2 are swapped. Hence, num1 and num2 are also swapped.
Notice that swap() is not returning anything; its return type is void.
This technique is known as call by reference in C programming.
Example 2: Passing Pointers to Functions
#include <stdio.h>
void addOne(int* ptr) {
    (*ptr)++; // adding 1 to *ptr
}

int main() {
    int* p, i = 10;
    p = &i;

    addOne(p);

    printf("%d", *p); // 11
    return 0;
}
Here, the value stored at p, *p, is 10 initially.
We then passed the pointer p to the addOne() function. The ptr pointer gets this address in the addOne() function.
Inside the function, we increased the value stored at ptr by 1 using (*ptr)++;. Since the ptr and p pointers both hold the same address, *p inside main() is also 11.
Hi, it's MamaTax here!
Thanks for asking the question.
Yes, you can correct your tax return online or send HM Revenue & Customs (HMRC) the pages of your paper tax return (mark them 'amendment'). The deadline for this is 12 months after the tax return deadline (eg 31 January).

If the period you refer to relates to a period more than 12 months ago, then you need to write to HMRC and:

- confirm the tax year you're correcting
- explain why you've paid too much tax
- include proof you've paid the tax in that year
- confirm how you want the refund

The deadline for writing to HMRC is 4 years after the end of the tax year you're correcting for such corrections. If you have any refunds, HMRC will let you know how much your refund is (including interest). If you owe more tax, HMRC will let you know how much tax you owe (including interest) and when to pay them.

You can use this address below to post your documents:

HM Revenue & Customs
Self Assessment
PO Box 4000
Cardiff
CF14 8HR

Hope this helps.
The idea is to categorise nodes into 3 piles on each recursion: the values equal to, smaller than, and larger than the head one. Then sort the small and large piles, and finally connect them together with the equal one accordingly.
def sortList(self, head):
    if head is None or head.next is None:
        return head
    pE = None  # nodes equal to head.val
    pS = None  # nodes smaller than head.val
    pL = None  # nodes larger than head.val
    pa = head
    while pa is not None:
        nxt = pa.next              # save before relinking
        if pa.val == head.val:
            pa.next, pE = pE, pa   # prepend onto the equal pile
        elif pa.val > head.val:
            pa.next, pL = pL, pa   # prepend onto the large pile
        else:
            pa.next, pS = pS, pa   # prepend onto the small pile
        pa = nxt
    pL = self.sortList(pL)
    pS = self.sortList(pS)
    head.next = pL                 # head is the tail of the equal pile
    if pS:
        pa = pS
        while pa.next:
            pa = pa.next
        pa.next = pE
    return pS or pE
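To sanity-check the approach locally, here is a tiny harness. ListNode, from_list, and to_list are my own scaffolding (LeetCode's judge supplies its own node class), and sort_list is the same 3-pile partition written as a free function:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def from_list(vals):
    head = None
    for v in reversed(vals):
        head = ListNode(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def sort_list(head):
    # Same 3-pile quicksort on a singly linked list.
    if head is None or head.next is None:
        return head
    pE = pS = pL = None
    pa = head
    while pa is not None:
        nxt = pa.next              # save before relinking
        if pa.val == head.val:
            pa.next, pE = pE, pa
        elif pa.val > head.val:
            pa.next, pL = pL, pa
        else:
            pa.next, pS = pS, pa
        pa = nxt
    head.next = sort_list(pL)      # head is the tail of the equal pile
    if pS is None:
        return pE
    pS = sort_list(pS)
    tail = pS
    while tail.next:
        tail = tail.next
    tail.next = pE
    return pS

print(to_list(sort_list(from_list([4, 2, 1, 3, 2]))))  # [1, 2, 2, 3, 4]
```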
In another water-cooler argument today, a couple of coworkers didn’t like my extension method example. One main problem is that it violates instance semantics, where you expect that a method call off an instance won’t work if the instance is null. However, extension methods break that convention, leading the developer to question every method call and wonder if it’s an extension method or not. For example, you can run into these types of scenarios:
string nullString = null;
bool isNull = nullString.IsNullOrEmpty();
In normal circumstances, the call to IsNullOrEmpty would throw a NullReferenceException. Since we’re using an extension method, we leave it up to the developer of the extension method to determine what to do with null references.
Since there’s no way to describe to the user of the API whether or not the extension method handles nulls, or how it handles null references, this can lead to quite a bit of confusion to clients of that API, or later, those maintaining code using extension methods.
In addition to problems with dealing with null references (which Elton pointed out, could be better handled with design-by-contract), some examples of extension methods online propose examples that show more than a whiff of the “Primitive Obsession” code smell:
-
-
Dealing with primitive obsession
In both of the examples above (Scott cites David’s example), an extension method is used to determine if a string is an email:
string email = txtEmailAddress.Text;

if (! email.IsValidEmailAddress())
{
    // oh noes!
}
It’s something I’ve done a hundred times, taking raw text from user input and performing some validation to make sure it’s the “right” kind of string I want. But where do you stop with validation? Do you assume all throughout the application that this string is the correct kind of string, or do you duplicate the validation?
An alternative approach is accept that classes are your friend, and create a small class to represent your “special” primitive. Convert back and forth at the boundaries between your system and customer-facing layers. Here’s the new Email class:
public class Email
{
    private readonly string _value;
    private static readonly Regex _regex = new Regex(@"^[\w\-\.]+@([\w\-]+\.)+[\w\-]{2,4}$");

    public Email(string value)
    {
        if (!_regex.IsMatch(value))
            throw new ArgumentException("Invalid email format.", "value");

        _value = value;
    }

    public string Value
    {
        get { return _value; }
    }

    public static implicit operator string(Email email)
    {
        return email.Value;
    }

    public static explicit operator Email(string value)
    {
        return new Email(value);
    }

    public static Email Parse(string email)
    {
        if (email == null)
            throw new ArgumentNullException("email");

        Email result = null;

        if (!TryParse(email, out result))
            throw new FormatException("Invalid email format.");

        return result;
    }

    public static bool TryParse(string email, out Email result)
    {
        if (!_regex.IsMatch(email))
        {
            result = null;
            return false;
        }

        result = new Email(email);
        return true;
    }
}
I do a few things to make it easy on developers to use an email class that can play well with strings as well as other use cases:
- Made Email immutable
- Defined conversion operators to and from string
- Added the Try-Parse pattern
The usage of the Email class closely resembles usage for other string-friendly types, such as DateTime:
string inputEmail = txtEmailAddress.Text;

Email email;
if (! Email.TryParse(inputEmail, out email))
{
    // oh noes!
}

txtEmailAddress.Text = email;
Now I can go back and forth from strings and my Email class, plus I provided a way to convert without throwing exceptions. This looks very similar to code dealing with textual date representations.
Yes, but
The final Email class takes more code to write than the original extension method. However, now that we have a single class that plays nice with primitives, additional Email behavior has a nice home. With a class in place, I can now model more expressive emails, such as ones that include names like “Ricky Bobby <ricky.bobby@rb.com>”. Once the home is created, behavior can start moving in. Otherwise, validation would be sprinkled throughout the system at each user boundary, such as importing data, GUIs, etc.
If you find yourself adding logic to primitives to the point of obsession, it’s a strong indicator you’re suffering from primitive obsession and a nice, small, specialized class can help eliminate a lot of the duplication primitive obsession tends to create. | http://lostechies.com/jimmybogard/2007/12/18/extension-methods-and-primitive-obsession/ | crawl-003 | refinedweb | 707 | 50.57 |
Cryptogram: AES Broken? 277
bcrowell writes "The latest CryptoGram reports that AES (Rijndael) and Serpent may have been broken. The good news is that when cryptographers say 'broken' they don't necessarily mean broken in a way that is practical to exploit right now. Still, maybe we need to assume that any given type of crypto is only temporary. All of cryptography depends on a small number of problems that are believed to be hard. And all bets are definitely off when quantum computers arrive on the scene. Maybe someday we'll look back fondly on the golden age of privacy."
Quantum computing for white hats (Score:3, Insightful)
Re:Quantum computing for white hats (Score:2)
Yes, you're missing something. (Score:2)
Quantum computing changes this balance. So your white hats won't be able to multiply a billion times faster even if the black hats can factor a billion times faster.
Kjella
Re:Quantum computing for white hats (Score:3, Informative)
Slightly different quantum computation will do though, quantum crypto allows transmission of entirely secure messages, that is an unbreakable system. It isn't guaranteed by the hardness of a couple of mathematical challenges but by the actual laws of physics themselves. Not only can a quantum crypto stream not be broken, but it can detect when somebody attempts to listen in. This has been demonstrated using both air and fibre as a transmission system (can't be arsed to google for a link but there should be plenty out there, it was textbook for our quantum computation course).
On the other hand, a quantum computer would break all the old cryptosystems quite easily (not sure about eliptic curves), however they are years away from being feasible and there are a lot of hard problems to solve first.
Re:Quantum computing for white hats (Score:2)
More complex algorithms for encryption do not necessarily mean more security. If factoring, for example, is only linearly hard in the number of digits of the large pseudoprime, then you could theoretically generate absolutely massive pseudoprimes, but at some point the method becomes useless.
The problem is that our notion of "hard" and "one-way" problems is all based on the concept of NP-completeness. This concept is broken by quantum computation. We would need to find new problems that are QC-complete, in other words, problems that are hard (exponential time and # qubits) even with quantum computation, and base new encryption methods around such problems.
I'm not up to date on the literature of computational complexity, but I'm fairly sure such problems should be possible to find, a class of harder problems than NP-complete problems. But since functional QC is so far off, this is not really a practical issue yet, but I'm sure it's of theoretical interest to many.
Re:Quantum computing for white hats (Score:2)
The theories that posit QC as the solution to P=NP tend to involve poorly-defined "oracle machines" based on QC hand-waving, rather than any actual well-defined algorithms. It is very much an open question whether QC has anything interesting to say about P=NP.
-Graham
Re:Quantum computing for white hats (Score:2)
However, you are not 100% correct either. Most practical experience does not suggest a polynomial time solution exists (with the exception of Shor's Algorithm and the quantum factoring algorithm - this does mean the problem is in BQP, the new class of bounded-error, quantum polynomial time). If you look up the modern definitions of P, it seems to refer only to deterministic sequential machines, and I don't think that includes quantum computers strictly speaking.
I guess this is a quibble, but a worthwhile one, since the whole point is that there is a class of problems that _seems_ to be made easier (i.e. polynomial) by quantum computing, though it's not clear that those same problems don't have non-quantum polynomial solutions, it certainly appears that way based on everything we know.
You seem to be correct though that there is no proof of anything about P=NP from QC (I just looked it up and I'll be damned if I could make any sense of the articles I found without some serious study).
Re:Quantum computing for white hats (Score:2)
According to the cryptographers' panel at RSA, quantum computing is much less of a threat than many assume. In the first place, quantum computing tends not to be effective against symmetric algorithms. Secondly, RSA turns out to be based on a problem that is very, very hard with conventional computing and very, very easy with quantum computing. It is not clear that all possible public-key algorithms are susceptible to attack using quantum techniques.
In other words don't get the idea that quantum computing immediately means the end of cryptography.
On the actual topic of AES I would not call this a 'break', in fact nothing less than breaking the cipher for real should count as a break. There are plenty of 'breaks' of DES but none of them are easier than brute force when applied in practice. What we have is a theoretical compromise that is way outside the capabilities of any current technology.
Or consider it this way, given the known problems with 3DES (limited block size, severe limitation on safe amount of ciphertext generation) I don't feel like sticking with 3DES as a result of the article.
Re:Quantum computing for white hats (Score:2)
Quantum computing appears to allow the user to cheat, by picking the correct n-bit value out of 2**n bits, for a class of problems that mostly look like the factoring problems that make RSA public key crypto work. This doesn't appear to make things faster for the white hats, because the white hats already knew what the correct bits were for data that was intended for them or data that they were trying to sign.
quantum computers (Score:2, Funny)
couldn't these be described as "weapons of mass decryption"? [visions of 'sneakers' all over again]
Quantum (Score:2, Interesting)
Is it really back to XORing our messages with random data known to both ends?
That sucks.
And the cry went up - make quantum computers illegal. Only terrorists want quantum computers...
Golden Age of Privacy (Score:2)
Quantum computing =/= no privacy (Score:3, Interesting)
That is untrue. Yes, when quantum computers arrive, they will decode encryptions done today in polynomial time.
But arrival of quantum computers does not mean an end to privacy. "Quantum Cryptography" invokes the fundamental laws of QM to guarantee that there's absolutely no way to decode a message thus encoded (without alerting the sender of a "wiretap"). It stands to reason that by the time Quantum Computers arrive bigtime on the scene, Quantum Cryptography will arrive as well.
The theories of the two ideas are well worked out, but the tech remains in its infancy.
Re:Quantum computing =/= no privacy (Score:5, Informative)
Quantum computing would break a range of encryption techniques, especially most public-key techniques, but nothing known today rules out new and more robust digital encryption technologies being developed that Quantum Computers could not break, and I imagine plenty of people are working on them.
Addendum (Score:4, Informative)
(See here) [ibm.com]
Re:Quantum computing =/= no privacy (Score:3, Insightful)
Re:Quantum computing =/= no privacy (Score:4, Insightful)
on computational intractability rather than a demonstrable computational impossibility will always be open to some future innovation rendering it trivial to crack. Elliptic curve crypto seems to have the best prospects for the future right now, and you can use it right now: El Gamal is implemented in GPG.
But to say that QC will render effective crypto a historical artifact is clearly mistaken. If it were true, it would imply that there are *no* hard problems any more, once QC techniques are employed. All that QC can do is compute functions over a finite field with effectively infinite parallelism. It's unfortunate that most crypto systems today rely upon functions over a finite field, but there are plenty of hard problems that are only valid over function spaces, for example.
Re:Quantum computing =/= no privacy (Score:2)
Re:Quantum computing =/= no privacy (Score:2, Funny)
I have a Quantum hard drive, but I didn't know they were getting in the PC business.
Hmmm... now that I think about it, I thought they got bought out by Maxtor. I think you're just bluffing about "Quantum computers" and this power they will supposedly have.
Re:Quantum computing =/= no privacy (Score:2)
No - Mix + Mash ciphers aren't automagically immune, see quote from Schneier:
"While quantum computation can make problems such as factoring and discrete logarithms (which public-key cryptography are based on) easy, they can only reduce the complexity of any arbitrary computation by a square root. So, for example, if a 128-bit key length was long enough pre quantum computation, then a 256-bit key will be long enough post quantum computation."
-- B.Schneier, 10th Oct 1998 posting to soc.history.what-if USENET group
E.g. 3DES will be pretty much toast, as will AES128.
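Schneier's square-root rule quoted above amounts to halving a cipher's effective key length against a quantum brute-force search. A quick sketch of the arithmetic (the `effective_bits` helper is purely illustrative, not from any library):

```python
# Grover-style quantum search reduces a brute-force key search from 2^k
# to roughly 2^(k/2) operations, so effective strength in bits is halved
# against a quantum attacker -- which is why 3DES (~112-bit strength) and
# AES-128 look weak post-quantum while AES-256 stays comfortable.

def effective_bits(key_bits: int, quantum: bool) -> int:
    """Brute-force work factor, expressed in bits."""
    return key_bits // 2 if quantum else key_bits

for name, k in [("DES", 56), ("3DES", 112), ("AES-128", 128), ("AES-256", 256)]:
    print(f"{name:8s} classical: 2^{effective_bits(k, False):<3d} "
          f"quantum: 2^{effective_bits(k, True)}")
```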
The end of privacy (Score:5, Insightful)
That is quite a funny statement. 99% of all email is being sent in clear text, often passing through gateways which have permanent wiretaps installed. Phone tapping is at an all time high in the west and there are cameras on nearly every street corner around where I live.
Privacy.... I had a lot more privacy 20 years ago, that is for certain.
Re:The end of privacy (Score:2, Insightful)
Re:The end of privacy (Score:2, Interesting)
I doubt that. 20 years ago, your neighbour, your baker and your butcher knew more about you than any mass e-mail marketing company does nowadays. The only difference is that they didn't send you spam, but for sure your butcher knew that you didn't know the difference between a normal and an excellent steak, and sold you the first one for the price of the second one. So you were f*cked even then, only you didn't know it.
In order to provide some on-topic content also: I thought all (public-key) encryption was based on one "hard to solve" problem only, namely the "factoring into primes" problem -- are there any problems that I missed?
Re:The end of privacy (Score:2)
Re:The end of privacy (Score:2)
I doubt that. 20 years ago, your neighbour, your baker and your butcher knew more about you than any mass e-mail marketing company does nowadays.
Uhh... sorry to be literal, but 20 years ago I didn't know my butcher personally and I still don't. I mostly buy meat at the supermarket. I think it was more like 60 years ago that everyone bought meat from the local butcher.
On the other hand, my credit card issuer knows far more about me than any e-mail marketer. They know that I play golf once a week, how much I spend in the grocery store, when and where I go on vacation, etc. All the average e-mail marketer (thinks they) know about me is that I like rape pr0n.
-a
Re:The end of privacy (Score:2)
are there any problems that I missed?
RSA is based upon the Integer Factorization Problem (IFP) (*).
Elgamal / DH are based upon the Discrete Log Problem (*)
Elliptic Curve Cryptography is based upon, erm, Elliptic Curves.
(*) Note: there are no proofs that RSA or DH/Elgamal are actually as hard as the underlying IFP or DLP - e.g. they could be broken even if factoring is "hard".
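To make the parent's list concrete, here is a toy sketch of the asymmetry behind the IFP: multiplying two primes is one operation, while recovering them by trial division takes work that grows with the size of the factors. The numbers here are deliberately tiny; real RSA moduli are hundreds of digits.

```python
# Easy direction: multiply two primes. Hard direction: recover them.
# Trial division is the naive attack; real factoring algorithms are
# smarter but still superpolynomial classically, which is what RSA
# relies on (and what Shor's algorithm undermines).

def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

p, q = 1_000_003, 1_000_033     # both prime
n = p * q                       # one multiplication to build
assert trial_factor(n) == p     # ~one million divisions to undo
```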
quantum crypto (Score:2)
We cannot assume that either side of the crypto equation will remain dormant, using only today's technologies. The next Bruce Schneier [amazon.com] will happen along (or maybe the man himself) and we'll be dealing with the golden age of quantum cryptography.
gross oversimplification (Score:3, Funny)
Uhm. emm. EZ?
:)
Re:gross oversimplification (Score:2)
Re:gross oversimplification (Score:2)
The fact that my and your sense of humour did not intersect.
:) To me, it seemed that the clip was like straight from the Dilbert mission statement generator [dilbert.com]. Anyway, the description is very good, but to one like me, who does not use math terms actively, it takes some time to understand what it means in practice and how to use the information. And at first sight it seems like the recipe for an Energy Bolt spell. But then again, I am a mathematical moron :)
Nice article... (Score:2, Funny)
...I love the first line:
AES may have been broken. Serpent, too. Or maybe not. In either case, there's no need to panic. Yet. But there might be soon. Maybe.
Lovely summary, guys
:-)
Quantum Computing and Privacy (Score:4, Insightful)
The focus of international intelligence gathering would shift radically back to human intelligence (which is already happening for other reasons) and the new basis for security would become access control through discontinuity: if your network is not connected to your neighbor's, then he can't get access to it regardless of his technical sophistication.
The days of the NSA Sneaker-Net would return (picture NSA computer geeks running from one terminal to another with DLTs in order to keep the systems in communication, such that data could only flow in one direction).
Disclaimer: IANAF - I Am Not A Futurist
--CTH
Re:Quantum Computing and Privacy (Score:5, Insightful)
How would this technology work against one-time pads? Besides, historically technologies have always tended to balance. Someone makes a better tank, then someone makes a better tank-killer, then the cycle repeats. If today's sophisticated encryption can in the future be defeated with cheap devices, then the crypto that this future society considers sophisticated would be well beyond ours. Consider the relative computational power of Bletchley Park and the sophistication of Enigma in the early 40s versus the power and sophistication of a 21st Century desktop PC.
International politics would be forever changed.
Not really. It would simply switch from broadcast and ciphers to the diplomatic bag and codes - which is how it worked for centuries. Complexity in international affairs is nothing new.
Re:Quantum Computing and Privacy (Score:2)
Where of course, Numbers Stations [ibmpcug.co.uk] come in.
For all the advances in asymmetric cryptography, Numbers Stations / OTP have remained the system of choice for many organizations. This says something about asymmetric cryptography: either that it isn't trusted, that it's impractical for espionage, or something else...
Re:Quantum Computing and Privacy (Score:2)
Someone makes a better tank, then someone makes a better tank-killer, then the cycle repeats.
Very true, but it bears pointing out that the direction of the advantage seems to be different.
In the battle between warhead and armor, the warhead tends to retain the lead most all of the time, because it's basically easier to blow holes in something than to withstand massive force.
In the battle between cipher and cryptanalyst, the ciphers have tended to retain the lead, which seems to imply that creating an algorithm that munges data in complex ways is basically easier than unraveling said munging. This is *not* to say that good cipher design is easy or that anyone can do it; it's fiendishly difficult, because the attackers are fiendishly clever. Still, over the last 50 years or so, history shows us that the ciphers have tended to stay ahead of the cryptanalysts. DES is a shining example, given that it has been the #1 target for 30 years and has stood essentially undamaged.
Re:Quantum Computing and Privacy (Score:2)
So yes, I agree that DES is the granddaddy of Feistel network ciphers. Few of the cryptanalytic attacks work without ungodly amounts of chosen plaintext or artificially reduced round counts. But code breaks have generally occurred within months or years of implementation, not decades. As Gwido Langer, the chief of Poland's Biuro Szyfrow, said about breaking the German Enigma (when the British were unable to) "You don't have the same motivation as we do." Until after World War II, most code systems were broken during the same wars they were supposed to be protecting, and for that same reason.
The wartime and/or government codebreakers also have one more advantage: they don't typically announce their breaks to their enemies du jour. The recently released Venona papers show how Soviet spies who were given (faulty) one-time pads in 1942-1946 had them broken between 1948 and 1980.
Yes, it's a constant struggle. Yes, DES looks pretty good. But I wouldn't want to trust ALL of the national eggs to any single one of the currently commercially-available baskets.
Re:Quantum Computing and Privacy (Score:2)
It has been known for some time that there are many weak keys in DES.
Depending on your definition of "many", this is true. The complementation property of DES is a small weakness as well. These issues reduce the strength of DES by a minuscule amount.
Near this time, I recall seeing a paper claiming to reduce DES to a 2^48 problem, but I'm unable to find a citation tonight.
You're probably thinking of Matsui's Linear Cryptanalysis.
In 1998, Wiener's machine was built by Paul Kocher and the EFF for about $250,000, and it breaks DES keys in about three days, on average.
The DES key size was always too small, but that was what the NSA wanted. Remember that the original cipher (Lucifer) had a 128-bit key. 3DES addresses this problem handily, if not elegantly.
Also, if you'll permit me to pick nits, Deep Crack recovers a key in about 5 days, on average.
Overall, though, all of the concerted effort focused on DES has reduced its effective key size very little. That effective key size was too small to begin with, but 3DES has an effective key size that is adequate for a few years yet, particularly since linear and differential attacks cannot be used to speed up the "meet-in-the-middle" attack.
This is a pretty impressive record, in my opinion.
Re:Quantum Computing and Privacy (Score:2)
Yes, that is the tradition. But the mind and effect of man is finite. Eventually we will end up with "The Answer" -- and no further cycle.
Only if there is a "The Answer". It's certainly not clear that there will be such an answer in the case of either the warhead/armor evolution or the cryptographer/cryptanalyst evolution, mainly because what often happens to allow one side to take the lead over the other is a changing of the rules.
To use some examples from the armor/warhead debate, consider that the debate started as armor vs. blade, progressed to armor vs. projectile, then to armor vs. explosive warhead, then to armor vs. shaped charge, then to armor vs. ultradense projectile backed by shaped charge. Further consider that armor changed radically a few times along the way as well, changing materials, thickness and composition (especially lately, with highly-engineered composites). Not only that but armor has even acquired the ability to actively "fight back" -- reactive armor. In tanks it takes the form of explosives attached to the tank's armor plate that explode to slow penetrators and distort shaped charges. In naval warfare, you can even view anti-missile defensive systems as part of the "armor". A set of Aegis-equipped ships with high-speed data links and layered missile defensive systems creates a sort of a "virtual" armor that encloses all of the ships. Now consider how far removed that is from a bronze sword hitting a boiled leather cuirass, and tell me that man's imagination is limited.
Things like quantum computing are based on the fundamental physics of the universe. While I can't say if they are, or are not, the end-game of the cycle, there surely isn't much left to work with.
Which does not mean that quantum computers will be able to solve all problems. In fact, there are already plenty of problems known that quantum computers will not be able to solve, and there is only a tiny set of problems for which it's clear that quantum computers will be any help at all.
Plus, QM does not give us a "fundamental" understanding of the universe; it has numerous well-known flaws (mainly in terms of what it does not explain), and who knows what else we may be able to do given a deeper understanding of How Things Work.
AC is Troll or Clueless on OTP vs QC (Score:2)
In normal public/private hybrid systems, you use the public-key algorithm to encrypt a random secret session key, and then use the session key with a symmetric-key algorithm to encrypt the message. For some popular categories of factoring-based public-key algorithms, a hypothetical QC can instantly factor the key, and you can do the polynomial-time validation to show that the decryption key you've pulled out of the quanta matches the public encryption key. (OTPs don't let you do that.) You can then use the session key to crank the symmetric-key algorithm. The lead article that this discussion is about deals with weaknesses in the new symmetric-key algorithms that all of us were hoping we could use to replace Triple-DES, which appears to be very strong but is slow and clumsy (and neither 3DES nor AES appears to be attackable using QCs.)
Since widespread use of OTPs means a return to lots of couriers with briefcases attached to their arms, I suppose there are some non-mathematical ways to use QC to attack OTPs - hit the courier on the head with the QC, and then use the liquid helium from the qc to help shatter the handcuff chain, or offer the courier a Quantum Computer as bribe, or whatever
:-)
Re:Quantum Computing and Privacy (Score:2)
Do not fear the Quantum Age (Score:2)
Quantum computing is a *good* thing.
Strictly Speaking (Score:2, Insightful)
This is not true; the "One Time Pad" does not rely on a difficult problem like factoring for its basis.
And all bets are definitely off when quantum computers arrive on the scene. Maybe someday we'll look back fondly on the golden age of privacy.
OTP is unbreakable, and so "the golden age of privacy" will not end because of quantum computers.
Now legislation ending the golden age of privacy is another matter entirely.
Re:Strictly Speaking (Score:2)
Only if your pad is truly random. There's a scene in Cryptonomicon in which they realize the vicar's wife is looking at the letters as she draws them out of the tombola used to randomize; being a native English speaker she is subconsciously biased to prefer certain letters over others, and this is enough to open a chink in the armor.
Re:Strictly Speaking (Score:2)
It would be a little crazy to say "OTP, when it is deployed improperly, cannot be broken", now wouldn't it?
Re:Strictly Speaking (Score:2)
The big problem is that once you've encrypted something with an OTP, the security (and secrecy) of the OTP is *everything*. If anyone gets the OTP, your encryption is done for.
So, managing the OTPs becomes the biggest challenge in using them. First, you have to have an OTP about the same size as the file you're encrypting, to ensure that no statistical games can be played to re-build the key, and you have to have a separate OTP for every message you encrypt. Also, getting an OTP to someone else you want to encrypt a message to is not an easy matter. You have to be sure that no one else can see the transaction that shares the OTP, since that would immediately destroy the security of the system.
Compare this to any symmetric-key system: Yeah, you've also got a key that's central to the cipher. But, the key does not need to be approximately the same size as the file encrypted (as is the case with OTPs), which, for big files, is a huge deal.
Basically, there's a reason we like symmetric-key algorithms, and it's mostly to do with usability. If an encryption system is such a pain in the ass that no one uses it, then its impact in the real world will be zero.
Re:Strictly Speaking (Score:2)
I totally agree with you about OTP being a pain in the ass to manage, but as far as its impact in the real world, you could not be more wrong.
Numbers Stations [ibmpcug.co.uk] have relied on OTP for decades, and continue to do so till today.
Like anything, it depends how much you want to protect your communications. If OTP is going to save your life, as in espionage, it becomes much less of a pain in the ass. If you want to encrypt your daily email with the 20 people in your Mozilla address book, then things get very messy very quickly, and of course, you can forget stuff like ecommerce, which instantly becomes "unwieldy", to put it mildly.
Re:Strictly Speaking (Score:2, Informative)
OTP is not unbreakable.
While the algorithm itself isn't breakable, the strength of an OTP-based solution will still depend on several weak points, like the random number source.
There's plenty of room for trying to attack it, using statistical analysis and estimates to try to be able to break it.
Re:Strictly Speaking (Score:2)
OTP *is* unbreakable. [std.com]
This is a well-established fact. Like I said elsewhere in this thread, it's clear that I mean "when it is implemented correctly", as it would make absolutely no sense to imply "OTP is unbreakable when it is implemented poorly".
Well... yeah! (Score:2)
Well that's a serious problem if you ever, ever thought cryptography had any sort of permanence!
For one thing, an encrypted message is of no use to the receiver if they can't DE-crypt it, *poof* crypto is not permanent.
I'd recommend reading "The Code Book" by Simon Singh, as the first two-thirds of the book are a history lesson that demonstrates to me how cryptography endangers the lives/way of life of those who rely on it to protect themselves (in particular, Mary Queen of Scots and Enigma).
Old data is the problem (Score:5, Insightful)
Imagine some black hat just archived all encrypted data he could get (bank transactions, private conversations, you name it) then decrypts them in 10 years when he can buy his brand new quantum computer. All this old data may prove very valuable for him.
Perhaps very sensitive data shouldn't even transit on the net because you can't tell if it'll be decryptable in the future.
So use one-time pads (Score:2, Insightful)
Once you have the list of numbers, get the list of words and phrases to encode. Put one random number next to each word or phrase (watch for duplicate codes here!)
Put the pad on a cd, send it to whoever you want to communicate with. Doing this last part is the only large potential insecurity, plus it's inefficient. But the one time pad is theoretically unbreakable.
Re:So use one-time pads (Score:2, Informative)
Here it's fitting to note the words of Steve Bellovin:
In operation, there are many 'gotchas' to watch out for, never reuse a pad for example.
Google for 'Venona' and 'one time pad' for a good example of even the experts (KGB et al) getting one time pads wrong.
That's the wrong way to use them. (Score:2, Informative)
If you number individual words and phrases, then you can only use each word or phrase once, otherwise it's not a one-time pad anymore. Think about it... how long would it take a cryptanalyst to figure out the code for "the" or "you"?
The pad should simply be a chunk of random bits, and both sides need to keep track of which bits have been used. Then encrypt your messages by xoring them with an unused stretch of bits.
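The scheme just described fits in a few lines. A minimal sketch, where `xor_bytes` is an illustrative helper rather than any library function:

```python
# One-time pad over bytes: the pad is truly random, at least as long as
# the message, and never reused. Encryption and decryption are the same
# XOR operation, since x ^ p ^ p == x.
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    if len(pad) < len(data):
        raise ValueError("pad exhausted -- never reuse pad material")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"Attack is tomorrow"
pad = os.urandom(len(message))        # fresh random pad, used once
ciphertext = xor_bytes(message, pad)
assert xor_bytes(ciphertext, pad) == message   # XOR is its own inverse
```

Both sides must also track which stretch of the pad has been consumed, as the parent notes; that bookkeeping is omitted here.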
'the' or 'you' (Score:2)
I used one time pads in the army. You wouldn't use one to transmit War and Peace. But you would use it to send "Attack is Tomorrow, sell IBM". Or to send "Name of agent in NSA is CowboyNeal."
Those would be encoded as the phrases "attack is tomorrow", "sell IBM", "name of agent","in NSA". The word "is" could be encoded, or dropped (the sentence would be parseable without it.) Only "CowboyNeal" might have to be encoded letter by letter. Or it could be encoded as "cowboy"+"n"+"e"+"a"+"l".
Generally, using a one time pad to encode letter by letter is a bad idea. Done only when there is no alternative.
Re:'the' or 'you' (Score:3, Insightful)
Re:'the' or 'you' (Score:2)
I would say that he is correct: in practice, you would want to drop unnecessary or redundant info out of the message. Since OTPs rely so heavily on securely sharing the pad, you want to maximize the use you can get out of the pad you have without re-use. This means dropping redundant words. In common computer practice, we'd just zip the damn thing before sending it, hopefully greatly increasing the entropy (and decreasing the length) of the message before even bothering to encrypt it, but that's a whole other topic for discussion.
Re:'the' or 'you' (Score:2)
Re:'the' or 'you' (Score:2)
And as for XOR: you're right. However, it's almost always used because you need a reversible bitwise function. I can only think of four, two of which are trivial, leaving only XOR and (not XOR) as your possibilities. If you do operations on chunks larger than bits, you have more options, though.
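The count of four can be checked by brute force over all 16 possible two-input bit functions; a small sketch:

```python
# Enumerate all 16 truth tables f(b, k) over a data bit b and key bit k,
# keeping those invertible in b for every fixed k, i.e. f(0, k) != f(1, k).
# The four survivors are identity and NOT (which ignore k, hence
# "trivial"), plus XOR and XNOR.
from itertools import product

reversible = []
for truth in product([0, 1], repeat=4):
    f = {(b, k): truth[2 * b + k] for b, k in product([0, 1], repeat=2)}
    if all(f[(0, k)] != f[(1, k)] for k in (0, 1)):
        reversible.append(truth)

print(len(reversible))  # -> 4
```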
Different implementations (Score:2)
The Army still uses one time pads that way.
Re:Different implementations (Score:2)
Re:Different implementations (Score:2)
Re:So use one-time pads (Score:2)
Re:So use one-time pads (Score:2)
Re:So use one-time pads (Score:2)
-l
MAYBE? (Score:2, Insightful)
If I'm not mistaken, this is rule #1 of cryptography. Doesn't really matter what algorithm you use or how secure everyone or anyone thinks it is, they're always able to be cracked. Which cryptosystem you use is more a measure of reasonable security -- do you want your messages secured for years, decades, etc., with an assumed increase of computing power?
What Schneier really meant to say... (Score:4, Interesting)
Seriously, though - any approach that manages to reduce the difficulty of cracking these algorithms by a factor of 2^100 is impressive, and Schneier at least simplifies it enough that us folks with very rusty number theory can appreciate the achievement.
His comment later in Cryptogram about his name appearing on a list of banned words is much, much scarier - looks like he's upset someone in the content censorship Gestapo. That same content filter would deny access to today's Slashdot front page - nasty.
Re:What Schneier really meant to say... (Score:2)
Recall that "breaking" an algorithm means finding a method of attack with a work factor less than 2^k where k is the key length in bits. "Breaking" in this context doesn't mean recovering plaintext of encrypted communications. Since they have demonstrated an attack with a work factor of 2^200, that means that 256-bit AES was "broken" but 128-bit AES was not.
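In those terms, the claimed attack can be compared against each AES key size directly; a trivial sketch of the arithmetic:

```python
# An attack "breaks" a cipher when its work factor is below the 2^k cost
# of brute force. A ~2^200 attack sits between the AES key sizes, which
# is why 256-bit AES counts as "broken" while 128-bit AES does not.
attack_bits = 200
for key_bits in (128, 192, 256):
    status = "broken" if attack_bits < key_bits else "not broken"
    print(f"AES-{key_bits}: attack 2^{attack_bits} vs brute force "
          f"2^{key_bits} -> {status}")
```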
One Time Pad != Encryption (Score:4, Insightful)
The typical idea about cryptography is to use a secure medium to provide the key, while using the insecure medium to send the data, because the insecure medium is much faster/better/easier to use. So I can meet you in person and get the key, or call you on the phone and verify your PGP (or GPG if you please) fingerprint (assuming you're not being wiretapped as well), and then use the Internet as a medium from then on.
The OTP "solution" would be to share a random sequence of 1s and 0s, then use those to decrypt the IRC conversation later; not really an option, as you'd "run out" of pad rather quickly. Oh, and as far as I know, quantum computing does not affect encryptions based on elliptic integrals (which by theorem can't be solved analytically, but I suppose there could be approximations).
Kjella
Re:One Time Pad != Encryption (Score:2)
You get a source of random data (there are plenty available) and prepare a hard drive or hard drives that contain that data. Hard drive capacities now being in the 200GB range, you can get quite a lot of data. You then duplicate the drives once and only once. A secure, trusted courier then takes the drives to the other location, where they are installed. You now have a good OTP system set up. Your encrypting/decrypting stations just need to be physically secure and keep track of what part of the pad has been used. When it's all used up, you destroy the drives.
Now I can't think of anything that needs this level of security today, but it's not so impractical that a company couldn't or wouldn't do it if there was a reason. Under this system you just have an encryption channel that will give you X total GB of data (half duplex) transfer before it has to be refreshed. This would give you the capacity to secure a company intranet or something, but it wouldn't be unreasonable to get a year's worth of small messages that require the utmost secrecy transferred this way.
Re:One Time Pad != Encryption (Score:2)
Because I only get to see my brother once a year in Cuba. And he has a problem carrying back CD-Rs of random pad material through customs.
verify your PGP (or GPG if you please) fingerprint (assuming you're not being wiretapped as well),
Passive eavesdropping (aka wiretapping) does not interfere with verifying a public-key fingerprint, so you can verify fingerprints of a public key in a public place.
OTP has other problems, beyond the typical key distribution problem. If a non-random source is used for generating the key material, or if the key pad is accidentally reused, then trouble strikes, like it did with Venona [nsa.gov].
OTP also lacks message integrity, so if an attacker could cut and paste blocks of encrypted ciphertext, Bob would not be able to detect the altered message if the decrypted text makes sense (deposit $1000 to account #1233335632 rather than the modified message of deposit $4950292.95 to #1233335632).
encryptions based on elliptic integrals (which by theorem can't be solved analytically, but I suppose there could be approximations).
Now what methods are you referring to here? Elliptic Curve Cryptography [certicom.ca] is normally used as a faster version of the Discrete Logarithm Problem [rsasecurity.com] (DLP): it is fast and easy to exponentiate (compute g^x), but much harder to calculate the discrete logarithm (find x such that g^x = h), the inverse operation.
So I would be interested in this method of using elliptic integrals.
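The DLP asymmetry described above shows up even in a toy group. A sketch, with modulus and generator chosen purely for illustration (real systems use groups of roughly 2^256 elements or more):

```python
# Easy direction: modular exponentiation via Python's 3-argument pow,
# which needs O(log x) multiplications. Hard direction: the discrete
# logarithm, done here by brute force in O(x) multiplications.

def discrete_log(g: int, h: int, p: int) -> int:
    """Smallest x with g^x == h (mod p), found by exhaustive search."""
    value = 1
    for x in range(p):
        if value == h:
            return x
        value = (value * g) % p
    raise ValueError("no solution")

p, g = 101, 2         # 2 generates the full multiplicative group mod 101
x = 73
h = pow(g, x, p)      # fast
assert discrete_log(g, h, p) == x   # slow
```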
Quantum computing changes the game of cryptography, but it does not end the struggle of cryptographer vs. cryptanalyst. AES, when used with a 256-bit key, is expected to withstand a brute-force key search using quantum computing within the near future (less than 10-20 years). Of course, quantum computing being a young field, there is a chance that a radical discovery may ruin our present best estimates for future capabilities.
Re:One Time Pad != Encryption (Score:2)
1 personal meeting yields a lifetime of secure communication.
Re:One Time Pad != Encryption (Score:3, Insightful)
The definitive text on cryptography, The Handbook of Applied Cryptography [uwaterloo.ca], defines the OTP as a type of encryption...I know this is Slashdot but I don't think your arbitrary definitions help here.
Sending a CD worth of random data via a secure channel in advance, and then sending an encrypted message with the knowledge that it will be unbreakable, is sometimes worth prior thought. Sure, it may not be useful for the masses who don't require this kind of security or don't know they're going to communicate in the future, but to claim that this cipher "isn't encryption" is bull.
As Bruce says, relax...for now. (Score:2)
So, ten years or more. Heck, at that point, shouldn't quantum computers be breaking this stuff anyhow?
The XSL attack (Score:2, Interesting)
All you "so is GPG broken?" put your pants back on.
Summary of attack:
XSL stands for "eXtended Sparse Linearization", applied to the basic operations in Rijndael and Serpent. The reason this attack works is that the substitution layer of Rijndael/AES and Serpent can be expressed very neatly in the same domain as the linear layers.
Now when I say 'neatly' I mean 'it would be possible'; no one's shown us this monster set of equations relating the (128+128/192/256)-bit inputs to the 128-bit outputs. The Rijndael/AES and Serpent ciphers may be what we call "overdefined".
Think back to high school, when you worked with systems of linear equations. With N-1 linearly independent equations and N unknowns, you had an infinite number of possible solutions. With N equations and N unknowns, you had one solution. With N+1 equations and N unknowns, an overdefined system, you were in a funny place.
The authors suggest Rijndael/AES and Serpent are in the latter category, of a differential nature (and not the linear nature you learned in high school).
So what does this mean? The possibility HAS NOT BEEN EXCLUDED that this attack is possible. It really demonstrates nothing more than that it's at all possible, which is the best anyone's been able to do in the past 6 years.
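The high-school picture translates directly to GF(2), where these attack equations live. A brute-force sketch of how extra consistent equations pin down the unknowns (the example system is invented for illustration):

```python
# Count solutions of XOR (mod 2) linear systems in 3 unknowns by brute
# force. Fewer equations than unknowns leaves many solutions; adding
# consistent equations narrows them down -- which is why an "overdefined"
# system is good news for the attacker.
from itertools import product

def count_solutions(equations, n):
    """equations: list of (coeffs, rhs) meaning XOR of coeffs*x == rhs."""
    return sum(
        all(sum(c * v for c, v in zip(coeffs, x)) % 2 == rhs
            for coeffs, rhs in equations)
        for x in product([0, 1], repeat=n)
    )

base = [([1, 1, 0], 1), ([0, 1, 1], 0)]            # 2 eqns, 3 unknowns
extra = base + [([1, 0, 1], 1), ([1, 1, 1], 1)]    # 4 eqns, 3 unknowns
print(count_solutions(base, 3))    # -> 2 (underdetermined)
print(count_solutions(extra, 3))   # -> 1 (overdefined but consistent)
```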
JLC
See sci.crypt thread:.
Bruce Schneier on the subject (Score:2)
Some corrections to the article (Score:2)
All of cryptography depends on a small number of problems that are believed to be hard.
This is really only true of public key cryptography. Symmetric ciphers (like AES, Twofish, Serpent and DES) depend upon really complex applications of transposition and substitution, not on mathematical problems hoped to be NP-hard (when recast as a decision problem, blah, blah, blah).
The breakthrough in the mentioned papers is just a collection of techniques used to try to create usable mathematical models of these complex mishmashes of operations, thereby allowing them to be analyzed and attacked. This is fundamentally different from public key encryption algorithms which are very simple and trivially easy to model, but reduce to models of (we hope) intractable problems.
Whether or not it is possible to create a symmetric cipher of sufficient complexity and non-linearity that it is impossible to cryptanalyze is, of course, an open question that will probably never be fully answered. In the arms race between cipher designers and cryptanalysts the top cipher designers have always managed to stay substantially ahead of the top cryptanalysts, however. Witness the fact that DES has withstood 30 years of concerted attack, without a significant attack being found (other than the built-in small key size, of course).
Speaking of DES, what I'm most interesting in knowing right now is if these new attacks can be applied to it. 3DES is still the most important cipher in the real world and will be for some time.
And all bets are definitely off when quantum computers arrive on the scene.
Again, bcrowell doesn't know the difference between symmetric and asymmetric ciphers. No one has devised a way to use quantum computers to attack symmetric ciphers. That's not to say it won't happen, but, as I understand it (not much), modelling complex problems in a QC is very, very difficult. QCs are good at simple problems with a large solution space.
Maybe someday we'll look back fondly on the golden age of privacy.
If we lose all of our privacy it will be because we choose to, not because of any lack of technological tools.
Re:Some corrections to the article (Score:2)
"Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer"
Given that all modern crypto protocols use some variant of asymmetric crypto to transmit their symmetric key, a break in the asymmetry eventually breaks the symmetry too.
That being said, I'm personally suspicious of quantum computing. Naive students learning about lossless compression algorithms inevitably believe they can apply the same algorithm multiple times, each time shrinking the data further and further. This actually works for some algorithms, for one or two runs. Eventually Shannon takes hold; the system refuses to compress below the level of entropy in the data (indeed, it starts to expand).
I suspect there's an analogous limit on the quantum scale, past which entropic capacity will prevent computation at high qubit levels. I could be wrong, and I'll be suitably impressed when the hardware shows up on my doorstep. But computationally relevant quantum logic hasn't been shown yet, and we shouldn't act like "it's only a matter of time".
It'll be something to be excited about if quantum computing proves feasible. I'd hate to see that achievement spoiled by "We've known it was possible for a decade."
Of course, I'm just being mildly cranky. I ain't a fan of quantum crypto either -- entanglement and crypto don't work too well in the same conceptual universe. A little skepticism is useful
Yours Truly,
Dan Kaminsky
DoxPara Research
Re:Some corrections to the article (Score:2)
Given that all modern crypto protocols use some variant of asymmetric crypto to transmit their symmetric key, a break in the asymmetry eventually breaks the symmetry too.
Actually, this is not always true.
It is true of most protocols used on the net, but other areas of secure computing rely pretty much exclusively on symmetric algorithms. For example, my work is primarily in security for systems involving smart cards. Given that:
...symmetric crypto makes more sense. Similarly, the banking system relies almost entirely on 3DES, more because of inertia than any security reasoning but, still, PK is rare.
In many, many real-world security scenarios, key distribution can be solved without PK, and in many cases the result is simpler and therefore more secure (complexity being the enemy of security).
I suspect there's an analogous limit on the quantum scale, past which entropic capacity will prevent computation at high qubit levels.
I'm not qualified to comment, but the more far-out claims for QC, even from those who understand it, seem much too good to be true.
Of course, I'm just being mildly cranky. I ain't a fan of quantum crypto either -- entanglement and crypto don't work too well in the same conceptual universe. A little skepticism is useful
:-)
Certainly is
:-)
I see quantum crypto as an interesting idea, but in practice I don't think it offers very much over what can be accomplished with standard cryptography. Sure, it's nicer to have a theory that allows you to say "I *know* no one is tapping this line", but unless your attacker is a major world government, a good symmetric stream cipher with a decent automatic rekeying system, plus some strong message authentication codes, is perfectly adequate. Actually, it's almost certainly adequate even if your attacker *is* a major world governnment.
Where my cryptogram? (Score:3)
quantum computing & one time pads (Score:5, Informative)
I'm a Ph.D student at Harvard. I've done cryptography research in the past. So listen up people.
As for public key cryptography, most but not all public key cryptosystems are completely broken by quantum computers. Luckily we still have some public key cryptosystems that have not yet been broken using quantum algorithms. Elliptic curve discrete log is one such example.
Some Clarifications (Score:3, Informative)
- ElGamal is not an elliptic curve algorithm. Its a classical public key encryption system based on the discrete logarithm problem. Most DL problems can be refactored as elliptic curve problems though, so perhaps the poster was referring to a possible EC ElGamal. At any rate, I'm pretty sure GPG uses classical ElGamal.
- Symmetric ciphers are rarely broken by raw computational power (brute force). In fact, algorithms above about 80 bits are impossible to break by brute force due to the laws of physics.
- Quantum Cryptography today involves means of transmitting data at very low bitrates over a channel in which eavesdropping is impossible. QC is pretty much only useful for exchanging keys for symmetric algorihms (like AES, Twofish) securely, as the data rate is to slow to be practical for anything else.
- Assymetric Cryptography (public key) is based on several hard problems. The two that are used widely today:
* The prime factoring problem (RSA)
* The Discrete Logarithm problem (DSA, ElGamal)
One will become widely available soon:
* The elliptic curve problem
- Yes, OTP is still perfectly secure, but its still perfectly useless, as w/ OTP you just shift the security to two other areas; truely random pad generation, and secure distribution of the pads.
Not exactly... (Score:2)
Well, except for quantum crypto, which IIRC is actualy as secure as one time pad.
Rijndael variant which should foil this attack (Score:2)
In fact, the Rijndael designers were considering changing Rijndael's S-box during the AES process. NIST, however, for not entirely known reasons, did not allow the Rijndael designers to do this.
Now, as it turns out, the Rijndael designers have designed some other ciphers after Rijndael. These ciphers have different S-Boxes. In fact, the Rijndael designers revised ("tweaked" as they call it) each cipher to have a representation which is easy to implement in hardware; most of the die space used when implementing Rijndael on an ASIC is implementing the S-box.
The ciphers in question are Whirlpool [terra.com.br] and Anubis [terra.com.br] (Anubis uses an involutional S-box which might possibly make it weaker). In fact, my software project [maradns.org] does not use Rijndael proper as a psudo-random-number-generator; it uses a Rijndael variant with the "tweaked" Whirlpool S-box.
- Sam
P.S. I should also mention Khazad [terra.com.br], named after the bridge Gandalf fights balrog at, which uses Anubis' S-box.
Re:I'm no mathematician, (Score:2)
Re:I'm no mathematician, (Score:2)
This is correct. But if you can show that a massively parallel computer the size of the Earth would take billions of years to crack your code, then you can feel reasonably secure. Factorisation of large primes is a task that (probably) falls into this category -- it hasn't yet been shown to be easier.
If, on the other hand, you're talking about trying every message against the encrypted text, then that doesn't work either, because (a) it takes even longer than cracking the code, and (b) any message is potentially the plain text.
Factorisation of large primes is easy (Score:5, Funny)
11196101758632245023844192896470191898640653514
Now we have to factor it. We step up to the main terminal of our quantum computer beowulf cluster and type in the question, "Of which numbers is this the product?". Qubits flip, waveforms collapse, a cat in a box somewhere dies (of radiation poisoning, strangely, or charmingly), and out pops the statement:
11196101758632245023844192896470191898640653514
Definition of "Broken" (Score:2, Informative)
Re:I'm no mathematician, (Score:2)
Breaking an encryption method is a different thing. This is done by analyzing the method in order to try to find a better method for breaking encrypted documents than the best method previously known. If this is sucessful, it means that all of the encryption that has been done with that method so far is weaker than had been thought. Of course, the initial method is just trying all possible keys, and it may actually be somewhat foolish to think that there is any cryptosystem which doesn't have a better method; that would mean that all of the key contributes to strength, rather than any of it being weak but necessary to get the algorithm to work. And, from the perspective of what you should use, even the strongest attack so far (if it even works) on AES is harder than brute force on triple DES.
... but crypto is mathematical (Score:2)
The way you crack these algorithms is to throw mathematicians at them until some of them stick. Once you've done that, if there's anything left to bother with, then you can figure out how many processor cycles you'd need to throw at the problem to break it, and decide whether that's feasible.
If your conclusion is that it's not feasible to break, then you need to decide whether to use rubber-hose cryptanalysis to get the key, or steal the target's computer where they saved the unencrypted version, or look for the yellow sticky notes next to their desk, or put a camera in their ceiling to watch them type in their keys.
Re:Maybe? (Score:3, Informative)
Since the one-time pad [std.com], that's when. This has been mathematically proven, as well, as early as 1910 or 1920, if I remember well.
OTOH, it is true that a one-time pad is symmetric (sp?) crypto. modern crypto, such as AES, DES, Serpent and others mentioned in Cryptogram are assymetric, and, as such, more susceptible to cracking methods.
Re:Maybe? (Score:2, Informative)
AES and DES are symmetric. Serpent probably is too, inasmuch as it was an AES finalist.
Re:Maybe? (Score:3, Informative)
Re:Maybe? (Score:3, Interesting)
Since these are all symmetric, key distribution must either happen over another channel, or through a public key exchange method, all of which (AFAIK) use asymmetric algorithms. I don't know that I'd say that asymmetric algorithms are more susceptible, though. The biggest disadvantage to those algorithms is that they tend to require a lot more computing power, and one of the goals of the NIST AES contest was to provide an algorithm that would be implementable on really small platforms, such as embedded devices and smart cards. In fact, one of the best traits of Rijndael is that it seemed just as secure as the other entries while remaining very simple. It has been implemented on a few small 8-bit microcontrollers, and, when optimized, can take as little as 32 bytes of state (RAM).
Re:Maybe? (Score:2)
OTOH, it is true that a one-time pad is symmetric (sp?) crypto. modern crypto, such as AES, DES, Serpent and others mentioned in Cryptogram are assymetric, and, as such, more susceptible to cracking methods.
In theory it may appear that asymmetric ciphers are easier to break, because the attacker has more information -- the public key, which has a known mathematical relationship with the private key. However, the relationships between the keys are based on hard problems in math. In most cases, finding a way to determine the private key from the public key would constitute a major advance in mathematics, and one that has defeated mathematicians for centuries.
Symmetric algorithms, on the other hand, are not based on unsolved math problems, but rather on piling up carefully-chosen linear and non-linear operations until the result is too complicated to unravel. The papers referenced describe some new tools for modeling such complex structures, and claim that the new tools can unravel them far enough to produce a better-than brute force attack with very few known plaintext/ciphertext pairs.
To sum up: asymmetric crypto systems rely on the fact that we have no known method of solving the mathematical problems on which they are based. Symmetric crypto systems rely on the fact that we have no mathematical tools known to be capable of unraveling such complex sequences of simple operations.
In both cases, the ciphers can fall to new mathematics, and there's really no reason to think one is more likely to fall than the other.
Re:Maybe? (Score:2)
Beyond that, all crypto is considered breakable - the question is the amount of computational effort required. A "perfect" cypher will require each possible key to be checked and each with have an equal chance of being correct (and of being wrong). A "broken" cypher allows a considerable shortcut in the process of discovering what it has been used to encrypt. This shortcut may cut the time required in half, it might make it happen only 5% faster. The question to be asked is: is the person who wrote the paper stating an insecurity correct? How much of a risk is it?
According to CryptoGram, this attack is expected to take a large nominal amount of known plaintext, and hence might not be that risky after all. I personally like Blowfish better anyway
:)
Re:Quantum cryptography (Score:2)
If we get quantum computers and quantum cryptography, it will be the end of public key cryptography. We will need to exchange the initial keys face to face. The keys will not be used for encryption but rather integrity, which is a requirement for quantum cryptography to work. Obviously we will need to use unconditionally secure message-authentication-codes on our messages. Luckily the key needed for integrity does not grow linearly with the key size like keys needed for confidentiality.
This means that once we have exchanged the initial keys, we can just append new key material to our messages whenever we send quantum encrypted data. This will provide us with a key for integrity the next time we need to communicate.
To be very secure, I would not like to use a fixed key size for all future communication. I'd rather increase the key size by a few bits whenever a message is being exchanged. With a fixed key size, the chance of breaking the system will converge toward 100% as the number of attempts converges toward infinity. With a growing key size, the chance of every breaking the system will converge toward a small number, that is exponentially small as a function of the initial key size.
This still leaves the DoS problem. An attacker might just keep messing up the messages until we run out of key material for signing messages. I see no solution other than keeping a lot of key material ready for the future, and not keep trying too many times in a short period of time if we keep seeing false signatures.
Even though we have no public keys, we can still build up trust networks. If Alice has already exchanged keys with Bob and Charlie, Alice can do the key exchange for Bob and Charlie. Of course this will only work if Bob and Charlie trust Alice. But if Bob and Charlie has exchanged keys with different middlemen, they can once send messages signed with all their keys. Unless all middlemen are corrupt, Bob and Charlie will discover any invalid key.
Re:Quantum cryptography (Score:2)
You missed the point.
Adding bits does help. And breaking the system is not about just trying the possible combinations. You cannot decrypt the quantum encrypted data, your only chance of breaking the system is forging a message from one of the two parties during the communication. You cannot just try all possible keys, you have only one try. If you send an invalid message, it will be discovered.
And when talking about adding a few bits every time, I talk about a few bits larger key. All the bits in the new key are brand new random bits. So basically you will have to start all over again every time you try. And every time you try, your chance of breaking the system is smaller than last time you tried.
Re:This was completely predicable because... (Score:4, Informative)
Umm... you might be a little confused as to how AES was selected. AES selection criteria were public, as were discussions on the strengths (and weaknesses) of finalist algorithms. In addition, I know two of the AES conference program committee personally, and believe that had the NSA attempted any shinanigans, they would have been resisted and/or reported loudly.
These knee-jerk reactions to the NSA being evil really are counter-productive. Of course there are evil people in the US Government; there are evil people in every walk of life. I just don't think there are enough evil people in the NSA to conspire against the "good" people in the NSA.
You might be too young to remember, but back in the 70's there was a big commotion about the NSA modifying IBM's original S-Boxes. Many people at that time claimed very loudly that the NSA was inserting a back door into the algorithm. The NSA was pretty tight-lipped about why they made these changes; I think they still are, BTW. As it turns out, the original IBM S-Boxes were more succeptable to differential cryptanalysis than the ones the NSA reccomended for use with DES.
Remember that the NSA has a dual mandate. First, it is supposed to intercept, decode, and/or decrypt foreign elint intercepts. This is one of the reasons why they're one of the largest employers of foreign language specialists. Second, they are supposed to develop technologies to protect US national interests. The two missions sometimes conflict, but ever since Herb Lin at the National Academy of Sciences published his report on why it is in the US' national interest to allow widespread use of strong crypto for domestic applications, most (if not all) of the NSA types I've encountered have supported the development and use of strong crypto.
Of course, there are federal groups that like to sneak into people's homes and install keyboard sniffers. But, if that is going to be your law-enforcement surveilance technique of choice, why bother forcing bad crypto on the populous?
Re:Sounds like sour grapes... (Score:3, Insightful)
I was in contact with the Twofish team during their candidacy concerning some work I had done on an improved instruction sequencing. One member of the team told me they figured rinjy was the most elegant proposal and that they would be very happy to see it prevail. Sure, they wanted to win. But more than that, they wanted the security industry to adopt a solid foundation.
There are times when Bruce has struck me as shrill or biased, but this isn't one of those times. What he's dealing with here is the very deep theme about whether the world's cryptographic fraternity is capable of sensing the right turn more often than not. If the wise men can't lead us to paradise, who can?
I'd say that's an issue worth talking about.
Re:Well... If AES isn't sufficient... (Score:2)
I suppose some of you *cough*the moderators*cough* didn't get it? | http://slashdot.org/story/02/09/16/0653224/cryptogram-aes-broken | CC-MAIN-2014-52 | refinedweb | 9,985 | 60.45 |
Error #2099 - Help Neededwhitt682 Oct 5, 2012 4:47 PM
Hello All,
This is my first post here at Adobe.com and I hope that I am posting in the right place.
I have been attending college and have been working out of a book "Adobe Flash CS5: The Professional Portfolio" by Against the Clock amd I seem to be running into trouble. Im on Project 8, which has a main "website" page and a "UILoader" that calls swf files from a directory within the project.
ALSO I am using CS5.5 Web Premium Programs.
Well this is the error:
Error: Error #2099: The loading object is not sufficiently loaded to provide this information.
at flash.display::LoaderInfo/get loader()
at fl.display::ProLoader/get realLoader()
at fl.display::ProLoaderInfo()
at fl.display::ProLoader()
at fl.containers::UILoader/initLoader()
at fl.containers::UILoader/load()
at fl.containers::UILoader/set source()
at seabreeze_fla::MainTimeline/__setProp_homeContent_Scene1_PageContents_0()
at seabreeze_fla::MainTimeline/frame1()
and this is the main page UILoader Pointing to the Directory
import flash.events.MouseEvent;
stop();
home_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
passes_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
plan_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
attractions_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
group_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
about_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
guests_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
join_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
contact_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
new_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
specials_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
calendar_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
group2_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
passes2_btn.addEventListener(MouseEvent.MOUSE_UP, browse);
function browse(event:MouseEvent):void {
switch (event.target.name) {
case "home_btn" : gotoAndStop("home");
break;
case "passes_btn" : gotoAndStop("passes");
PageHead.text = "Buy Passes Online";
break;
case "plan_btn" : gotoAndStop("plan");
PageHead.text = "Plan Your Visit";
break;
case "attractions_btn" : gotoAndStop("attractions");
PageHead.text = "Attractions";
break;
case "group_btn" : gotoAndStop("group");
PageHead.text = "Group Sales";
break;
case "about_btn" : gotoAndStop("about");
PageHead.text = "About the Park";
break;
case "guests_btn" : gotoAndStop("guests");
PageHead.text = "Our Guests Say...";
break;
case "join_btn" : gotoAndStop("join");
PageHead.text = "Join Club Seabreeze";
break;
case "contact_btn" : gotoAndStop("contact");
PageHead.text = "Contact Us";
break;
case "new_btn" : gotoAndStop("new");
PageHead.text = "What's New";
break;
case "specials_btn" : gotoAndStop("specials");
PageHead.text = "Special Offers";
break;
case "calendar_btn" : gotoAndStop("calendar");
PageHead.text = "Park Calendar";
break;
case "group2_btn" : gotoAndStop("group");
PageHead.text = "Group Sales";
break;
case "passes2_btn" : gotoAndStop("passes");
PageHead.text = "Buy Passes Online";
break;
}
}
The Main Code^ was working and still is, but the UILoader seems to not be gathering the nessesary files.
This is where the main "seabreeze" file is located and the Children Folder that I am Calling files from.
I hope that I am not going over board with this, and I have recieved a good grade for this even with this error, but I would like to actully "understand" what is going on here.
Thanks in advance,
Aaron W.
1. Re: Error #2099 - Help NeededNed Murphy Oct 5, 2012 5:34 PM (in response to whitt682)
What code are you using to load content into the UILoader?
2. Re: Error #2099 - Help Neededwhitt682 Oct 5, 2012 8:14 PM (in response to Ned Murphy)
Ned Murphy wrote:
What code are you using to load content into the UILoader?
Thanks for the quick reply.
From what I am understanding from this book I am working out of a UILoader componet is soposed to be a "quick route" for calling files to the Loader Field just by changing the Properties-->Componet Paramaters-->Source and that the Instance Name really does not matter in this setup. They describe the UILoader as a "shortcut", therefore I have no script to call this because of this, only the Source Parameter defined for each file I would like to display on my "Page Contents" layer every 5 frames.
So, is the Source Parameter how to call external files with out AS3?
Or is there code that goes with the UILoader that I am missing?
3. Re: Error #2099 - Help NeededNed Murphy Oct 6, 2012 5:05 AM (in response to whitt682)1 person found this helpful
Using the component you can have it load content without code, but if you have a string of that same component working its way down the timeline in that layer you might have a problem getting the ones after the first to load their own content. When you have the same objects in adjacent frames the latter ones will inherit the properties of the former ones, of which the source could well be one. What you should do instead of what you appear to have done is just use one instance of the UILoader in that layer, give it an instance name, and in all the frames where the source changes, use code to assign the source property.
But that is not likely to be related to the problem you have at the moment. It seems as if the component has lost some of its brains, which can happen if you were to go into the library and remove stuff that the component needs to function properly... it is a tempting thing to do when you drag a component in and then see a bunch of extra stuff end up there that you don't think you need.
You could try removing the current component altogether and then add a new one into the library by placing one on the stage again.
4. Re: Error #2099 - Help Neededwhitt682 Oct 6, 2012 6:50 AM (in response to Ned Murphy)
Reply to Ned Murphy:
Good Morning. I want to say agiain how much I do appreciate the help. I can assure you that I have not deleted anything but I will delete the UILoader and try to apply the source settings again to the a new loader.
**Went to Components --> User Interface --> UILoader and placed it on the stage, and still not working.**
Is there a way to declare these "missing" attributes in AS3? I know that this kind of defeats the purpose, but is it possiable?
5. Re: Error #2099 - Help NeededNed Murphy Oct 6, 2012 10:03 AM (in response to whitt682)1 person found this helpful...
6. Re: Error #2099 - Help Neededwhitt682 Oct 6, 2012 11:02 AM (in response to Ned Murphy)
Ned Murphy wrote:...
Wow, what a BIG help Mr. Murphy! I was thrilled to at least see the content on the Home page of the website. The only problem that I am having now is that the new Loader is not defined by a Width or a Height in AS3, so the flash file gathered is not resized down to the content area. I am guessing that I will just need to resize the *.swf files in the Children folder, BUT is there a way to define the .w and .h of the Loader? I have tried ldr.w and ldr.h but was not successful.
Although this is not the solution for the UILoader, the Loader AS3 is a great alternative to display content from a local directory. Thanks for the very Helpful Awnser and the code works like a charm, just need legenth and width parameters.
7. Re: Error #2099 - Help NeededNed Murphy Oct 6, 2012 11:56 AM (in response to whitt682)1 person found this helpful
A loader is a widthless/heightless entity without any content in it. One thing you can do if you know the proportions of the content you will load is to use the scaleX/scaleY properties to scale the loader.
I am saying this with crossed fingers, but I am pretty sure if you scale the loader before it holds anything that scaling will work. So you might want to just try the scaling instead if the content will allow for that approach. You just can't adjust the width and height until the loader actually holds some content.
If you want/need to adjust the width and height instead, then you need to wait until the loader content is loaded. You can assign an Event.COMPLETE event listener to the contentLoaderInfo property of the Loader and use that to call a function that changes the width/height after the loading is complete.
ldr.contentLoaderInfo.addEventListener(Event.COMPLETE, adjustStuff);
function adjustStuff(evt:Event):void {
// file is loaded - make adjustments here
}
8. Re: Error #2099 - Help Neededwhitt682 Oct 8, 2012 4:50 AM (in response to Ned Murphy)
Response to 7. Ned Murphy:
Hello again. I have been very busy for the last day and I hope you do not take me for rude but I just wanted to follow up on this topic. Thanks again for all the help, but I cannot seem to find the properties for the loader anywhere (the properties that effect the above code). This is okay because I do understand that from the X and Y coordinents the Loader will "Fill" the remaining screen with the content, and this works for my project but like I said I do like UNDERSTANDING what I am soposedly learning.
On another note, This has been a very plesent first experiance on adobe.com forums and Mr. Ned Murphy sir, you are a guru. Thanks again, and like I said this helped me and I hope this helps others in the future.
-Aaron W
9. Re: Error #2099 - Help NeededNed Murphy Oct 8, 2012 5:39 AM (in response to whitt682)
You're welcome Aaron. When it comes to learning and solving problems with Flash, the three tools I utilize most are the Flash Help documentation, Google, and trial and error. If you learn how to make the most of the Help documentation you can often solve alot of the "Whys" that you come across. | https://forums.adobe.com/thread/1077563 | CC-MAIN-2018-30 | refinedweb | 1,601 | 65.32 |
01-02-2022 08:11 AM - edited 01-02-2022 08:11 AM
After confirming MPLS L3VPNs were working and doing some internet breakout tasks, I wanted to have a look at leaking routes between VRFs outside of L3VPNs.
I have been able to use an import map within the VRF to import specific routes using a route-map and prefix lists. However, I want to import routes based on their RT value within BGP. Since the RTs are already attached to the routes, this avoids the extra complexity of a second route-map, so I would like to do it that way. It also means less admin overhead, since adding an RT is simpler than updating prefix lists.
I have routes in the IPv4 BGP RIB on a router with ‘RT:1:1’. Having set the ‘route-target import 1:1’ command within the VRF for IPv4 unicast, none of the routes are being imported using this method.
I had a feeling that this is because the routes are in the IPv4 Unicast BGP RIB and not the VPNv4 BGP RIB.
Is that a correct assumption? Or should what I’m doing work, which would mean I’m doing something wrong?
The routes I’m wanting to import into the VRF aren’t coming from within a VRF on the other side. I’m using a route-map to set the extended community RT to 1:1 on export from a BGP neighbour command. Looking at the routes in the BGP IPv4 RIB, they do have the RT set to 1:1.
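Roughly, the setup described above looks like this (addresses, AS numbers, and names here are placeholders, not my real config):

```
! On the advertising router - tag routes with the RT on export
route-map SET-RT permit 10
 set extcommunity rt 1:1 additive
!
router bgp 65000
 address-family ipv4
  neighbor 192.0.2.2 send-community extended
  neighbor 192.0.2.2 route-map SET-RT out
!
! On the receiving router - attempting to import by RT
vrf definition TEST
 rd 1:1
 address-family ipv4
  route-target import 1:1
```

The received routes carry RT:1:1 in the ipv4 unicast table, but the VRF never imports them.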
I’m using standard Cisco IOS.
01-02-2022 08:33 AM
Hi @AvidPontoon1 ,
Happy New Year!
> Is that a correct assumption? Or should what I’m doing work and I’m doing something wrong?
That is correct. A route in address family ipv4 unicast will not be imported into a VRF, even though it has the proper RT set. The route leaking feature is the only way to do this.
Regards,
01-02-2022 08:36 AM
Thanks Harold,
Happy New Year to you too!
How would I go about getting this working then? Do I just have to use an import route-map? Or is it better to get the routes into the VPNv4 table and do it that way using the RTs? Not sure what best practice is here?
01-02-2022 10:33 AM - edited 01-02-2022 10:44 AM
Hi @AvidPontoon1 ,
If the goal is to mutually leak the routes from global to vrf and vice versa, you can do something like this.
vrf definition test
rd 2:1
!
address-family ipv4
import ipv4 unicast map Global2VRF
export ipv4 unicast map VRF2Global
Regards,
01-02-2022 01:19 PM
VRF leaking
leaking VRF and global.
01-02-2022 11:36 PM
Could you share configuration and bgp output here? A few questions are not clear.
Find answers to your questions by entering keywords or phrases in the Search bar above. New here? Use these resources to familiarize yourself with the community: | https://community.cisco.com/t5/routing/clarification-of-vrf-route-leaking/m-p/4525678 | CC-MAIN-2022-40 | refinedweb | 500 | 74.79 |
Summary:
Using Callbacks
Conclusion
Additional Resources.
The Fluent UI is implemented in several applications in the 2007 Microsoft Office release, including Microsoft Office Access 2007, Microsoft Office Excel 2007, Microsoft Office PowerPoint 2007, and Microsoft Office Word 2007. The Ribbon is also available in Microsoft Office Outlook 2007 while you edit an Outlook item. You can customize the Fluent UI through a combination of XML markup and any Microsoft .NET Framework–based language that is supported in Microsoft Visual Studio. You can also customize the Fluent UI by using Microsoft Visual Basic for Applications (VBA), Microsoft Visual C++, and Microsoft Visual Basic 6.0.
Access 2007 and Outlook 2007 implement Ribbon customizations in slightly different ways than the other Office applications do!");
}
}
Depending on how you create your customization, you may need to add a reference to the System.Windows.Forms assembly to call the MessageBox.Show method.
The MyButtonOnAction procedure must be declared as public. The control parameter carries the unique id and tag properties of the control, which enables you to use the same callback procedure for multiple controls.
All attributes in the Ribbon XML customization markup use the camel-casing convention, which capitalizes the first character of each word except the first word—examples include onAction and insertBeforeMso.
Applications that support the Ribbon (except Access 2007, as described later in this article)X markup must include one of the following identifiers.
id
Specifies a unique identifier for the control. Used with custom controls. This identifier is passed as a property on an IRibbonControl to callback functions.
idMso
Specifies the identifier of a built-in control.
idQ
Specifies a qualified identifier, prefixed with a namespace abbreviation, as in the following example.
<customUI xmlns="" xmlns:
<button idQ="x:myButton" … />
The example uses the namespace x so that two different add-ins can add to the same custom group—they just need to refer to that custom group by its qualified name..
Use the id identifier attribute to create a custom item, such as a custom tab. Use the idMso identifier attribute to refer to a built-in item, such as the TabHome tab.
The sample adds a new SampleGroup group to the My Tab tab.
The sample adds a large-sized ToogleButton1 button to the My Group group. The markup specifies onAction and getPressed callbacks.
The sample adds a CheckBox1 check box to the My Group group with a custom screentip. It also specifies an onAction callback.
The sample adds an EditBox1 edit box to the My Group group and specifies an onChange callback.
The sample adds a Combo1 combo box to the My Group group with three items and specifies an onChange callback.
The sample adds a Launcher1 launcher to the My Group group with the onAction callback set.
A launcher normally displays a custom dialog box that offers more options to the user.
The sample adds a new My Group group to the custom tab.
The sample adds a large-sized Button1 button to the My Group group and specifies getText and onAction callbacks.
The sample adds a normal-sized Button2 button to the My Group group and specifies an onAction callback.
The easiest way to create RibbonX.
Although it is useful to know what is going on within the Office Open XML Formats structure, you may be able to bypass these steps. You can take advantage of the Custom Fluent UI Editor, available at OpenXMLDeveloper.org. This tool enables you to open a document, insert custom UI, and then save the document with the RibbonX markup in place. It performs for you the steps listed in this example. It also enables you to add custom icons to the customUI folder, and makes it easy to refer to these icons. You can download the Custom UI Editor from the OpenXMLDeveloper.org Custom UI Editor Tool Web page.X commands and controls.
You must save the document in macro-enabled format if you want to add code that reacts when the user interacts with the Ribbon customization. Documents with this functionality include the .docm, .xlsm, and .pptm formats. For all the examples in this article that include Microsoft Visual Basic for Applications (VBA) code, you must save the host document as one of these formats.).
<Relationship Type="
relationships/ui/extensibility" Target="/customUI/customUI.xml"
Id="customUIRelID" />.
You can follow these same basic steps when creating a macro-enabled Word or PowerPoint document.
Start Excel 2007.
Click the Developer tab, and then click Visual Basic.
If you do not see the Developer tab, you must identify yourself as a developer. To do this in your application, click the Microsoft Office Button, click Application Options, click Popular, and then select Show Developer Tab in the Ribbon. This is a global setting that identifies you as a developer in all Office applications that implement the Fluent UI..
If you save the document as a standard .xlsx document, you will not be able to run the macro code. When you save the document, you must explicitly select the Save As menu option, and then select Excel Macro-Enabled Workbook (*.xlsm).
Exit Excel..
Add the following text between the last <Relationship> element and the </Relationships> element, and then save and close the file.
<Relationship Id="customUIRelID" Type=""
Target="customUI/customUI.xml" />.
Depending on your security settings, you might see a security warning telling you that macros have been disabled. If you do, click the Options button that appears next to the warning, select Enable this content, and then click OK.
Click Large Button. Clicking the button triggers the onAction callback, which calls the macro in the workbook, which displays the "Hello World" message..
Because of the internal architecture of the Ribbon callback mechanism, it is important that you perform no initialization within the GetCustomUI method other than preparing and returning the XML markup for the Ribbon. Specifically, do not display dialog boxes or message windows from within this callback method.X.
Because these walkthroughs involve changes to the database, you might want to perform these steps in a non-production database, perhaps by using a backup copy of a sample database..
Customizations that you load by using the LoadCustomUI method are available only while the database is open. You need to call LoadCustomUI each time you open the database. This technique is useful for applications that need to assign custom UI programmatically later in this section.
The following procedure describes, in a generalized manner, how to add application-level customizations in Access. A later section includes a complete walkthrough..
The tabs displayed in the Fluent UI are additive. That is, unless you specifically hide the tabs or set the Start from Scratch attribute to True, the tabs displayed by a form or report's UI appear in addition to the existing tabs.
To explore this process further, work through the following examples.
The first part of the example sets an option that reports any errors that exist when you load custom UI (although you are performing these steps in Access, you can perform similar steps in other applications)..
ID
AutoNumber
RibbonName
Text
RibbonXml
Memo.
(AutoNumber)
HideData
<customUI xmlns="">
>.
To clean up, repeat the previous few steps to display the Access Options dialog box. Delete the contents of the Ribbon Name option, so that Access displays its default Fluent UI after you close and re-open the database.
You can also use a Ribbon from the USysRibbons table to supply the UI for a specific form or report. To do this, open the form or report in Design or Layout mode, and set the form's RibbonName property to the name of the Ribbon you want to use. You must select the form itself, rather than any control or section on the form, before you can set this property..
Public Sub HandleOnAction(control As IRibbonControl)
' Load the specified form, and set its
' RibbonName property so that it displays
' the custom UI.
DoCmd.OpenForm control.Tag
Forms(control.Tag).RibbonName = "FormNames"
End Sub. Current Database tab.
In the Application Options section, select your startup form in the Display Form list of forms, and then click OK.X functionality as a package without the need to add VBA code to each application. Add-ins are implemented in Access just as they are in other Office applications.
Although you can use the Visual Studio 2005 Shared Add-In template to create a COM add-in for Access, you cannot use Visual Studio 2005 Tools for Office Second Edition to create COM add-ins for Access. Access is not one of the supported applications in Visual Studio 2005 Tools for Office Second Edition.
Consider the following scenarios that illustrate ways to modify the Fluent UI to fit your needs.
To demonstrate the behavior of the Ribbon customizations in the following sections, you can use the techniques discussed earlier in this article, in the section titled Customizing the Fluent UI for Most Office Applications.
You can specify in the XML markup file that you want to hide the controls on the Microsoft Office menu. You must explicitly request these alterations in the XML markup by setting the Visible attribute for the particular control to False. Hiding these commands might put the application into an unrecoverable state that you can clear only by closing the application and uninstalling your solution..>
You can use the following code sample to add custom tabs.
<tab id="CustomTab" label="My Tab" />>
When you define the gallery, all item elements must appear before any button elements..
<dropDown id="HeadingsDropDown"
getItemCount="GetItemCount"
getItemID="GetItemID"
getItemLabel="GetItemLabel"/>.
The following example uses a managed COM add-in to add a custom UI to Word 2007. The add-in creates a custom tab, a group, and a button. When you click the button, the add-in inserts a company name at the location of the cursor. Word = Microsoft.Office.Interop.Word
using Microsoft.Office.Core;
using Word = Microsoft.Office.Interop.Word;.
;
}
GetCustomUI should be implemented to return the XML string for your Ribbon customization, and should not be used for initialization. In particular, you should not attempt to display any dialog boxes or message windows in your GetCustomUI implementation. The more appropriate place to do initialization is in the OnConnection method (for shared add-ins) or in the ThisAddIn_Startup method (for add-ins created by using Visual Studio 2005 Tools for Office Second Edition).
Add the);
}.
This example shows how to create the same add-in as in the previous example, but this time by using Visual Studio Tools 2005 Tools for Office Second Edition.
Start Visual Studio 2005 Tools for Office Second Edition..
On the Project menu, click InsertCompanyAddIn1 Properties..
By default, the RequestService method appears in a comment because your add-in might already include an override for this method (perhaps for the FormRegionStartup interface, or one of the other new extensibility interfaces). If that is the case, you can copy the if block from the commented code into your existing override for this method.
In the Ribbon1 class, modify the GetCustomUI procedure so that it returns the XML from the Ribbon1 resource, rather than calling the add-in's GetResourceText procedure.
Return My.Resources.Ribbon1
return Properties.Resources.Ribbon1;
The add-in template includes, within the Helpers hidden region, a procedure named GetResourceText that retrieves the contents of the XML file for you. Although this procedure does the job you need, it requires you to specify the name of the resource as a string. This technique is somewhat brittle (changes to the resource name would still allow the code to compile, but the add-in would fail at run time), so it is better to add the XML content to the resource file and use the language-specific support for retrieving resources, as shown in these steps. The template includes the GetResourceText procedure so that it can work "out of the box"—when you create and build an add-in, you have a working Ribbon customization without any changes.);
}X:
.
Invalidate()
callback
Causes all of your custom controls to re-initialize.
InvalidateControl(string controlID)
Causes a particular control to re-initialize..
<customUI xmlns=""
loadImage="GetImage">
<!-- Later in the markup -->
<button id="myButton" image="mypic.jpg" />.);
}
Your project must have a reference set to the stdole assembly to run this code. This reference is automatically included in projects created by using Visual Studio 2005 Tools for Office Second Edition. | http://msdn.microsoft.com/en-us/library/aa338202.aspx | crawl-002 | refinedweb | 2,062 | 54.93 |
Auxiliary class aiding in the handling of block structures like in BlockVector or FESystem. More...
#include <deal.II/lac/block_indices.h>
Auxiliary class aiding in the handling of block structures like in BlockVector or FESystem.
The information obtained from this class falls into two groups. First, it is possible to obtain the number of blocks, namely size(), the block_size() for each block and the total_size() of the object described by the block indices, namely the length of the whole index set. These functions do not make any assumption on the ordering of the index set.
If on the other hand the index set is ordered "by blocks", such that each block forms a consecutive set of indices, this class that manages the conversion of global indices into a block vector or matrix to the local indices within this block. This is required, for example, when you address a global element in a block vector and want to know which element within which block this is. It is also useful if a matrix is composed of several blocks, where you have to translate global row and column indices to local ones.
Definition at line 54 of file block_indices.h.
Declare the type for container size.
Definition at line 60 of file block_indices.h.
Default constructor. Initialize for zero blocks.
Definition at line 367 of file block_indices.h.
Constructor. Initialize the number of entries in each block
i as
n[i]. The number of blocks will be the size of the vector
Definition at line 390 of file block_indices.h.
Specialized constructor for a structure with blocks of equal size.
Definition at line 376 of file block_indices.h.
Reinitialize the number of blocks and assign each block the same number of elements.
Definition at line 340 of file block_indices.h.
Reinitialize the number of indices within each block from the given argument. The number of blocks will be adjusted to the size of
n and the size of block
i is set to
n[i].
Definition at line 353 of file block_indices.h.
Add another block of given size to the end of the block structure.
Definition at line 401 of file block_indices.h.
Number of blocks in index field.
Definition at line 440 of file block_indices.h.
Return the total number of indices accumulated over all blocks, that is, the dimension of the vector space of the block vector.
Definition at line 449 of file block_indices.h.
The size of the
ith block.
Definition at line 459 of file block_indices.h.
Return the block and the index within that block for the global index
i. The first element of the pair is the block, the second the index within it.
Definition at line 411 of file block_indices.h.
Return the global index of
index in block
block.
Definition at line 427 of file block_indices.h.
The start index of the ith block.
Definition at line 469 of file block_indices.h.
Copy operator.
Definition at line 479 of file block_indices.h.
Compare whether two objects are the same, i.e. whether the number of blocks and the sizes of all blocks are equal.
Definition at line 490 of file block_indices.h.
Swap the contents of these two objects.
Definition at line 506 of file block_indices.h.
Determine an estimate for the memory consumption (in bytes) of this object.
Definition at line 519 of file block_indices.h.
Global function
swap which overloads the default implementation of the C++ standard library which uses a temporary object. The function simply exchanges the data of the two objects.
Definition at line 539 of file block_indices.h.
Number of blocks. While this value could be obtained through
start_indices.size()-1, we cache this value for faster access.
Definition at line 210 of file block_indices.h.
Global starting index of each vector. The last and redundant value is the total number of entries.
Definition at line 218 of file block_indices.h. | http://www.dealii.org/developer/doxygen/deal.II/classBlockIndices.html | CC-MAIN-2014-42 | refinedweb | 653 | 68.47 |
var options = {
uri: '',
auth: {
user: 'test',
pass: 'test',
sendImmediately: false
}
};
request(options, function(error, response, body){
})
craigsheppard wrote:
I'm hoping to interact with Indigo via a node.js app
samb wrote:howartp wrote:Then set the client action to "Go to external URL" and set the url to
- Code:
indigo://controlpage/PageName?hideNavBar=1&vieUntilQuit=1
Am I missing something here?
howartp wrote:
hideNavBar and viewUntilQuit are self explanatory as to their purpose.
I add a "static image/caption" to the control panel and set the caption to " " (ie loads of spaces).
Then set the client action to "Go to external URL" and set the url to
- Code:
indigo://controlpage/PageName?hideNavBar=1&vieUntilQuit=1
Peter
indigo://controlpage/PageName?hideNavBar=1&viewUntilQuit=1
TheJensen wrote:
Compose most of my control panels in the Indigo interface then export/render them to HTML and then tear them apart and hack the rendered HTML back together again
**Disclaimer: this is scientifically proven and anyone trying to disprove this just further cements it all as fact.
i might need to look at the Domopad idea.. of course that will make me feel dirty.. tinkering with Android related things
Domopad runs great under arc welder!!
And no monitoring going on but they are used for different tasks, I prefer to separate work and home as best as possible (although long term goal is to have scripts on the machine to activate when I log out to go home that it'll preheat/cool my car and get things ready at home for me.. (Like boil the kettle
It can't be Firefox, MSIE or Chrome
## 2014/11/24
## Karl Wachs
## this script will rotate files copied to a destiantion file each time it is executed
## it uses a variable "fileCopySequence" to store the sequence number
## it is usefull if you like to eg roatate a png file on your itouch page.
## copy this into a scheduled action
## and replace the source and destination directories and files to be rotated with your settings
## you should set the repeat time in the schedule and also conditions to enable / disbale in the schedule setup
##
import shutil
sourceDir="/Users/karlwachs/Documents/INDIGOplotD/" # this is the source directory f the files to like to rotate
source=[
"new device-minute-S2.png" ## these are the files that should be rotated
,"new device-minute-S1.png"
,"new device-day-S2.png"
,"new device-day-S1.png"
]
destination= "/Users/karlwachs/Documents/INDIGOplotD/FileToShow.png" ## this is the destination filename use this in your itouch page
try:
seqNumber=0
indigo.variable.create("fileCopySequence")
except:
try:
seqNumber = int(indigo.variables["fileCopySequence"].value)
except:
seqNumber=0
if seqNumber <len(source)-1: seqNumber+=1
else: seqNumber=0
indigo.variable.updateValue("fileCopySequence",str(seqNumber))
shutil.copyfile(sourceDir+source[seqNumber], destination)
-Have a whitelist of IP address [ranges] that do not require authentication at all (ie on the trusted / non-guest zone of LAN).
if value of variable "Shades_CurrentState" is "down" then
execute group "Shades UP"
else if value of variable "Shades_CurrentState" is "up" then
execute group "Shades DOWN"
end if
jay (support) wrote:
LOL -thanks for making my morning... | https://forums.indigodomo.com/feed.php?f=105 | CC-MAIN-2018-22 | refinedweb | 522 | 51.78 |
The Java EE 7 Tutorial
31.4 Integrating JAX-RS with EJB Technology and CDI
JAX-RS works with Enterprise JavaBeans technology (enterprise beans) and Contexts and Dependency Injection for Java EE (CDI).
In general, for JAX-RS to work with enterprise beans, you need to annotate the class of a bean with
@Path to convert it to a root resource class. You can use the
@Path annotation with stateless session beans and singleton POJO beans.
The following code snippet shows a stateless session bean and a singleton bean that have been converted to JAX-RS root resource classes.
@Stateless @Path("stateless-bean") public class StatelessResource {...} @Singleton @Path("singleton-bean") public class SingletonResource {...}
Session beans can also be used for subresources.
JAX-RS and CDI have slightly different component models. By default, JAX-RS root resource classes are managed in the request scope, and no annotations are required for specifying the scope. CDI managed beans annotated with
@RequestScoped or
@ApplicationScoped can be converted to JAX-RS resource classes.
The following code snippet shows a JAX-RS resource class.
@Path("/employee/{id}") public class Employee { public Employee(@PathParam("id") String id) {...} } @Path("{lastname}") public final class EmpDetails {...}
The following code snippet shows this JAX-RS resource class converted to a CDI bean. The beans must be proxyable, so the
Employee class requires a nonprivate constructor with no parameters, and the
EmpDetails class must not be
final.
@Path("/employee/{id}") @RequestScoped public class Employee { public Employee() {...} @Inject public Employee(@PathParam("id") String id) {...} } @Path("{lastname}") @RequestScoped public class EmpDetails {...} | http://docs.oracle.com/javaee/7/tutorial/doc/jaxrs-advanced004.htm | CC-MAIN-2014-41 | refinedweb | 257 | 50.53 |
By Subalaxmi Venkataraman on Dec 10, 2018 6:04:31 AM
Windows Presentation Foundation offers various controls and one of the basic control is the combo box.
The combo box has various events such as DropDownOpened, DropDownClosed, SelectionChanged, GotFocus, etc.,.
In this blog, we will see how to handle the selectionchanged event of combo box which is inside grid using Model–View–Viewmodel (MVVM) pattern.
Create Model (Person):
Define a class Person as show below.
Create View Model:
Create a view model named MainWindowViewModel.cs
Create View:
Create a view named MainWindow.xaml.
In the above code, `Cities' are defined in the view model and are not part of Persons class, the ItemsSource is defined as ItemsSource="{Binding Path=DataContext.Cities,RelativeSource={RelativeSource FindAncestor, AncestorType = UserControl}}" which will take the data from viewmodel Cities property.
We need to import the namespace xmlns:i=".
View Codebehind:
Create view model field and instantiate the view model in the constructor of View Code behind file.
MainWindow.xaml.cs
When we run the application, the grid will bind with the person details and city combo box will be bound with the cities list. Now we can change the city for the respective person and it will be handled by the CityChangeCommand in the view model class.
In this manner we can handle the combo box selection change event using MVVM pattern in WPF. | https://blog.trigent.com/handling-combo-box-selection-change-in-viewmodel-wpf-mvvm-2 | CC-MAIN-2019-04 | refinedweb | 229 | 54.73 |
The Scratch GDB environment setting is the location of a file geodatabase you can use to write temporary data.
Writing output to the scratch geodatabase will make your code portable, as this geodatabase will always be available or created at execution time.
Usage notes
- The scratch geodatabase is guaranteed to exist when your script or model runs, and you will have write access to this geodatabase.
- The Scratch GDB environment is read-only; you cannot set the location directly.
- If the Scratch Workspace environment has been set, the Scratch GDB environment will reflect this value first.
- If your Scratch Workspace environment references a geodatabase, the Scratch GDB and Scratch Workspace environments will point to the same paths.
- If your Scratch Workspace environment points to a folder, the Scratch GDB environment will look for a geodatabase in the folder named scratch.
- If the Scratch Workspace environment has not been set, the Scratch GDB environment defaults to a geodatabase within the AppData section of the User Profile, typically C:\Users\<user_name>\AppData\Local\Temp\1\scratch.gdb.
- Data written to the Scratch GDB environment is not automatically deleted. You must do your own cleanup.
Dialog syntax
Note:
The Scratch GDB environment is only available in Python and models.
Scripting syntax
arcpy.env.scratchGDB
Script example
import arcpy print(arcpy.env.scratchGDB) | https://pro.arcgis.com/en/pro-app/latest/tool-reference/environment-settings/scratch-gdb.htm | CC-MAIN-2022-33 | refinedweb | 219 | 55.44 |
BMP085 Pressure Sensor
Introduction
The BMP085 from Bosch Sensortec is an excellent high-resolution sensor for measuring absolute atmospheric pressure. You can use it for measuring barometric pressure as part of a weather station, or as an altimeter. It's fast enough to handle rocketry in the lower resolution modes, but tops out at 300 mb, which is roughly 30k ft - if you want to go higher, use a different sensor (I've heard that it still outputs data beyond 300 mb, outside the factory calibration range, so it could be user-calibrated).
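As a sanity check on that 300 mb ≈ 30k ft figure: the sensor itself only reports pressure, and converting that to altitude is a post-processing step using the international barometric formula. A minimal sketch (the constants assume the standard atmosphere; adjust p0 for the local sea-level pressure):

```cpp
#include <cmath>

// Convert absolute pressure (Pa) to altitude (m) using the international
// barometric formula. p0 is the sea-level reference pressure; 101325 Pa
// is the standard-atmosphere value.
double pressureToAltitude(double p_pa, double p0_pa = 101325.0) {
    return 44330.0 * (1.0 - std::pow(p_pa / p0_pa, 1.0 / 5.255));
}
```

At 30000 Pa (300 mb) this returns roughly 9165 m, or about 30,000 ft, which is where the factory calibration range ends.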
There is a lot of noise in the atmosphere, so don't expect a smooth output from any pressure sensor unless it's in a sealed, insulated chamber. For the same reason, discussions about sensors measuring a few centimeters of altitude change are meaningless. If you're comparing the output to a local weather station, pressure changes won't quite match because of local effects. Also there will be an offset, because professional weather stations are calibrated for sea level no matter where they are located.
Most pressure sensors are g-sensitive. If you are using one in a high-g environment, such as model rocketry, a fast model airplane, or a UAV, the sensor element should be mounted 90 deg to the g-axis, or use two in opposing orientations and average them. I've been meaning to try calibrating out the g-effect using an accelerometer.
What's Good
- Moderately high resolution in "ultra high" mode, perfectly adequate for most applications (but not quite up to Bosch's marketing)
- Fast enough for most applications (up to 130 or 222 Hz in two lower resolution modes)
- Low cost, compared to precision barometric pressure sensors
- Flexible, has several modes of resolution and update rates
- Fast response, no hysteresis (which I've seen on gel stabilized pressure sensors, probably used to reduce vibration effects)
- All digital, uses I2C interface
- Also provides temperature
- Easy to wire up
- Very small size, low power
- The manufacturer provides working code for the calibration routines, although it's a bit ugly
- okini3939 has published a library - I haven't tested it, and used my own code here. Note that my post is a tutorial and product review, and I don't intend to create my own library
Not so Good
- A complex calibration routine must be run for each sample (however it uses integer math, and the < 1 µs execution time is trivial on mbed)
- Doesn't have a continuous mode of operation, so you need to run a ticker on the mbed to get samples at fixed rates
- Doesn't have a hose port, so if you need one you'll have to make an adapter
- No option for changing I2C bus address, however the data sheet has a trick for using two on the same I2C bus
- Should have options for even more oversampling than given by the "ultra high" resolution mode
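For reference, the per-sample calibration routine mentioned above looks roughly like this. It's a sketch of the integer algorithm from the Bosch datasheet; the calibration coefficients are normally read once from the sensor's EEPROM (registers 0xAA..0xBF), but here they are hard-coded to the datasheet's worked-example values rather than taken from a real device (the datasheet's MB coefficient is read but never used in the formulas, so it's omitted):

```cpp
#include <cstdint>

// Calibration coefficients - normally read once from the BMP085's EEPROM
// (registers 0xAA..0xBF). These are the Bosch datasheet's worked-example
// values, not values from a real part.
const int16_t  AC1 = 408, AC2 = -72, AC3 = -14383;
const uint16_t AC4 = 32741, AC5 = 32757, AC6 = 23153;
const int16_t  B1 = 6190, B2 = 4, MC = -8711, MD = 2868;

// Turn the raw readings UT (uncompensated temperature) and UP
// (uncompensated pressure), taken at oversampling setting oss (0..3),
// into temperature (0.1 degC units) and pressure (Pa). Integer math only;
// right shifts on signed intermediates follow the datasheet code and
// assume arithmetic shifts, as on the mbed toolchain.
void bmp085Compensate(int32_t ut, int32_t up, int oss,
                      int32_t &t_01degC, int32_t &p_pa) {
    // Temperature compensation
    int32_t x1 = ((ut - AC6) * AC5) >> 15;
    int32_t x2 = ((int32_t)MC * 2048) / (x1 + MD);
    int32_t b5 = x1 + x2;
    t_01degC = (b5 + 8) >> 4;

    // Pressure compensation
    int32_t b6 = b5 - 4000;
    x1 = (B2 * ((b6 * b6) >> 12)) >> 11;
    x2 = (AC2 * b6) >> 11;
    int32_t x3 = x1 + x2;
    int32_t b3 = ((((int32_t)AC1 * 4 + x3) << oss) + 2) >> 2;
    x1 = (AC3 * b6) >> 13;
    x2 = (B1 * ((b6 * b6) >> 12)) >> 16;
    x3 = ((x1 + x2) + 2) >> 2;
    uint32_t b4 = (AC4 * (uint32_t)(x3 + 32768)) >> 15;
    uint32_t b7 = ((uint32_t)up - b3) * (uint32_t)(50000 >> oss);
    int32_t p = (b7 < 0x80000000u) ? (b7 * 2) / b4 : (b7 / b4) * 2;
    x1 = (p >> 8) * (p >> 8);
    x1 = (x1 * 3038) >> 16;
    x2 = (-7357 * p) >> 16;
    p_pa = p + ((x1 + x2 + 3791) >> 4);
}
```

With the datasheet's example inputs (UT = 27898, UP = 23843, oss = 0) this yields 150 (15.0 °C) and 69964 Pa, matching the datasheet's worked result - a handy self-test when porting the routine.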
Hardware
Breakout boards are available from SparkFun (which provided the photo below), DIY Drones, and other distributors. Bosch has its own breakout board (available from Future Electronics), but it's twice as big and expensive as the others. The sensor uses a very small surface mount package, so you will need some experience to mount it on your own board.
Wiring
See the BMP085 data sheet for pinouts - the connections are pretty simple.
- Power - wire up 3.3v and ground to the BMP085 from the mbed. The sensor has separate analog and digital 3.3v pins but you can tie them together.
- Ideally, the two power pins should be powered and filtered separately to prevent digital noise from feeding into the analog circuitry.
- I2C - wire up SCL and SDA, and make sure to use pull-up resistors if the breakout board you are using doesn't have them.
- EOC - add a wire for end of conversion notification, from the BMP085 to an InterruptIn pin on the mbed. You could also poll EOC, or skip this step entirely and use a time delay, but both methods are inefficient.
- You can ignore the master clear input and leave it floating; the device resets on power up. I left it unconnected in numerous tests and didn't notice any issues. However, you may want to drive it with a DigitalOut pin and force a reset on occasion.
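The bus protocol behind those wires is simple (register addresses and commands below are from the BMP085 datasheet; the device answers at I2C address 0xEE write / 0xEF read): write a command byte to control register 0xF4 - 0x2E for a temperature conversion, or 0x34 plus the oversampling setting shifted into the top bits for a pressure conversion - wait for EOC, then read the result starting at register 0xF6. The actual transfers would go through mbed's I2C class and are omitted here; this sketch only shows the command and byte-assembly logic:

```cpp
#include <cstdint>

// BMP085 register map and commands (from the datasheet).
const uint8_t REG_CTRL     = 0xF4;  // control register
const uint8_t REG_DATA     = 0xF6;  // conversion result (MSB first)
const uint8_t CMD_TEMP     = 0x2E;  // start temperature conversion
const uint8_t CMD_PRESSURE = 0x34;  // start pressure conversion (base)

// Command byte for a pressure conversion at oversampling setting oss (0..3).
uint8_t pressureCommand(int oss) {
    return CMD_PRESSURE + (oss << 6);
}

// 16-bit uncompensated temperature from the two bytes read at 0xF6.
int32_t assembleUT(uint8_t msb, uint8_t lsb) {
    return ((int32_t)msb << 8) | lsb;
}

// Up-to-19-bit uncompensated pressure from the three bytes read at 0xF6,
// right-aligned according to the oversampling setting.
int32_t assembleUP(uint8_t msb, uint8_t lsb, uint8_t xlsb, int oss) {
    return (((int32_t)msb << 16) | ((int32_t)lsb << 8) | xlsb) >> (8 - oss);
}
```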
Test Setup
All tests below are run in "ultra high" resolution mode to get the best oversampling in hardware, since I'm primarily interested in high-res applications. Also I don't have a test setup for a fast change in altitude, to see how well that works. I might add a "standard mode" resolution noise test later. I didn't test for temperature stability, although that can be done by hitting the sensor with radiant heat and seeing how much drift is induced.
Comparison with Professional Sensor
This chart compares results from the BMP085 to a professional barometric sensor at the University of Washington, Dep't of Atmospheric Sciences, about a mile from my house. The results compare extremely well - the BMP085 is great for this application! The results won't match perfectly because of the distance between the locations, how the sensors are housed, etc. The chart shows passage of a deep low (Point A), followed by a rapid rise in pressure to a weak high (Point B) and then another weaker trough (Point C). The traces are offset slightly to improve readability. The samples from both sources are averaged and updated once per minute.
Noise Comparison
The chart below left shows a noise level comparison between the BMP085 and a Freescale analog sensor (MPXAZ6115A) hooked to an ultra-high resolution oversampling ADC (AD7190) displaying 17 bits of data. The sensors are just sitting on my bench, not in a chamber, and running simultaneously. The external ADC, which is a much higher tech solution, obviously gives somewhat better results, but the BMP085 holds up remarkably well.
The x-axis shows the number of samples, with the sensors running at 10 SPS. The y-axis is in millibars * 10. The atmosphere at sea level will typically be in the 1005 to 1015 mb range - this trace was taken during a low pressure passage. The two traces are offset to make the chart more readable. The second chart, below right, is a standard deviation taken every 5 samples from the same data as the first chart. It shows the difference more dramatically, with the BMP085 having much greater deviation.
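For anyone reproducing the deviation chart: it's the standard deviation computed over each successive group of five logged readings, along these lines (this is post-processing on the host, not something running on the mbed; whether the original chart used the population or sample form isn't recorded, so the population form here is a guess):

```cpp
#include <cmath>
#include <vector>

// Standard deviation of each successive non-overlapping group of n
// samples. Uses the population form (divide by n).
std::vector<double> groupedStdDev(const std::vector<double> &data, size_t n) {
    std::vector<double> out;
    for (size_t i = 0; i + n <= data.size(); i += n) {
        double mean = 0.0;
        for (size_t j = 0; j < n; j++) mean += data[i + j];
        mean /= n;
        double var = 0.0;
        for (size_t j = 0; j < n; j++) {
            double d = data[i + j] - mean;
            var += d * d;
        }
        out.push_back(std::sqrt(var / n));
    }
    return out;
}
```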
Resolution Comparison
The 17-bit AD7190 solution provides far better resolution than the BMP085, as it should, given the much better hardware. In this test, I walked up and down two short flights of stairs (twice for each sensor) - the total height of the stairs is about 10 feet. The BMP085 is running in its best mode (ultra high resolution) at its fastest speed (40 Hz), with a 41-tap moving average filter. The AD7190 is running at 20 Hz with a 21-tap filter, so the response time is roughly similar. The AD7190 is doing more oversampling at 20 Hz, which is a big advantage since it further reduces noise. And it will oversample even further down to 1.5 SPS, or you can run it up to 4800 SPS. The x-axis shows number of samples, and with sensors running at about 10 SPS.
You will not get clean 17-bits of resolution out of the BMP085 unless you are averaging a very large number of samples, and this will slow down response time considerably. Many applications don't need the both the extra resolution and response time anyway - one or the other will do. If you're running a weather station you don't need sub-second response time, so you can get very good resolution by running the sensor at 40 Hz and averaging all the samples over a period of a minute or two - this is the setup above, against the professional sensor.
However the test does show that even a very good all-digital sensor is not quite competitive with a solution made of top grade discrete components. No doubt digital sensors will continue to improve - I look forward to the next generation BMP085, especially since this one's been available for a couple years.
AD7190 with Freescale analog sensor - followed by BMP085
Code Explanation
The code example has a ticker that triggers a pressure conversion every 50 ms (20 Hz) on the BMP085. You can slow this down or can speed it up, but note that the BMP085 is running in the ultra high resolution (highest) oversampling mode to get the best resolution. It can't sample any faster than about 40 Hz unless you lower the oversampling settng.
After the conversion is complete, the BMP085 signals end of conversion which is serviced by an interrupt. This is much more efficient that polling or using a timer, but those solutions may be ok for some apps.
The BMP085 requires a temperature reading for proper pressure compensation. The code below takes a temperature sample at the same rate as the pressure, but this may be unnecessary. For example, in a weather station the temperature won't fluctuate as fast the pressure. I commented out a few lines of code that reduce the interval for temperature sampling, you can uncomment them for better efficiency and change the interval using the tCount variable. The temperature routine also has a fixed wait of 4 ms for the sampling to complete; this could improved - wasting 4 ms isn't great, but it's not a huge penalty.
The pressure data is smoothed slightly with a moving average filter.
I borrowed code posted on the manufacturer's web site for the calibration routines. This is a basic version and doesn't handle all the operating modes.
#include "mbed.h" #define EE 22 #define BMP085ADDR 0xEF long b5; int calcPress(int upp); int calcTemp(int ut); short ac1; short ac2; short ac3; unsigned short ac4; unsigned short ac5; unsigned short ac6; short b1; short b2; short mb; short mc; short md; #define COEFZ 21 static int k[COEFZ]; static int movAvgIntZ (int input); I2C i2c(p28, p27); // sda, scl InterruptIn dr(p26); uint32_t drFlag; void cvt(); void drSub(); Serial pc(USBTX, USBRX); // tx, rx int main() { // let hardware settle wait(1); ///////////////////////////////////////////////// // set up timer to trigger pressure conversion ///////////////////////////////////////////////// Ticker convert; convert.attach_us(&cvt, 50000); // 50 ms, 20 Hz ///////////////////////////////////////////////// // set up data ready interrupts ///////////////////////////////////////////////// // set up interrupts __disable_irq(); // ADC data ready drFlag = 0; dr.mode(PullDown); dr.rise(&drSub); ///////////////////////////////////////////////// // set up i2c ///////////////////////////////////////////////// char addr = BMP085ADDR; // define the I2C Address char rReg[3] = {0,0,0}; char wReg[2] = {0,0}; char cmd = 0x00; ///////////////////////////////////////////////// // get EEPROM calibration parameters ///////////////////////////////////////////////// char data[EE]; cmd = 0xAA; for (int i = 0; i < EE; i++) { i2c.write(addr, &cmd, 1); i2c.read(addr,rReg,1); data[i] = rReg[0]; cmd += 1; wait_ms(10); } // parameters AC1-AC6 ac1 = (data[0] <<8) | data[1]; ac2 = (data[2] <<8) | data[3]; ac3 = (data[4] <<8) | data[5]; ac4 = (data[6] <<8) | data[7]; ac5 = (data[8] <<8) | data[9]; ac6 = (data[10] <<8) | data[11]; // parameters B1,B2 b1 = (data[12] <<8) | data[13]; b2 = (data[14] <<8) | data[15]; // parameters MB,MC,MD mb = (data[16] <<8) | data[17]; mc = (data[18] <<8) | data[19]; md = (data[20] <<8) | data[21]; //int tCounter = 200; //int pCounter = 0; int temp = 0; // ready to start sampling loop __enable_irq(); 
///////////////////////////////////////////////// // main ///////////////////////////////////////////////// while (1) { if (drFlag == 1) { ///////////////////////////////////////////////// // uncompensated pressure ///////////////////////////////////////////////// cmd = 0xF6; i2c.write(addr, &cmd, 1); i2c.read(addr,rReg,3); int up = ((rReg[0] << 16) | (rReg[1] << 8) | rReg[2]) >> 5; ///////////////////////////////////////////////// // temperature // only do this every 10 sec or so ///////////////////////////////////////////////// //if (tCounter == 200) { wReg[0] = 0xF4; wReg[1] = 0x2E; i2c.write(addr, wReg, 2); wait_ms(4); // uncompensated temperature cmd = 0xF6; i2c.write(addr, &cmd, 1); i2c.read(addr,rReg,2); int ut = (rReg[0] << 8) | rReg[1]; // compensated temperature temp = calcTemp(ut); //tCounter = 0; //} ///////////////////////////////////////////////// // compensated pressure ///////////////////////////////////////////////// int press = calcPress(up); int pressZ = movAvgIntZ(press); //if (pCounter == 20) { pc.printf("%d\t%d\n",pressZ,temp); //pCounter = 0; // } ///////////////////////////////////////////////// // data ready cleanup tasks ///////////////////////////////////////////////// // reset data ready flag drFlag = 0; //tCounter++; //pCounter++; } } } //////////////////////////////////////////////////////////////////////////////////// // start pressure conversion //////////////////////////////////////////////////////////////////////////////////// void cvt() { char w[2] = {0xF4, 0xF4}; i2c.write(BMP085ADDR, w, 2); } //////////////////////////////////////////////////////////////////////////////////// // Handle data ready interrupt, just sets data ready flag //////////////////////////////////////////////////////////////////////////////////// void drSub() { drFlag = 1; } ///////////////////////////////////////////////// // calculate compensated pressure ///////////////////////////////////////////////// int calcPress(int upp) { long 
pressure,x1,x2,x3,b3,b6; unsigned long b4, b7; int oversampling_setting = 3; unsigned long up = (unsigned long)upp; b6 = b5 - 4000; // calculate B3 x1 = (b6*b6) >> 12; x1 *= b2; x1 >>=11; x2 = (ac2*b6); x2 >>=11; x3 = x1 +x2; b3 = (((((long)ac1 )*4 + x3) <<oversampling_setting) +>>oversampling_setting)); if (b7 < 0x80000000) { pressure = (b7 << 1) / b4; } else { pressure = (b7 / b4) << 1; } x1 = pressure >> 8; x1 *= x1; x1 = (x1 * 3038) >> 16; x2 = (pressure * -7357) >> 16; pressure += (x1 + x2 + 3791) >> 4; // pressure in Pa return (pressure); } ///////////////////////////////////////////////// // calculate compensated temp from uncompensated ///////////////////////////////////////////////// int calcTemp(int ut) { int temp; long x1,x2; x1 = (((long) ut - (long) ac6) * (long) ac5) >> 15; x2 = ((long) mc << 11) / (x1 + md); b5 = x1 + x2; temp = ((b5 + 8) >> 4); // temperature in 0.1°C return (temp); } //////////////////////////////////////////////////////////////////////////////////// // int version of moving average filter //////////////////////////////////////////////////////////////////////////////////// static int movAvgIntZ (int input) { int cum = 0; for (int i = 0; i < COEFZ; i++) { k[i] = k[i+1]; } k[COEFZ - 1] = input; for (int i = 0; i < COEFZ; i++) { cum += k[i]; } return ( cum / COEFZ ) ; }
5 comments on BMP085 Pressure Sensor:
Please log in to post comments.
Do your firmware have bug? I load your ex to my complier and edit only printf like this
but the result are
no temperture data and pressure wrong!
I connect SDA to mbed pin 28 and SCL to pin 27 EOC to pin 26 respectively
I use hyperterminal to monitor the data from sensor using baudrate 9600 bps
but I try okini's Lib OK! show 1008.88hPa and 30.60C
thanks | https://os.mbed.com/users/tkreyche/notebook/bmp085-pressure-sensor/?compage=1 | CC-MAIN-2021-31 | refinedweb | 2,313 | 55.17 |
For code/output blocks: Use ``` (aka backtick or grave accent) in a single line before and after the block. See:
It seems a bug("RateOfChange100") in momentum.py, which doesn't pass period value to ROC
I reviewed the code("RateOfChange100") and found it doesn't pass period to ROC.
def __init__(self): self.l.roc100 = 100.0 * ROC(self.data) # not pass period to ROC super(RateOfChange100, self).__init__()
Correct it:
def __init__(self): self.l.roc100 = 100.0 * ROC(self.data, period=self.p.period) super(RateOfChange100, self).__init__()
If there is anything I misunderstand, pls feel free to let me know.
-Le
- backtrader administrators last edited by
Release 1.9.63.122 | https://community.backtrader.com/topic/954/it-seems-a-bug-rateofchange100-in-momentum-py-which-doesn-t-pass-period-value-to-roc | CC-MAIN-2020-40 | refinedweb | 115 | 53.17 |
There are different ways to use Python..
my_program.pyPython file (e.g. using a text editor of choice, or an IDE), run the file with
$ python my_program.py
The IPython/Jupyter notebook is a good way to combine text/formulas/images/diagrams with executable Python code cells, making it ideal for teaching Python tutorials.
Most Important:
x = "Hello World!"
x.
x.lower?
Shift + Enter: Execute cell and jump to the next cell
Ctrl + Enter: Execute cell and don't jump to the next cell
First things first: you will find these six lines in every notebook. Always execute them! They do three things:
print("Hello")
This essentially makes the notebooks work under Python 2 and Python 3.
%matplotlib inline from __future__ import print_function import numpy as np import matplotlib.pyplot as plt plt.style.use('ggplot') plt.rcParams['figure.figsize'] = 12, 8
Here is collection of resources to introduce you the scientific Python ecosystem. They cover a number of different packages and topics; way more than we will manage today.
If you have any question regarding some specific Python functionality you can consult the official Python documentation.
Furthermore a large number of Python tutorials, introduction, and books are available online. Here are some examples for those interested in learning more.
Some people might be used to Matlab - this helps:
Additionally there is an abundance of resources introducing and teaching parts of the scientific Python ecosystem.
If you need to create a plot in Python/ObsPy, the quickest way to success is to start from some example that is similar to what you want to achieve. These websites are good starting points:
# Three basic types of numbers a = 1 # Integers b = 2.0 # Floating Point Numbers c = 3.0 + 4j # Complex Numbers # Arithmetics work more or less as expected d = a + b # (int + float = float) e = c ** 2 # c to the second power
current_state = True next_state = not current_state print(next_state) print(b > 100)
# You can use single or double quotes to create strings. location = "Bloomington" # Concatenate strings with plus. where_am_i = "I am in " + location # Print things with the print() function. print(location) print(where_am_i) # In Python everything is an object.. # Strings have a lot of attached methods for common manipulations. print(location.upper()) # Access single items with square bracket. Negative indices are from the back. print(location[0], location[-1])
# Lists use square brackets and are simple ordered collections of items (of arbitrary type). everything = [a, c, location, 1, 2, 3, "hello"] print(everything) # Access elements with the same slicing/indexing notation as strings. # Note that Python indices are zero based! print(everything[0]) print(everything[:3]) print(everything[2:-2]) print(everything[-3:]) # Append things with the append method. # (Other helper methods are available for lists as well) everything.append("you") print(everything)
# Dictionaries have named fields and no inherent order. As is # the case with lists, they can contain items of any type. information = { "name": "John", "surname": "Doe", "age": 48, "kids": ["Johnnie", "Janie"] } # Acccess items by using the key in square brackets. print(information["kids"]) # Add new things by just assigning to a key. print(information) information["music"] = "jazz" print(information)
# Functions are defined using the "def" keyword. # The body of the function is the indented block following the # call syntax definition and usually ends with a "return" statement. def do_stuff(a, b): return a * b # Functions calls are denoted by round brackets. print(do_stuff(2, 3))
# Arguments to functions can have a default value.. def traveltime(distance, speed=80.0): return distance / speed # If not specified otherwise, the default value is used.. print(traveltime(1000))
# Import the math module, and use it's contents with the dot accessor. import math a = math.cos(4 * math.pi) # You can also selectively import specific things. from math import pi b = 2.0 * pi # And even rename them if you don't like their name. from math import cos as cosine c = cosine(b) print(c)
temp = ["a", "b", "c"] # The typical Python loop is a for-each loop, e.g. for item in temp: print(item)
# Useful to know is the range() function. for i in range(5): print(i)
# If/else works as expected. age = 77 if age >= 0 and age < 10: print("Younger than ten.") elif age >= 10: print("Older than ten.") else: print("wait.. what??")
# List comprehensions are a nice way to write compact loops. a = list(range(10)) print(a) b = [i ** 2 for i in a] print(b) # Equivalent for-loop to generate b. b = [] for i in a: b.append(i ** 2) print(b)
# while-loops get executed over and over again, # as long as the condition evaluates to "True".. happy = False candies = 0 print(candies) while not happy: candies += 1 if candies >= 100: happy = True print(candies)
for earthquake in my_earthquake_list: try: download_data(event) except: print("Warning: Failed to download data for event:", event) continue
But beware, just catching any type of Exception can mask errors in the code:
for earthquake in my_earthquake_list: try: donwload_daat(event) except: print("Warning: Failed to download data for event:", event) continue
import numpy as np # Create a large array with with 1 million samples, equally spaced from 0 to 100 x = np.linspace(0, 100, 1E6) # Most operations work per-element. y = x ** 2 # Uses C and Fortran under the hood for speed. print(y.sum())
import matplotlib.pyplot as plt x = np.linspace(0, 2 * np.pi, 2000) y = np.sin(x) plt.plot(x, y, color="green", label="sine wave") plt.legend() plt.ylim(-1.1, 1.1) plt.show()
a) Write a function that takes a NumPy array
x and three float values
a,
b, and
c as arguments and returns
$$ f(x) = a x^2 + b x + c $$
b) Use the function and plot its graph using matplotlib, for an arbitrary choice of parameters for x values between -3 and +3.
'} message = "Pnrfne pvcure? V zhpu cersre Pnrfne fnynq!" | https://nbviewer.jupyter.org/github/obspy/docs/blob/master/workshops/2015-08-03_iris/01_Python_Crash_Course.ipynb | CC-MAIN-2018-26 | refinedweb | 986 | 66.44 |
Hi,
It's hard to see why they say that, unless there was some mistake in your return .
From their site:
An addition to tax is imposed for failure to file or failure to pay. An addition to tax is imposed for failure to fileby the due date at the rate of 5 percent per month, not to exceed 25 percent of the unpaid balance.
so that should be ... not to exceed $25 in you case
What may be happening here, is that they are not taking withholding into account at all
Also, just fund this:
Residents who receive a notice of adjustment or notice of deficiency must file their official protest of deficiency disputing their tax liabilities within 60 days.
Taxpayers can file appeals with the Administrative HearingCommission, an independent state government agency, to appeal the Department of Revenue's Final Determination ruling within 30 days of when the Department of Revenue mails its decision.
Here's an excellent overview:
Sorry that was the URL for THIS site ... Here we go:
Did you have taxes withheld from your pay?
If your amount of 100 is even close then there's really something wrong with that $3000 number
I would speak with either a CPA or a tax attorney, but if there were funds withheld, you should be able to file at anytime to show that the amount you owe is not the entire amount... if that is what is making the number spo big
especially if you are within a year or so
Here's an attorney in MO that specializes in tax issues:
My advice wold be to file your return with them as soon as possible ... WHen you don't file they will calculate the number with whatever information they have
By filing your tax return - you are informing the department of revenue about your actual tax liability. They will review your tax return and if accepted,this assessment will be relieved.
Most of thme when they do this, they don't even use all of the deductions and credits that may be coming to you ... it
's essentially a scare tactic
By filing, you take all of that off the table and THEN you may owe a little interest
I still don't see you coming into the chat, so I'll move us to the Q&A mode ... we can still continue a dialogue there, just not in real-time chat as we can here
Let me know if you have further questions, but if you received a letter like this ... ... you need to simply send in your taxes, so that the amount owed can be adjusted.
hope this helps
Lane
Hi Jennifer,
... just checking back in, as I never saw you come into the chat.
Again, they are probably simply calculating the tax without applying any of the deductions, credits or withholding that may apply.
Sending in your return (I would send it by certified mail/return receipt, so you have proof that it was received) is what will give them enough information to adjust the tax owed to right number.
Lane
We mailed in our return when my husband sent the letter. We did have taxes withheld from our income that year. I don't remember a letter like that, but the letter we received today said we had ten days to pay. There is just no way I see that it is right for them to demand $3000+ from me when we only owed a little over $100. We had a refund this year of almost $160 which they took to put towards our last years return. It looks like we will have to contact some expert as we are not going to be "scared" into paying something we do not owe. | http://www.justanswer.com/tax/7uufs-live-state-missouri-file-taxes-every.html | CC-MAIN-2015-48 | refinedweb | 630 | 64.34 |
Paul is a member of the R&D staff at Cap Gemini Telecommunications. He can be reached at ptremble@usa.capgemini.com.
The usefulness of static HTML has run its course and web sites whose sole content is comprised of static HTML pages are now often dismissed as "brochureware." The real world is dynamic and web pages that want to reflect this must be capable of accommodating this dynamism. It is possible, however, to deliver dynamic data content to otherwise static HTML pages by leveraging the power of Java and JavaServer Pages (JSP).
Listing One is a simple JSP. You can see that the text in the unshaded areas is recognizable as HTML. This means not only that getting started with JSP is easy, but also that you do not have to give up your favorite HTML authoring tool. You can also see that the text in the shaded area is as familiar to Java programmers as the text in the unshaded areas is to HTML authors -- it is Java code that creates an instance of GregorianCalendar for the Eastern time zone and invokes the object's getTime() method. Figure 1 shows the output of the .jsp file in Listing One, as displayed in Internet Explorer.
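Outside of a page, the expression in lines 20-23 of Listing One is ordinary Java and can be exercised directly. The sketch below reuses the listing's -5-hour offset and "EDT" zone ID; the class and method names are my own, and the string concatenation stands in for the String coercion the JSP engine performs:

```java
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.SimpleTimeZone;

public class Example1Expression {
    // Mirrors lines 20-23 of Listing One: a calendar for the Eastern
    // time zone whose getTime() result the engine coerces to a String.
    static String render() {
        Date now = new GregorianCalendar(
                new SimpleTimeZone(-5 * 60 * 60 * 1000, "EDT")).getTime();
        return "It is now " + now;  // String coercion, as in the output stream
    }

    public static void main(String[] args) {
        System.out.println(render());
    }
}
```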
Now look at Listing Two, which should look somewhat familiar if you have examined the Snoop servlet that is delivered as an example with many of the leading servlet engines. As with Listing One, you can see that Listing Two contains a combination of HTML and Java code. The Java code invokes a number of methods on an instance of Request resulting in the output in Figure 2. The browser used in this case is Netscape Navigator.
Finally, look at Listing Three, which contains a more complex JSP page representative of what you might typically encounter in the real world. Like the previous two, it consists of HTML and Java code, as well as some directives I'll discuss when showing how this JSP page, in conjunction with an HTML page and two other JSP pages, delivers dynamic data to a browser.
The Definition of a JavaServer Page
A JavaServer Page is a collection of JSP elements and fixed-template data that describes how to process a request to create a response. There are two types of JSP elements -- directives and actions.
Directives provide information to the JSP engine. Lines 12, 13, 14, and 23 in Listing Three are examples of directives. Actions can create objects that can be manipulated by scripting elements that are written using the language specified by the language attribute of the page directive. Although the JSP specification makes provisions for multiple languages, I will discuss only Java. The three scripting elements are:
- Declarations, which have the syntax <%! declaration %>, declare a variable or method that then becomes available to other scripting elements. Line 36 in Listing Three is a declaration. Declarations produce no output to the output stream.
- Scriptlets, which have the syntax <% code fragment %>, contain code that is executed at request-processing time. The code can create new objects, modify existing objects, and perform branching and iterative operations. In short, scriptlets can perform any operation permitted by the language in which it is written. Scriptlets may or may not produce output to the output stream.
- Expressions, which have the syntax <%= expression %>, contain a single expression written in the scripting language. The result of the evaluation of this expression must be able to be coerced into a String that is inserted into the output stream. Lines 20-23 in Listing One, line 23 in Listing Two, and line 32 in Listing Three are all expressions.
All text in the JSP page that does not fall into one of these categories (that is, has no meaning to the JSP engine) is called "fixed-template data" and is emitted unchanged to the output stream in the order in which it appears. All unshaded text in Listings One, Two, and Three is fixed-template data.
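To make the mapping concrete, here is a hand-written sketch (not actual engine output) of where each element type lands in the generated implementation class: a declaration becomes a class member, a scriptlet is copied verbatim into the service method, an expression becomes a print of its String coercion, and fixed-template data becomes a print of the literal text:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class GeneratedPageSketch {
    // <%! int hits = 0; %>  -- a declaration becomes a class member
    int hits = 0;

    void _jspService(PrintWriter out) {
        out.print("<p>");                   // fixed-template data
        hits++;                             // <% hits++; %>  -- scriptlet, copied verbatim
        out.print(String.valueOf(hits));    // <%= hits %>    -- expression, coerced to String
        out.print("</p>");                  // fixed-template data
    }

    public static void main(String[] args) {
        GeneratedPageSketch page = new GeneratedPageSketch();
        StringWriter buf = new StringWriter();
        page._jspService(new PrintWriter(buf, true));
        System.out.println(buf);            // <p>1</p>
    }
}
```

Because the declaration is a class member, its value persists across invocations of _jspService(), while variables declared inside scriptlets are local to a single request.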
JavaServer Pages at Work
The application I present here is a listbox from which you can select a month. After you select the month and click a button, a table containing a list of the articles in the 1999 issue of Dr. Dobb's Journal for the month you selected is displayed. If you fail to select a month, an error screen is displayed. If you select a month for which no data is available, a screen listing the available months is displayed.
Figure 3 shows the application's initial screen. Listing Four (ArticleLister.html) is the HTML that generates this screen. The difference between the form tag <form method="post" action="ArticleLister.jsp"> and other form tags presented up to now is that the target of the action is not a CGI program or a servlet but rather a JavaServer page -- it is the one in Listing Three.
Every JSP page has a corresponding implementation class generated once by the JSP Engine and reused. The Engine starts by creating an empty translation unit consisting of six basic sections -- the implementation class declaration, declaration section, generated method signature, initialization section, main section, and generated method closure. It then adds to the first section the Java source code declaring the implementation class. The name of the class is implementation specific. The Engine next populates the initialization section with code that defines and initializes a number of implicit objects available to the JSP page. You have already seen one such object, the request object, on line 23 in Listing Two. Other implicit objects are response, pageContext, session, application, out, config, and page. The Engine then creates a method signature and method closure for the _jspService() method. This method's signature looks like:
void _jspService(ServletRequestSubtype request,
                 ServletResponseSubtype response)
        throws IOException, ServletException {
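As a rough sketch of how those six sections fit together, the concatenated translation unit has the following shape. The parameter types are deliberately simplified (the real generated class is a servlet subclass, and the implicit objects come from the servlet machinery), so treat this as an illustration of the layout, not engine output:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class TranslationUnitSketch {                    // 1. implementation class declaration

    String info = "DDJ Article Lister V1.0";            // 2. declaration section

    void _jspService(Object request, PrintWriter out) { // 3. generated method signature
        // 4. initialization section: the real engine defines the implicit
        //    objects (request, response, out, session, ...) here.
        // 5. main section: fixed-template data and scriptlet code.
        out.print("<html>");
        out.print(info);
        out.print("</html>");
    }                                                   // 6. generated method closure

    public static void main(String[] args) {
        StringWriter buf = new StringWriter();
        new TranslationUnitSketch()._jspService(null, new PrintWriter(buf, true));
        System.out.println(buf);
    }
}
```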
Next, the Engine processes the page source as follows:
1. The first block of unshaded lines (lines 1-11) contains no syntax that is meaningful to the JSP Engine. This means that these lines are fixed-template data that is inserted unchanged onto the output stream. The JSP Engine inserts into the main section of the translation unit Java code that looks like: out.print(fixed template data);
2. Lines 12-14 all begin with the characters "<%@." This makes these lines directives. The first directive notifies the JSP Engine that the scripting language is Java and that all of the public types in the package "ddj" are accessible from the scripting language by their simple names rather than their fully qualified names. The second directive instructs the JSP Engine to generate code that makes the string "DDJ Article Lister V1.0" available for return by an implementation of the Servlet.getServletInfo() method. The third directive designates the page "errorpage.jsp" as the page that will handle exceptions thrown by the current page. If you examine Listing Five (errorpage.jsp), you will see that line 12 contains the directive <%@ page isErrorPage="true" %>. This directive instructs the JSP Engine to make available to the scripting language of this page an implicit variable exception. This variable contains a reference to the Throwable thrown by the page in error. You can see that line 17 of errorpage.jsp contains an expression that invokes the getMessage() method of the Throwable. The String returned by this method is inserted into the output stream.
3. Line 15 contains a useBean action. The term "useBean" is perhaps somewhat ambiguous because although this action can indeed refer to an instance of a JavaBean as expected by the instantiate() method of the java.beans.Beans class and as named by the beanName attribute, it can also refer to an instance of the class named by the class attribute. The beanName and class attributes of the "useBean" action are mutually exclusive. In both cases, a reference to the instance is assigned to the scripting variable whose name is the value specified by the id attribute. In the present case, a reference to an instance of ddj.Articles is assigned to the scripting variable articles. The code for the ddj.Articles class is in Articles.java (available electronically; see "Resource Center," page 5).
The scope attribute modifies the behavior of the id attribute. It determines the namespace and lifecycle of the object reference associated with the scripting variable named by the id attribute and also determines the API used to access the referenced object; see Table 1.
4. Lines 17-22 contain a scriptlet that is a code fragment. Scriptlets are added unchanged to the main section of the translation unit.
5. Line 23 is a directive that instructs the JSP Engine to process the JSP source in file NoData.jsp (available electronically) in the same way a C-language preprocessor would process a #include statement. By using the include directive you can place common code in a single file rather than having a copy of it in multiple places. It also makes the JSP page more readable.
6. Lines 24-27 comprise a scriptlet containing the remainder of the Java code started in lines 17-22. This code is added to the main section of the translation unit.
7. Lines 28-31 contain more fixed-template data that is processed as in paragraph 1.
8. Line 32 is an expression. The Engine generates Java code that coerces the result of the expression to a String and inserts it into the output stream. This code looks like this: out.print(Stringified expression);.
9. Lines 33-35 contain more fixed-template data that is processed as in paragraph 1.
10. Line 36 is a declaration that makes the array alternatingColors available for subsequent use in line 41. The declaration is added to the declaration section of the translation unit.
11. Lines 37-39 form a scriptlet containing part of the code required to iteratively process articleList. The scriptlet code is added as-is to the main section of the translation unit.
12. Line 40 is fixed-template data that is processed as in paragraph 1.
13. Line 41 is an expression for alternating the colors and is processed as in paragraph 8.
14. Lines 42-44 are fixed-template data that is processed as in paragraph 1.
15. Line 45 is an expression representing the title of the article in element i of the articleList array and is processed as in paragraph 8.
16. Lines 46-49 are fixed-template data that are processed as in paragraph 1.
17. Line 50 is an expression representing the author of the article in element i of the articleList array. It is processed as in paragraph 8.
18. Lines 51-53 are fixed-template data that are processed as in paragraph 1.
19. Lines 54-56 form a scriptlet to end the for loop started in line 38. The scriptlet code is processed as in paragraph 4.
20. Lines 57-61 are fixed-template data that are processed as in paragraph 1.
21. Lines 62-64 are scriptlet code terminating the else statement in line 26. The scriptlet code is processed as in paragraph 4.
22. Lines 65-66 are fixed-template data that are processed as in paragraph 1.
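Stepping back to the scope attribute from step 3: the four scopes in Table 1 amount to four attribute namespaces with different lifetimes. The model below is deliberately simplified (the real API is the JSP PageContext class, and the names here are illustrative), but it shows the essential narrowest-scope-wins lookup behavior:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the four useBean scopes. Only the lookup
// order is modeled; lifecycle management is omitted.
public class ScopeSketch {
    final Map<String, Object> page = new HashMap<>();        // this page, this request
    final Map<String, Object> request = new HashMap<>();     // this request, incl. forwards
    final Map<String, Object> session = new HashMap<>();     // one client's session
    final Map<String, Object> application = new HashMap<>(); // the whole web application

    // findAttribute-style search: the narrowest scope wins.
    Object find(String id) {
        for (Map<String, Object> scope :
                java.util.List.of(page, request, session, application)) {
            Object o = scope.get(id);
            if (o != null) return o;
        }
        return null;
    }

    public static void main(String[] args) {
        ScopeSketch ctx = new ScopeSketch();
        ctx.application.put("articles", "app-wide bean");
        ctx.page.put("articles", "per-page bean");  // id="articles" scope="page"
        System.out.println(ctx.find("articles"));   // per-page bean
    }
}
```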
The Engine now concatenates all six sections to form a complete translation unit, which is passed to the Java compiler. If compilation is successful, a .class file containing the bytecode for the JSP Page implementation class is created. The most important component of this file is the _jspService() method that is invoked at each client request. The arguments to this method are request and response, which are instances of subinterfaces of javax.servlet.ServletRequest and javax.servlet.ServletResponse, respectively.
When you run the Article Lister application, the first time you select a month and click on the GET ARTICLES button you will experience a noticeable delay. This delay is the result of the translation/compilation process. Subsequent requests are faster because they involve only invocation of the _jspService() method.
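That first-request delay is easy to model as a lazy, translate-once cache keyed by page path. Real engines also recompile when the .jsp source file's timestamp changes; that staleness check is omitted from this sketch, and the "compiled page" here is just a stand-in lambda:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class PageCacheSketch {
    interface CompiledPage { String service(String query); }

    static final AtomicInteger compilations = new AtomicInteger();
    static final Map<String, CompiledPage> cache = new ConcurrentHashMap<>();

    static CompiledPage translateAndCompile(String path) {
        compilations.incrementAndGet();          // the expensive, one-time step
        return query -> path + "?" + query;      // stand-in for the generated class
    }

    static String handle(String path, String query) {
        // Translate/compile only on the first request for this path.
        return cache.computeIfAbsent(path, PageCacheSketch::translateAndCompile)
                    .service(query);
    }

    public static void main(String[] args) {
        handle("/ArticleLister.jsp", "month=July");
        handle("/ArticleLister.jsp", "month=June");
        System.out.println(compilations.get());  // 1
    }
}
```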
Invocation of the _jspService() method results in the following:
- The out.print() statements are executed and they insert the fixed-template data in lines 1-11 into the output stream.
- The Java code resulting from lines 18-20 in Listing Three invokes the processRequest() method of articles, which is an instance of ddj.Articles passing request (an implicit object) as an argument.
- Lines 137-139 in Articles.java (available electronically) invoke the getParameter() method of request passing it the argument "month," which is the name of the <select> HTML element in line 21 of Listing Four (ArticleLister.html). If the result is null (meaning no month was selected), an exception is thrown. This exception is caught by the code generated by the JSP Engine for errorpage.jsp (available electronically). This code sends HTML to the output stream resulting in the browser page in Figure 4. If the result is not null, setMonth() is invoked to save the value of the selected month in an instance variable for subsequent retrieval using the getMonth() method. The code resulting from lines 18-20 of Listing Three invokes the getMonth() method of articles and passes the returned month to the getArticles() method, which returns a 2D String array containing a list of the articles and corresponding authors. If the length of the returned array is zero (meaning no data is available), the code generated from NoData.jsp (available electronically) is executed resulting in the browser page in Figure 5. Finally, the for loop generated from lines 37-39, 41, 45, 50, and 54-56 is executed. This loop creates a table, which is displayed in the browser page in Figure 6.
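The three possible outcomes of a request -- error page, no-data page, or article table -- can be condensed into one plain-Java sketch. The article data below is invented for illustration; only the control flow and the alternating colors declared in line 36 of Listing Three are taken from the application:

```java
import java.util.Map;

public class ArticleFlowSketch {
    static final String[] COLORS = {"#ffff99", "#ffcc99"};
    // Placeholder data; the real application reads this from ddj.Articles.
    static final Map<String, String[][]> DATA = Map.of(
        "July", new String[][] {{"Article One", "Author One"},
                                {"Article Two", "Author Two"}});

    static String service(String month) {
        if (month == null)
            return "ERROR: No month was selected";  // the errorpage.jsp path
        String[][] articles = DATA.getOrDefault(month, new String[0][]);
        if (articles.length == 0)
            return "NO DATA for " + month;          // the NoData.jsp path
        StringBuilder html = new StringBuilder("<table>");
        for (int i = 0; i < articles.length; ++i)   // the loop from lines 37-56
            html.append("<tr bgcolor=").append(COLORS[i % 2])
                .append("><td>").append(articles[i][0])
                .append("</td><td>").append(articles[i][1])
                .append("</td></tr>");
        return html.append("</table>").toString();
    }

    public static void main(String[] args) {
        System.out.println(service(null));
        System.out.println(service("March"));
        System.out.println(service("July"));
    }
}
```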
Conclusion
To fully realize the power of JSP technology, imagine if you replaced the ArticleLister class with Enterprise Java Beans that communicated with large databases or other Enterprise resources (including other computers), and replaced the simple list of articles and their authors with the results of complex queries or transactions.
My discussion of JavaServer Pages is by no means exhaustive. Much more detailed information about JSP can be obtained from the JavaServer Pages Specification, available at.
DDJ
Listing One
 1| <!doctype html public "-//w3c//dtd html 4.0 transitional//en">
 2| <html>
 3| <head>
 4| <title>JSP Example, sans-serif"><b><font size="+2">
14| JSP Example 1
15| </font></b></font>
16| <br>
17| <br>
18| <font face = "Arial, Helvetica"><font size="+1">
19| It is now
20| <%=
21| new java.util.GregorianCalendar(new java.util.SimpleTimeZone
22| (-5*60*60*1000,"EDT")).getTime()
23| %> <pre>
24| </font>
25| </body>
26| </html>
Listing Two
 1| <!doctype html public "-//w3c//dtd html 4.0 transitional//en">
 2| <html>
 3| <head>
 4| <title>JSP Example"><b><font size="+2">
14| <center>
15| JSP Example 2
16| </font></b></font><br>
17| <br>
18| <font face = "Arial, Helvetica, sans-serif"><font size="+1">
19| <b><u>Some Info About Your Request:</u></b>
20| </center>
21| <br>
22| Protocol:
23| <%= request.getProtocol() %> <pre>
24| <br>
25| Remote Addr:
26| <%= request.getRemoteAddr() %> <pre>
27| <br>
28| Remote Host:
29| <%= request.getRemoteHost() %> <pre>
30| <br>
31| URL Scheme:
32| <%= request.getScheme() %> <pre>
33| <br>
34| Server Name:
35| <%= request.getServerName() %> <pre>
36| <br>
37| Server Port:
38| <%= request.getServerPort() %> <pre>
39| </font>
40| </body>
41| </html>
Listing Three
 1| <!doctype html public "-//w3c//dtd html 4.0 transitional//en">
 2| <html>
 3| <head>
 4| <title>DDJ Article Lister<| <%@ page
17| <%
18| articles.processRequest(request);
19| String month = articles.getMonth();
20| String[][] articleList = articles.getArticles(month);
21| if (articleList.length == 0) {
22| %>
23| <%@ include
30| <caption><b><font face="Arial, Helvetica"><font size=+1>
31| Dr. Dobb's Articles For The Month Of
32| <%= month %> <pre>
33| 1999
34| </font></font></b>
35| </caption>
36| <%! String[] alternatingColors = {"#ffff99", "#ffcc99"}; %>
37| <%
38| for (int i = 0; i < articleList.length; ++i) {
39| %> <pre>
40| <tr bgcolor=
41| <%=alternatingColors[i % 2] %> <pre>
42| >
43| <td>
44| <b>
45| <%= articleList[i][0] %> <pre>
46| </b>
47| </td>
48| <td>
49| <b>
50| <%= articleList[i][1] %> <pre>
51| </b>
52| </td>
53| </tr>
54| <%
55| }
56| %> <pre>
57| </table>
58| <form method=get
59| <input type=submit
60| </form>
61| </center>
62| <%
63| }
64| %> <pre>
65| </body>
66| </html>
Listing Four
<="#FFFFFF"> <p> <font face="Arial, Helvetica, sans-serif"><b><font size="+2"> <center> DDJ Article Lister </font></b></font><br> <br> <font face = "Arial, Helvetica, sans-serif"><font size="+1"> Select a month from the list below: <form method="post" action="ArticleLister.jsp"> <select name="month" size=4> </select> <br> <br> <br> <input type="submit" name="submit" value="GET ARTICLES"> </form> </center> </body> </html>
Listing Five
<="#ff0000"> <%@ page <input type="submit" value="Try Again"> </form> </CENTER> </body> </html>
Hi, I'm having trouble coming up with a function that scans a word, starting from bit startingBit, until the first zero bit is found. The function is supposed to return the index of the found bit; if the bit at startingBit is already a zero, then startingBit itself is returned, and if no zero bit is found, UINT_MAX is returned.
And here is the main program.
#include <iostream>
#include <iomanip>
#include <cstdlib>
#include <bits.h>
using namespace std;

// Print a number in binary:
class Bin {
public:
    Bin(int num) { n = num; }
    friend ostream& operator<<(ostream& os, const Bin& b)
    {
        uint bit = 1 << (sizeof(int)*CHAR_BIT - 1);
        while (bit) {
            os << ((b.n & bit) ? 1 : 0);
            bit >>= 1;
        }
        return os;
    }
private:
    int n;
};

unsigned int Scan0(unsigned int word, unsigned int startingBit);

int main()
{
    unsigned int i, x;
    while (cin >> x) {
        cout << setw(10) << x << " base 10 = " << Bin(x) << " base 2" << endl;
        for (i = 0; i < sizeof(unsigned int) * CHAR_BIT; ++i)
            cout << "Scan0(x, " << setw(2) << i << ") = "
                 << setw(2) << Scan0(x, i) << endl;
        cout << endl;
    }
    return EXIT_SUCCESS;
}

This is the part I need help on:

unsigned int GetBit(unsigned word, int i)
{
    return (word>>i) & 01;
}

unsigned int Scan0(unsigned int word, unsigned int startingBit)
{
    int counter;
    while (startingBit!=0) {
        if (word>=startingBit) {
            return startingBit;
        }
        else if (!word) {
            return UINT_MAX;
        }
        counter++;
    }
}
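For reference, here is one way the specification above could be met. This is my sketch, not the original poster's code: it scans from startingBit toward the most significant bit and returns the index of the first clear bit.

```cpp
#include <climits>
#include <cassert>

// Scan 'word' upward from 'startingBit' for the first 0 bit.
// Returns the index of that bit (startingBit itself if it is already 0),
// or UINT_MAX if every bit from startingBit upward is 1.
unsigned int Scan0(unsigned int word, unsigned int startingBit)
{
    const unsigned int bits = sizeof(unsigned int) * CHAR_BIT;
    for (unsigned int i = startingBit; i < bits; ++i)
        if (((word >> i) & 1u) == 0u)
            return i;
    return UINT_MAX;
}
```

The key point is that the loop variable, not the word, is what advances; the original attempt never changes startingBit inside the loop.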
Contents
- Introduction
- Applying the Graphics\Graphic.mqh library
- Exchange, comparison and boundary functions
- Sorting methods
- Selection Sort
- Insertion Sort
- Bubble Sort
- Quick Sort
- Merge Sort
- Heap Sort
- Radix MSD and LSD Sorts
- Other sorts
- Methods speed summary table
- All methods on a single screen
- Testing multi-threading
- Editor's note
Introduction
On the web, you can find a number of videos showing various sorting types. For example, here you can find visualization of 24 sorting algorithms. I have taken this video as a basis along with a list of sorting algorithms.
The Graphic.mqh library is responsible for working with graphics in MQL5 and has already been described in a number of articles; for example, its features are shown in detail here. My objective is to describe its application areas and examine the sorting process in detail. Sorting is covered here in general terms, since each sorting type already has at least one dedicated article, while some sorting types are the subject of detailed studies.
Applying the Graphics\Graphic.mqh library
First, we need to include the library.
#include <Graphics\Graphic.mqh>
We will sort the histogram columns by applying the Gswap() and GBool() exchange and comparison functions. The elements being processed (compared, replaced, etc.) are highlighted in color. To achieve this, a separate CCurve type object is assigned to each color. Make them global so that they do not have to be passed into every exchange function as Gswap(int i, int j, CGraphic &G, CCurve &A, CCurve &B, CCurve &C). The CCurve class has no default constructor, so it cannot simply be declared global. Therefore, declare global CCurve type pointers — *CMain.
The color can be set in three different ways. The most visible of them is C'0x00,0x00,0xFF' or C'Blue, Green, Red'. The curve is created using the CurveAdd() of the CGraphic class that has several implementations. For the majority of elements, it is convenient to select CurveAdd(arr, CURVE_HISTOGRAM, "Main") with a single array of values, while for auxiliary ones — CurveAdd(X, Y, CURVE_HISTOGRAM, "Swap") with the X array of arguments and Y array of values, since there will be only two elements. It is convenient to make arrays for auxiliary lines global.
#include <Graphics\Graphic.mqh> #property script_show_inputs input int N =42; CGraphic Graphic; CCurve *CMain; CCurve *CGreen; CCurve *CBlue; CCurve *CRed; CCurve *CViolet; double X[2],Y[2],XZ[2],YZ[2]; //+------------------------------------------------------------------+ //| Script program start function | //+------------------------------------------------------------------+ void OnStart() { //arrays------------------------------ double arr[]; FillArr(arr,N); X[0]=0;X[1]=0; Y[0] =0;Y[1]=0; //------------------------------------- Graphic.Create(0,"G",0,30,30,780,380); CMain =Graphic.CurveAdd(arr,CURVE_HISTOGRAM,"Main"); //index 0 CMain.HistogramWidth(4*50/N); CBlue =Graphic.CurveAdd(X,Y,CURVE_HISTOGRAM,"Pivot"); //index 1 CBlue.Color(C'0xFF,0x00,0x00'); CBlue.HistogramWidth(4*50/N); CRed =Graphic.CurveAdd(X,Y,CURVE_HISTOGRAM,"Swap"); //index 2 CRed.Color(C'0x00,0x00,0xFF'); CRed.HistogramWidth(4*50/N); CGreen =Graphic.CurveAdd(X,Y,CURVE_HISTOGRAM,"Borders"); //index 3 CGreen.Color(C'0x00,0xFF,0x00'); CGreen.HistogramWidth(4*50/N); CViolet =Graphic.CurveAdd(X,Y,CURVE_HISTOGRAM,"Compare"); //index 4 CViolet.Color(C'0xFF,0x00,0xFF'); CViolet.HistogramWidth(4*50/N); Graphic.XAxis().Min(-0.5); Graphic.CurvePlot(4); Graphic.CurvePlot(2); Graphic.CurvePlot(0); //Graphic.CurvePlotAll(); simply display all available curves Graphic.Update(); Sleep(5000); int f =ObjectsDeleteAll(0); } //+------------------------------------------------------------------+ //| | //+------------------------------------------------------------------+ void FillArr(double &arr[],int num) { int x =ArrayResize(arr,num); for(int i=0;i<num;i++) arr[i]=rand()/328+10; }
Graphic.XAxis().Min(-0.5) sets the indent from the Y axis, so that the zero element does not merge with it. HistogramWidth() sets the column width.
The FillArr() function fills the array with random numbers from 10 (so the columns do not merge with the X axis) to 110. The 'arr' array is made local so that the exchange functions have the standard look swap(arr, i, j). The last command, ObjectsDeleteAll(), erases what we have plotted; otherwise, we would have to delete the objects via the menu's Object List (Ctrl+B).
Our preparations are complete. It is time to start sorting.
Exchange, comparison and boundary functions
Let's write the functions for visualizing the exchanges, the comparisons, and the boundaries for the divide-and-conquer sorts. The first exchange function, void Gswap(), is standard, except that it uses the CRed curve to highlight the elements being exchanged.
//+------------------------------------------------------------------+ //| | //+------------------------------------------------------------------+ void Gswap(double &arr[],int i, int j) { X[0]=i;X[1]=j; Y[0] =arr[i];Y[1] =arr[j]; CRed.Update(X,Y); Graphic.Redraw(); Graphic.Update(); Sleep(TmS); double sw = arr[i]; arr[i]=arr[j]; arr[j]=sw; //------------------------- Y[0] =0; Y[1] =0; CRed.Update(X,Y); CMain.Update(arr); Graphic.Redraw(); Graphic.Update(); //------------------------- }
First, the columns are highlighted; after Sleep(TmS), which defines the demonstration speed, they return to the initial state. The comparison functions for "<" and "<=" are written in a similar way as bool GBool(double &arr[], int i, int j) and GBoolEq(double &arr[], int i, int j). They add columns of another color, CViolet.
//+------------------------------------------------------------------+ //| | //+------------------------------------------------------------------+ bool GBool(double &arr[], int i, int j) { X[0]=i;X[1]=j; Y[0] =arr[i];Y[1] =arr[j]; CViolet.Update(X,Y); Graphic.Redraw(); Graphic.Update(); Sleep(TmC); Y[0]=0; Y[1]=0; CViolet.Update(X,Y); Graphic.Redraw(); Graphic.Update(); return arr[i]<arr[j]; }
The function for marking the segment borders; its columns are not erased between steps.
//+------------------------------------------------------------------+ //| | //+------------------------------------------------------------------+ void GBorders(double &a[],int i,int j) { XZ[0]=i;XZ[1]=j; YZ[0]=a[i];YZ[1]=a[j]; CGreen.Update(XZ,YZ); Graphic.Redraw(); Graphic.Update(); }
Now, let's focus our attention on sorting.
Sorting methods
Before proceeding to analyze the methods, I recommend launching the VisualSort application attached below (VisualSort.ex5). It lets you visually observe how each sort works separately. The complete sort code is in the GSort.mqh include file. The file is quite big, so only the main ideas are outlined here.
- The first sort that usually opens the topic is Selection Sort: find the index of the minimum value in the unsorted part of the list and exchange that element with the one in the first unsorted position.
template <typename T>
void Select(T &arr[])
{
   int n = ArraySize(arr);
   for(int j=0; j<n; j++)
   {
      int mid=j;
      for(int i=j+1; i<n; i++)
      {
         if(arr[i]<arr[mid])
         {
            mid=i;
         }
      }
      if(arr[j]>arr[mid]) { swap(arr,j,mid); }
   }
}
Here and further below, the exchange function is the previously described Gswap(), while element comparisons are replaced with GBool(). For example, if(arr[i]<arr[mid]) => if(GBool(arr,i,mid)).
- The next sorting type (often used when playing cards) is Insertion Sort. The elements of the input sequence are examined one at a time, and each newly arrived element is placed in a suitable location among the previously arranged ones. The basic method:
template<typename T>
void Insert(T &arr[])
{
   int n = ArraySize(arr);
   for(int i=1; i<n; i++)
   {
      int j=i;
      while(j>0)
      {
         if(arr[j]<arr[j-1])
         {
            swap(arr,j,j-1);
            j--;
         }
         else j=0;
      }
   }
}
The derivatives of the method are Shell Sort and Binary Insertion Sort. The first compares not only adjacent elements but also elements located at a certain distance from each other; in other words, it is Insertion Sort with preliminary "rough" passes. Binary Insertion applies binary search to find the insertion location. In the demonstration mentioned above (VisualSort.ex5), we can clearly see that Shell searches the array using a gradually decreasing "comb"; in this regard, it has much in common with the Comb method described below.
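The Shell idea is simple enough to sketch in a few lines. The C++ sketch below is mine (the article's code is MQL5); it uses the simplest halving gap sequence rather than an optimized one:

```cpp
#include <vector>
#include <cassert>

// Shell sort: insertion sort performed over progressively smaller gaps.
// The final pass with gap == 1 is a plain insertion sort, but by then the
// array is almost sorted, so few moves remain.
void shellSort(std::vector<double>& a)
{
    int n = static_cast<int>(a.size());
    for (int gap = n / 2; gap > 0; gap /= 2)        // simple halving gap sequence
        for (int i = gap; i < n; ++i) {
            double v = a[i];
            int j = i;
            while (j >= gap && a[j - gap] > v) {    // insertion step at distance 'gap'
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = v;
        }
}
```

Setting the gap sequence to {1} turns this back into plain Insertion Sort, which makes the relationship between the two methods explicit.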
- Bubble Sort and its derivatives — Shake, Gnom, Comb and Odd-Even Sort. The idea behind the Bubble Sort is simple: we go through the entire array from the beginning to the end comparing two neighboring elements. If they are not sorted, we interchange their positions. As a result of each iteration, the largest element will be located at the end of the array. The process is then repeated. In the end, we receive a sorted array. The basic Bubble Sort algorithm:
template<typename T>
void BubbleSort(T &a[])
{
   int n = ArraySize(a);
   for(int i = 0; i < n - 1; i++)
   {
      bool swapped = false;
      for(int j = 0; j < n - i - 1; j++)
      {
         if(a[j] > a[j + 1])
         {
            swap(a,j,j+1);
            swapped = true;
         }
      }
      if(!swapped) break;
   }
}
In Shake Sort, passes are made in both directions allowing us to detect both large and small elements. Gnome Sort is interesting in that it is implemented in a single cycle. Let's have a look at its code:
void GGnomeSort(double &a[])
{
   int n = ArraySize(a);
   int i = 0;
   while(i<n)
   {
      if(i==0 || GBoolEq(a,i-1,i))   // if(i==0||a[i-1]<a[i])
         i++;
      else
      {
         Gswap(a,i,i-1);             // Exchange a[i] and a[i-1]
         i--;
      }
   }
}
If we mark the place of the already sorted elements, we get the Insertion Sort. Comb uses the same idea as Shell: it goes along the array with combs having different steps. The optimal reduction factor is 1.247 ≈ 1 / (1 - e^(-φ)), where e is Euler's number and φ is the golden ratio. On small arrays, Comb and Shell Sorts are similar to the fast sorting methods in terms of speed. Odd-Even Sort passes odd and even element pairs in turns; since it was developed for multi-threaded implementation, it is of no use in our case.
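Since the article gives no Comb listing, here is a minimal C++ sketch of the idea (my code, using the 1.247 shrink factor mentioned above):

```cpp
#include <vector>
#include <utility>
#include <cassert>

// Comb sort: bubble sort with a shrinking gap ("comb"). The gap starts at
// the array length and is divided by ~1.247 on every pass; the final passes
// with gap == 1 behave like bubble sort on an almost-sorted array.
void combSort(std::vector<double>& a)
{
    int n = static_cast<int>(a.size());
    int gap = n;
    bool swapped = true;
    while (gap > 1 || swapped) {
        gap = static_cast<int>(gap / 1.247);   // shrink factor from the text
        if (gap < 1) gap = 1;
        swapped = false;
        for (int i = 0; i + gap < n; ++i)
            if (a[i] > a[i + gap]) {
                std::swap(a[i], a[i + gap]);
                swapped = true;
            }
    }
}
```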
- More complex methods are based on "divide and conquer" principle. The standard examples are Hoare Sort or Quick Sort
The general idea of the algorithm is as follows: we select the pivot element from the array. Any array element can be selected as a pivot one. All other elements are compared with the pivot one and re-arranged within the array so that the array is divided into two continuous segments: "less" than the pivot one and "exceeding" it. If the segment length is greater than one, the same sequence of operations is performed for "less" and "exceeding" segments recursively. The basic idea can be found in the pseudocode:
algorithm quicksort(A, lo, hi) is
    if (lo < hi) {
        p = partition(A, lo, hi);
        quicksort(A, lo, p - 1);
        quicksort(A, p, hi);
    }
The difference in sorting options in this case is reduced to different ways of partitioning the array. In the original version, the pointers move towards each other from the opposite sides. The left pointer finds the element exceeding the pivot one, while the right one looks for the lesser one, and they are interchanged. In another version, both pointers move from left to right. When the first pointer finds the "lesser" element, it moves that element to the second pointer's location. If the array contains many identical elements, the partitioning allocates space for elements equal to the pivot one. Such an arrangement is applied, for example, when it is necessary to categorize employees only by two keys — "M" (male) and "F" (female). The appropriate partitioning is displayed below:
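To illustrate the arrangement with an "equal to pivot" segment, here is a three-way ("fat pivot") partition sketch in C++. This is my illustration, not the article's source code:

```cpp
#include <vector>
#include <utility>
#include <cassert>

// Three-way partition: a single pass splits the segment into
// < pivot, == pivot, > pivot. With only two distinct keys (e.g. "M"/"F"),
// one pass is enough; duplicates never enter the recursive calls.
void sort3way(std::vector<int>& a, int lo, int hi)
{
    if (lo >= hi) return;
    int pivot = a[lo];
    int lt = lo, gt = hi, i = lo;
    while (i <= gt) {
        if (a[i] < pivot)      std::swap(a[lt++], a[i++]);
        else if (a[i] > pivot) std::swap(a[i], a[gt--]);
        else                   ++i;
    }
    sort3way(a, lo, lt - 1);   // recurse only into the strict < and > parts
    sort3way(a, gt + 1, hi);
}
```

On arrays with many identical elements, this variant avoids the quadratic degradation that a plain two-way partition can suffer.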
Let's have a look at QSortLR with Hoare partitioning as an example:
//----------------------QsortLR----------------------------------------+
void GQsortLR(double &arr[],int l,int r)
{
   if(l<r)
   {
      GBorders(arr,l,r);
      int mid = PartitionLR(arr,l,r);
      GQsortLR(arr,l,mid-1);
      GQsortLR(arr,mid+1,r);
   }
}

int PartitionLR(double &arr[],int l,int r)
{
   int i = l-1;
   int j = r;
   for(;;)
   {
      while(GBool(arr,++i,r));
      j--;
      while(GBool(arr,r,j)) { if(j==l) break; j--; }
      if(i>=j) break;
      Gswap(arr,i,j);
   }
   //---------------------------------------------
   Gswap(arr,i,r);
   YZ[0]=0; YZ[1]=0;
   CGreen.Update(XZ,YZ);
   Graphic.Redraw();
   Graphic.Update();
   return i;
}
The array can be divided into 3, 4, 5... parts with several pivot elements. The DualPivot sorting with two pivot elements can also be used.
- Other methods based on the "divide and conquer" principle use merging of already sorted array parts. The array is divided until each part's length equals 1 (trivially sorted), after which the merging function combines the parts, sorting the elements along the way. The general principle:
void GMergesort(double &a[], int l, int r)
{
   if(l>=r) return;   // a segment of one element is already sorted
   int m = (r+l)/2;
   GMergesort(a, l, m);
   GMergesort(a, m+1, r);
   Merge(a, l, m, r);
}
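The Merge() step itself is not shown in the article. A C++ sketch of what it could look like (function names and the buffer approach are mine):

```cpp
#include <vector>
#include <cassert>

// Merge two already-sorted halves a[l..m] and a[m+1..r] through a buffer.
void mergeHalves(std::vector<double>& a, int l, int m, int r)
{
    std::vector<double> buf;
    buf.reserve(r - l + 1);
    int i = l, j = m + 1;
    while (i <= m && j <= r)
        buf.push_back(a[i] <= a[j] ? a[i++] : a[j++]); // stable: ties taken from the left half
    while (i <= m) buf.push_back(a[i++]);
    while (j <= r) buf.push_back(a[j++]);
    for (int k = l; k <= r; ++k)
        a[k] = buf[k - l];
}

// Top-down merge sort driving the merge step above.
void mergeSort(std::vector<double>& a, int l, int r)
{
    if (l >= r) return;
    int m = (l + r) / 2;
    mergeSort(a, l, m);
    mergeSort(a, m + 1, r);
    mergeHalves(a, l, m, r);
}
```

Taking ties from the left half keeps the sort stable, which matters when sorting records by several keys in succession.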
BitonicSort() makes one half of the array ascending and the other descending, then combines them.
- Heap Sort
The general principle is as follows. Form a "heap" by "sifting" the array. Remove the largest element at the top of the heap and move it to the end of the source array, replacing it with the last element from the unsorted part. Build a new heap of size n-1, and so on. Heaps can be built on different principles; for example, Smooth sort uses heaps whose sizes are Leonardo numbers, L(x+2) = L(x+1) + L(x) + 1.
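A C++ sketch of the binary-heap variant described above (my code; the MQL5 version in GSort.mqh may differ in details):

```cpp
#include <vector>
#include <utility>
#include <cassert>

// siftDown restores the max-heap property for the subtree rooted at i,
// considering only the first n elements of the array.
static void siftDown(std::vector<double>& a, int i, int n)
{
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;
        std::swap(a[i], a[largest]);
        i = largest;
    }
}

void heapSort(std::vector<double>& a)
{
    int n = static_cast<int>(a.size());
    for (int i = n / 2 - 1; i >= 0; --i)   // build the heap by sifting the inner nodes
        siftDown(a, i, n);
    for (int i = n - 1; i > 0; --i) {      // pop the maximum, shrink the heap
        std::swap(a[0], a[i]);
        siftDown(a, 0, i);
    }
}
```

Unlike Quick Sort, this always runs in n*log(n), which is exactly the behavior observed in the Light.ex5 demonstration on reverse-sorted input.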
- Radix Sorts. Counting Sort and Radix MSD/LSD Sorts
Counting Sort is a sorting algorithm that uses the range of values in the sorted array (list) to count matching elements. Applying the Counting Sort is reasonable only when the sorted numbers have a range of possible values small enough compared to the set being sorted — for example, a million natural numbers less than 1000. Its principle is applied in the radix sorts as well. Here is the algorithm:
void CountSort(double &a[])
{
   int count[];
   double aux[];
   int k = int(a[int(ArrayMaximum(a))]+1);
   int n = ArraySize(a);
   ArrayResize(count,k);
   ArrayResize(aux,n);
   for(int i=0; i<k; i++) count[i] = 0;
   for(int i=0; i<n; i++) count[int(a[i])]++;              // number of elements equal to i
   for(int i=1; i<k; i++) count[i] = count[i]+count[i-1];  // number of elements not exceeding i
   for(int j=n-1; j>=0; j--) aux[--count[int(a[j])]]=a[j]; // fill the intermediary array
   for(int i=0; i<n; i++) a[i]=aux[i];
}
The Counting Sort ideas are applied in the MSD and LSD sorts, which run in linear time: N*R, where N is the number of elements and R is the number of digits in the number's representation in the selected radix. MSD (most significant digit) sorts by the top digit, while LSD (least significant digit) — by the lowest one. For example, in the binary system, LSD counts how many numbers end in 0 and 1, then places the even numbers (0) in the first half and the odd ones (1) in the second half. MSD, on the contrary, starts with the top digit: in the decimal system, it places the numbers < 100 at the beginning and the numbers > 100 further on, then sorts the segments 0-9, 10-19, ..., 100-109, etc., again. This sorting method applies to integers; however, a quote such as 1.12307 can be made integer: 1.12307 * 100,000 = 112307.
The QuicksortB() binary quick sort is obtained by combining the Quick and Radix sorts. The QSortLR scheme is applied here, although partitioning is performed by the top binary digit rather than by a pivot array element. To do this, the function for extracting the n-th digit, also used in LSD/MSD, is applied:
int digit(double x,int rdx,int pos)   // x - the number, rdx - the radix (2 in our case), pos - the digit index
{
   int mid = int(x/pow(rdx,pos));
   return mid%rdx;
}

First, sorting by the top digit is performed, int d = int(log(a[ArrayMaximum(a)])/log(2)); then the parts are sorted by lower digits recursively. Visually, it looks similar to QuickSortLR.
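The LSD variant described above can be sketched in C++ as one counting-sort pass per digit. This is my illustration for non-negative integers, not the GSort.mqh code:

```cpp
#include <vector>
#include <cassert>

// LSD radix sort: a stable counting-sort pass per digit, from the least
// significant digit upwards. Stability of each pass is what makes the
// whole method correct.
void lsdSort(std::vector<int>& a, int radix = 10)
{
    if (a.empty()) return;
    int maxv = a[0];
    for (int x : a) if (x > maxv) maxv = x;
    std::vector<int> aux(a.size());
    for (long long p = 1; maxv / p > 0; p *= radix) {
        std::vector<int> count(radix + 1, 0);
        for (int x : a)                              // histogram of the current digit
            count[static_cast<int>((x / p) % radix) + 1]++;
        for (int d = 0; d < radix; ++d)              // prefix sums -> start offsets
            count[d + 1] += count[d];
        for (int x : a)                              // stable redistribution
            aux[count[static_cast<int>((x / p) % radix)]++] = x;
        a = aux;
    }
}
```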
- Some specific sorts
Cycle Sort is intended for systems where overwriting data wears the resource. The task here is to sort with the least number of writes (Gswap calls), and cycle sorts are used for that. Roughly speaking, 1 is moved to 3's place, 3 to 2's, and 2 to 1's; each element is moved either 0 or 1 time. All this takes much time: O(n^2). The details on the idea and the method can be found here.
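A C++ sketch of the cycle idea (my code; the vector-based signature is an illustration):

```cpp
#include <vector>
#include <utility>
#include <cassert>

// Cycle sort: each element is written to its final position at most once,
// minimizing the number of writes (useful when writes wear the medium).
void cycleSort(std::vector<int>& a)
{
    int n = static_cast<int>(a.size());
    for (int start = 0; start < n - 1; ++start) {
        int item = a[start];
        int pos = start;
        for (int i = start + 1; i < n; ++i)   // where does 'item' belong?
            if (a[i] < item) ++pos;
        if (pos == start) continue;           // already in place: zero writes
        while (item == a[pos]) ++pos;         // skip duplicates
        std::swap(a[pos], item);
        while (pos != start) {                // rotate the rest of the cycle
            pos = start;
            for (int i = start + 1; i < n; ++i)
                if (a[i] < item) ++pos;
            while (item == a[pos]) ++pos;
            std::swap(a[pos], item);
        }
    }
}
```

The counting pass makes it quadratic in comparisons, but every element is written exactly to its final cell, never to an intermediate one.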
Stooge sort. This is yet another example of a far-fetched and very slow sorting algorithm. It works as follows: if the element value at the end of the list is less than the one at the beginning, exchange them. If the current subset of the list contains 3 or more elements, then recursively call Stooge() for the first 2/3 of the list, recursively call Stooge() for the last 2/3, and then recursively call Stooge() for the first 2/3 again.
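The description maps almost one-to-one onto code. A C++ sketch (mine, for illustration only — the method is deliberately inefficient, roughly O(n^2.71)):

```cpp
#include <vector>
#include <utility>
#include <cassert>

// Stooge sort on a[l..r]: swap the ends if out of order, then recurse on
// the first 2/3, the last 2/3, and the first 2/3 again.
void stoogeSort(std::vector<int>& a, int l, int r)
{
    if (a[r] < a[l]) std::swap(a[l], a[r]);
    if (r - l + 1 > 2) {
        int t = (r - l + 1) / 3;
        stoogeSort(a, l, r - t);   // first 2/3
        stoogeSort(a, l + t, r);   // last 2/3
        stoogeSort(a, l, r - t);   // first 2/3 again
    }
}
```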
Methods speed summary table
All methods on a single screen
Let's plot all the described sorts on a single screen simultaneously. In the source video, the sorts were launched one by one and the results were then combined in a video editor. The trading platform has no built-in video editing features, so there are several possible solutions.
Option 1
Simulate multi-threading: parallelize the sorts so that it is possible to exit a function after each comparison and exchange, pass the queue to another sort, and then re-enter the function at the same place. The absence of the GOTO operator complicates the task. For example, this is how the simplest Selection Sort described at the very beginning looks in that case:
void CSelect(double &arr[]) { static bool ch; static int i,j,mid; int n =ArraySize(arr); switch(mark) { case 0: j=0; mid=j; i=j+1; ch =arr[i]<arr[mid]; if(ch) mid =i; mark =1; return; break; case 1: for(i++;i<n;i++) { ch =arr[i]<arr[mid]; mark =1; if(ch) mid=i; return; } ch =arr[j]>arr[mid]; mark=2; return; break; case 2: if(ch) { swap(arr,j,mid); mark=3; return; } for(j++;j<n;j++) { mid=j; for(i=j;i<n;i++) { ch =arr[i]<arr[mid]; if(ch) mid=i; mark =1; return; } ch =arr[j]>arr[mid]; mark =2; return; } break; case 3: for(j++;j<n;j++) { mid=j; for(i=j;i<n;i++) { ch =arr[i]<arr[mid]; if(ch) mid=i; mark =1; return; } ch =arr[j]>arr[mid]; mark=2; return; } break; } mark=10; }
The solution is quite viable. At such a launch, the array is sorted:
while(mark !=10) { CSelectSort(arr); count++; }
The 'count' variable shows the total number of comparisons and exchanges. At the same time, the code has become three to four times bigger — and this is the simplest sort. There are also sorts consisting of several functions, recursive ones, etc.
Option 2
There is a simpler solution. The MQL5 terminal supports the concept of multi-threading: to use it, a custom indicator is called for each sort, and each thread's indicator is attached to a separate currency pair. The Market Watch window should have enough instruments for all the sorts.
n = iCustom(SymbolName(n,0),0,"IndcatorSort",...,SymbolName(n,0),Sort1,N);
Here, SymbolName(n,0) provides a separate instrument for each created thread; the remaining parameters are passed to the indicator. The graphical object name can conveniently be set to SymbolName(n,0), since one chart can only have one CGraphic class object with a given name. The sorting method, the number of array elements and the size of the CGraphic itself are not shown here.
The sorting function selection and other additional actions related to visual display (for example, adding the sort name to the axes) take place in the indicator itself.
switch(SortName) { case 0: Graphic.XAxis().Name("Selection"); CMain.Name("Selection");Select(arr); break; case 1: Graphic.XAxis().Name("Insertion"); CMain.Name("Insertion");Insert(arr);break; etc............................................. }
The creation of graphical objects takes a certain amount of time. Therefore, the global variables have been added to make sorts work simultaneously. NAME is a global variable with the instrument name with an initial value of 0. After creating all the necessary objects, it gets the value of 1, while the value of 2 is assigned after the sorting is complete. In this way, you are able to track the sort start and end. To do this, the timer is launched in the EA:
void OnTimer() { double x =1.0; double y=0.0; static int start =0; for(int i=0;i<4;i++) { string str; str = SymbolName(i,0); x =x*GlobalVariableGet(str); y=y+GlobalVariableGet(str); } if(x&&start==0) { start=1; GlobalVariableSet("ALL",1); PlaySound("Sort.wav"); } if(y==8) {PlaySound("success.wav"); EventKillTimer();} }
Here the variable of х catches the beginning, while у — the end of the process.
At the beginning of the process, the sound from the original video is played. To work with the source files, it should be located in the MetaTrader/Sounds folder together with the other *.wav system sounds. If it is located elsewhere, specify the path from the MQL5 folder. Upon completion, the success.wav file is played.
So, let's create the EA and the indicator of the following type. EA:
enum SortMethod { Selection, Insertion, ..........Sorting methods......... }; input int Xscale =475; //Chart scale input int N=64; //Number of elements input SortMethod Sort1; //Method ..........Various inputs................. int OnInit() { //--- Set the global variables, launch the timer, etc. for(int i=0;i<4;i++) { SymbolSelect(SymbolName(i,0),1); GlobalVariableSet(SymbolName(i,0),0);} ChartSetInteger(0,CHART_SHOW,0); EventSetTimer(1); GlobalVariableSet("ALL",0); //.......................Open a separate indicator thread for each sort......... x=0*Xscale-Xscale*2*(0/2);//row with the length of 2 y=(0/2)*Yscale+1; SymbolSelect(SymbolName(0,0),1); // Without it, an error may occur on some instruments S1 = iCustom(SymbolName(0,0),0,"Sort1",0,0,x,y,x+Xscale,y+Yscale,SymbolName(0,0),Sort1,N); return(0); } //+------------------------------------------------------------------+ //| Expert deinitialization function | //+------------------------------------------------------------------+ void OnDeinit(const int reason) { ChartSetInteger(0,CHART_SHOW,1); EventKillTimer(); int i =ObjectsDeleteAll(0); PlaySound("ok.wav"); GlobalVariableSet("ALL",0); IndicatorRelease(Sort1); .......All that is to be removed...... //+------------------------------------------------------------------+ //| Expert tick function | //+------------------------------------------------------------------+ void OnTick() { ...Empty... } //+------------------------------------------------------------------+ void OnTimer() { ....Described above.... }
The appropriate indicator:
#include <GSort.mqh> //Include all previously written sorts //+------------------------------------------------------------------+ //| Custom indicator initialization function | //+------------------------------------------------------------------+ ..........Inputs block..................................... input int SortName; //method int N=30; //number ...................................................................... int OnInit() { double arr[]; FillArr(arr,N); //fill the array with random numbers GlobalVariableSet(NAME,0); //set the global variables Graphic.Create(0,NAME,0,XX1,YY1,XX2,YY2); //NAME — currency pair Graphic.IndentLeft(-30); Graphic.HistoryNameWidth(0); CMain =Graphic.CurveAdd(arr,CURVE_HISTOGRAM,"Main"); //index 0 CMain.HistogramWidth(2); CRed =Graphic.CurveAdd(X,Y,CURVE_HISTOGRAM,"Swap"); //index 1 CRed.Color(C'0x00,0x00,0xFF'); CRed.HistogramWidth(width); ......Various graphical parametes.......................... Graphic.CurvePlotAll(); Graphic.Update(); GlobalVariableSet(NAME,1); while(!GlobalVariableGet("ALL")); //Wait till all graphic objects are created switch(SortName) { case 0: Graphic.XAxis().Name("Selection"); CMain.Name("Selection");GSelect(arr); break; .....List of sorts.............................................. } GlobalVariableSet(NAME,2); return INIT_SUCCEEDED; } //+------------------------------------------------------------------+ //| Custom indicator iteration function | //+------------------------------------------------------------------+ int OnCalculate(...) { ..............Empty........................ } //+------------------------------------------------------------------+ void OnDeinit(const int reason) { Graphic.Destroy(); delete CMain; delete CRed; delete CViolet; delete CGreen; ObjectDelete(0,NAME); }
Testing multi-threading
Light.ex5 and Sort24.ex5 are ready-to-use programs. They have been copied via the resources and do not require anything else. When working with source codes, they should be installed in the corresponding folders of indicators and sounds.
Sort24.ex5 with 24 sorts is unstable and works unevenly on my ordinary single-core PC, so I have to close all unnecessary programs and not touch anything while it runs. Each sort contains five graphical objects — four curves and a canvas — all continuously redrawn: twenty-four threads create 120 (!) individual, continuously changing graphical objects. This is a mere demonstration; I have not worked with it further.
The working and quite stable version, Light.ex5, displays 4 sorts simultaneously. Here you can select the number of elements and the sorting method in each window, as well as the way the array is shuffled: random, upward (already sorted), downward (sorted in reverse order), or an array with many identical elements (a step array). For example, Quick Sort degrades to O(n^2) on an array sorted in reverse order, while Heap Sort always runs in n*log(n). Unfortunately, the Sleep() function is not supported in indicators, so the speed depends only on the system. Besides, the amount of resources allocated to each thread is arbitrary: if the same sort is launched in every window, they all reach the finish at different times.
Summary:
- Single-threaded VisualSort is 100% working
- Four-threaded Light is stable
- Twenty four-threaded Sort24 is unstable
We could follow the first option simulating multi-threading. In that case, we would be able to control the process. The Sleep() functions would work, each sorting would receive a strictly equal time, etc. But the very concept of MQL5 multi-threaded programming would be lost in that case.
Final version:
The list of attached files
- VisualSort.ex5 - each sort presented individually
- Programs(Sort24.ex5, Light.ex5) - ready-made applications
- Audio files
- Codes. Program source codes - sorts, indicators, include files, EAs.
Editor's note
The author of the article implemented an interesting version of multi-threaded sorting using simultaneous launch of multiple indicators. This architecture loads the PC heavily, so we have slightly modified the author's source codes, namely:
- disabled calling for chart re-drawing from each indicator in the auxiliary.mq file
//+------------------------------------------------------------------+ //| | //+------------------------------------------------------------------+ void GBorders(double &a[],int i,int j) { XZ[0]=i;XZ[1]=j; YZ[0]=a[i]+2;YZ[1]=a[j]+5; CGreen.Update(XZ,YZ); Graphic.Redraw(false); Graphic.Update(false); }
- added the randomize() function that prepares the same random data in the array for all indicators
- added the external parameters to the IndSort2.mq5 file for defining the array size with random values (it has been rigidly equal to 24 in the author's version) and the initializing 'sid' number for data generation
input uint sid=0; // initialization of randomizer
int N=64;         // number of bars on histogram
- added the timer for drawing the chart using ChartRedraw() to the Sort24.mq5 file
//+------------------------------------------------------------------+
//| Timer function                                                   |
//+------------------------------------------------------------------+
void OnTimer()
  {
   double x=1.0;
   double y=0.0;
   static int start=0;
//--- check each indicator's initialization flag in the loop
   for(int i=0;i<methods;i++)
     {
      string str;
      str=SymbolName(i,0);
      x=x*GlobalVariableGet(str);
      y=y+GlobalVariableGet(str);
     }
//--- all indicators are successfully initialized
   if(x && start==0)
     {
      start=1;
      GlobalVariableSet("ALL",1);
      PlaySound("Sort.wav");
     }
//--- all sorts are over
   if(y==methods*2)
     {
      PlaySound("success.wav");
      EventKillTimer();
     }
//--- update the sort charts
   ChartRedraw();
  }
- moved the audio files to the <>\MQL5\Sounds folders, so that they can be included as a resource
#resource "\\Sounds\\Sort.wav";
#resource "\\Sounds\\success.wav";
#resource "\\Indicators\\Sort\\IndSort2.ex5"
The compilation-ready structure is attached in the moderators_edition.zip archive. Simply unpack it and copy to your terminal. If your PC is not too powerful, we recommend setting methods=6 or methods=12 in the inputs.
Translated from Russian by MetaQuotes Software Corp.
Original article: https://www.mql5.com/en/articles/3118
Interface and Inheritance in Java: Inheritance
Interface and Inheritance In Java
- Interface and Inheritance in Java: Interface
- Interface and Inheritance in Java: Inheritance
When a class extends another class, it's called inheritance. The class that extends is called the sub class, while the class that is extended is called the super class. Any class in Java that does not extend any other class implicitly extends the Object class.
In other words, every class in Java directly or indirectly inherits the Object class.
By means of inheritance a class gets all the public and protected properties and methods of the super class, no matter which package the sub class is present in. If the sub class is present in the same package as the super class, then it gets the package-private properties and methods too. Once the sub class inherits the properties and methods of the super class, it can treat them as if it had defined them itself.
By using inheritance you can reuse existing code. If you have an already written class (but no source) and it lacks some features then you don’t have to write everything from scratch. Just extend the class and add a new method that satisfies your needs.
Note:
Private methods and properties are not inherited. However, if the super class has a private variable and a public method that uses the variable then the variable is made available inside the method in the sub class. You should also note that constructors are never inherited.
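To make the note concrete, here is a small sketch (Account and SavingsAccount are invented names for illustration): the private field is not inherited, but the inherited public method still reads it correctly.

```java
class Account {
    private int balance = 100;   // private member: NOT inherited by sub classes

    public int getBalance() {    // public method: inherited
        return balance;          // the private field is still reachable here
    }
}

class SavingsAccount extends Account {
    // No direct access to 'balance' in this class (referring to it here
    // would not compile), but the inherited getBalance() still reads it.
}
```

Calling `new SavingsAccount().getBalance()` returns 100, even though SavingsAccount itself cannot touch the private field.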
How to Extend
Use extends keyword to inherit the super class-
class A{
   //properties and methods of A
}
class B extends A {
}
Note: A class can inherit only one class. Multiple inheritance is not supported in Java.
Method Overriding And Hiding
When a sub class defines a method that has same signature and return type (or compatible with return type of super class method) it is called method overriding.
Example:
class A{
   int x;
   public void printIt(){
      System.out.println("method in class A");
   }
}
class B extends A{
   public void printIt(){
      System.out.println("method in class B");
   }
}
class Test{
   public static void main(String[] args){
      A a=new B();
      a.printIt(); // prints "method in class B" without quotes
   }
}
Here class B extends class A and overrides the method printIt(). So, when we run the above code, the method that gets called is printIt() of class B. By overriding printIt() defined in class A, class B can provide a different implementation for it. Still, it can access the super class version of the method by using the super keyword.
Example:
class B extends A{
   public void printIt(){
      super.printIt();
      System.out.println("method in class B");
   }
}
Note:
In the previous example we have used a reference of type A and an object of B. But whose printIt() method will be called is not decided at compilation time. Java waits till the runtime of the program and checks which object the reference is pointing to. In this case the object is of class B. So, it's B's printIt() method which gets executed. This is called dynamic method dispatch or virtual method invocation.
Now let's assume class A has a static method printStatic, and class B, which is a sub class of A, defines a static method having the same signature as that of A's printStatic. This is a case of method hiding.
Example:
class A{
   public static void printStatic(){
      System.out.println("In A");
   }
}
class B extends A{
   public static void printStatic(){
      System.out.println("In B");
   }
}
class Test{
   public static void main(String[] args){
      A a=new B();
      a.printStatic(); // prints "In A" without quotes
      //We can also call it like this: A.printStatic()
   }
}
In this case, at compilation time Java will look at the reference type and not the object being pointed to. Here, the reference type is A. So, printStatic() of class A is executed.
Casting Objects
Let’s take the example where class A is super class and class B is the sub class. We already know that it is possible to create an object of class B and assign it to a reference of type A. But by doing this you will be able to call the methods that are defined in class A only. In order to call the methods that are defined by class B you need to cast the reference.
Example:
A a=new B();
a.methodSpecificToB(); // illegal
B b=(B)a;
b.methodSpecificToB(); // legal
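A hedged sketch of safe downcasting (Animal, Dog and CastDemo are hypothetical names): checking the runtime type with instanceof before the cast avoids a ClassCastException when the reference does not actually point to a sub class object.

```java
class Animal { }

class Dog extends Animal {
    String bark() { return "woof"; }
}

class CastDemo {
    static String describe(Animal a) {
        if (a instanceof Dog) {   // check the runtime type before casting
            Dog d = (Dog) a;      // safe downcast: no ClassCastException possible here
            return d.bark();
        }
        return "generic animal";
    }
}
```

describe(new Dog()) reaches the Dog-specific method, while describe(new Animal()) falls through to the generic branch instead of crashing.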
Constructor Chaining
When we instantiate a sub class, the super class constructor also runs. It happens by means of a call to super(). When we don't explicitly call super() inside a sub class constructor, the compiler implicitly puts super() as the first statement in each of the overloaded constructors (like methods, constructors can also be overloaded), assuming that a call to this() is not present in the constructor.
If you don't define a constructor for your class, the compiler creates a default (no-arg) constructor and places a call to super() in it. The only condition is that your super class should have a no-arg constructor, or else it will produce a compile-time error. If your super class does not have a no-arg constructor, then you should call super() with appropriate parameters from your sub class constructor.
The concept will be clear from the following example:
class A{
   A(){
      System.out.println("Constructor of A running");
   }
}
class B extends A{
   B(){
      //a call to super() is placed here
      System.out.println("Constructor of B running");
   }
}
public class Test{
   public static void main(String[] args){
      new B();
   }
}
Output
Constructor of A running
Constructor of B running
Note: The sub class constructor is called first, but it’s the super class constructor that finishes executing first.
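As a small illustration of the argument-passing case (Vehicle and Car are invented names): when the super class defines only a constructor with arguments, the sub class constructor must call super(...) explicitly, or the code will not compile.

```java
class Vehicle {
    final String name;
    Vehicle(String name) {      // only constructor: no no-arg version exists
        this.name = name;
    }
}

class Car extends Vehicle {
    Car() {
        // the compiler cannot insert a plain super() here, because
        // Vehicle has no no-arg constructor, so we must call it ourselves
        super("car");
    }
}
```

new Car() runs Vehicle's constructor first (setting the inherited field), then finishes Car's own body, matching the chaining order described above.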
Summary
- A class can extend only one class.
- Every class is a sub class of Object directly or indirectly.
- A super class reference can refer to a sub class object.
- If reference is super class and object is of sub class then calling an instance method on it will result in execution of the method defined in sub class.
- An overridden method has same signature and return type (or compatible with the return type) as that of super class method.
- We can hide a static method defined in super class by defining a static method having same signature or return type (or compatible with the return type) as that of super class method.
- When a sub class constructor runs the super class constructor also runs. This is called constructor chaining.
Generators in JavaScript - What, Why and How - FunFunFunction #34
Generators are (kind of) pausable functions in JavaScript. Another word for them is co-routines. They are used (among other things) to manage async operations, and play very well with promises.
Resources:
► Recursion in JavaScript
► Promises in JavaScript
► ES6 JavaScript features Playlist (videos like this one)
► Joakim Karud (Background music in episode)
Objectives
When I started this thread, I had two objectives in mind:
- Having error differentiation
- Avoiding an if then else of doom in a generic catcher
I have now come up with two radically distinct solutions, which I now post here, for future reference.
Solution 1: Generic error handler with Errors Object
This solution is based on the solution from @Marc Rohloff, however, instead of having an array of functions and looping through each one, I have an object with all the errors.
This approach is better because it is faster, and removes the need for the if validation, meaning you actually do less logic:
const errorHandlers = {
  ObjectRepeated: function(error) {
    return { code: 400, error };
  },
  SomethingElse: function(error) {
    return { code: 499, error };
  }
};

Survey.findOne({ _id: "bananasId" })
  .then(doc => {
    //we dont want to add this object if we already have it
    if (doc !== null)
      throw { reason: "ObjectRepeated", error: "Object could not be inserted because it already exists." };
    //saving empty object for demonstration purposes
    return new Survey({}).save();
  })
  .then(() => console.log("Object saved with success!"))
  .catch(error => {
    respondToError(error);
  });

const respondToError = error => {
  //look up the handler first, so an unknown reason falls through to the generic case
  const handler = errorHandlers[error.reason];
  if (handler !== undefined) {
    const errorObj = handler(error);
    console.log(`Failed with ${errorObj.code} and reason ${error.reason}: ${JSON.stringify(errorObj)}`);
  } else {
    //send default error Obj, server 500
    console.log(`Generic fail message ${JSON.stringify(error)}`);
  }
};
This solution achieves:
- Partial error differentiation (I will explain why)
- Avoids an if then else of doom.
This solution only has partial error differentiation. The reason for this is that you can only differentiate errors that you specifically build, via the throw { reason: "reasonHere", error: "errorHere" } mechanism.
In this example, you would be able to know if the document already exists, but if there is an error saving the said document (lets say, a validation one) then it would be treated as "Generic" error and thrown as a 500.
To achieve full error differentiation with this, you would have to use the nested Promise anti pattern like the following:
.then(doc => {
  //we dont want to add this object if we already have it
  if (doc !== null)
    throw { reason: "ObjectRepeated", error: "Object could not be inserted because it already exists." };
  //saving empty object for demonstration purposes
  return new Survey({}).save()
    .then(() => { console.log("great success!"); })
    .catch(error => { throw { reason: "SomethingElse", error }; });
})
It would work... But I see it as a best practice to avoid anti-patterns.
Solution 2: Using ES6 Generators via co
This solution uses generators via the library co. With a syntax similar to async/await (which is meant to replace this pattern in the near future), this feature allows you to write asynchronous code that reads like synchronous code (well, almost).
To use it, you first need to install co, or something similar like ogen. I pretty much prefer co, so that is what I will be using here.
const requestHandler = function*() {
  const survey = yield Survey.findOne({ _id: "bananasId" });
  if (survey !== null) {
    console.log("use HTTP PUT instead!");
    return;
  }
  try {
    //saving empty object for demonstration purposes
    yield (new Survey({}).save());
    console.log("Saved Successfully!");
    return;
  } catch (error) {
    console.log(`Failed to save with error: ${error}`);
    return;
  }
};

co(requestHandler)
  .then(() => { console.log("finished!"); })
  .catch(console.log);
The generator function requestHandler will yield all Promises to the library, which will resolve them and either return or throw accordingly.
Using this strategy, you effectively code as if you were writing synchronous code (except for the use of yield).
I personally prefer this strategy because:
- Your code is easy to read and it looks synchronous (while still have the advantages of asynchronous code).
- You do not have to build and throw error objects everywhere; you can simply send the message immediately.
- And, you can BREAK the code flow via return. This is not possible in a promise chain, as there you have to force a throw (many times a meaningless one) and catch it to stop executing.
The generator function will only be executed once passed into the library co, which then returns a Promise stating whether the execution was successful or not.
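To make the mechanics less magical, here is a minimal sketch of what a co-style runner does internally (an illustration of the idea, not co's actual source): it pulls each yielded promise out of the generator, waits for it, and feeds the resolved value (or the error) back in.

```javascript
function run(generatorFn) {
  const it = generatorFn();
  return new Promise((resolve, reject) => {
    function step(result) {
      if (result.done) {
        resolve(result.value);            // generator returned: settle the promise
        return;
      }
      // Treat every yielded value as a promise, wait for it, then resume.
      Promise.resolve(result.value).then(
        value => {
          try { step(it.next(value)); }   // feed the resolved value back in
          catch (e) { reject(e); }
        },
        error => {
          try { step(it.throw(error)); }  // let try/catch inside the generator see it
          catch (e) { reject(e); }        // not caught inside: fail the outer promise
        }
      );
    }
    try { step(it.next()); }
    catch (e) { reject(e); }
  });
}
```

Calling run(function*() { ... }) returns a Promise that resolves with the generator's return value, which is essentially the contract co provides.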
This solution achieves:
- error differentiation
- avoids if then else hell and generalized catchers (although you will use try/catch in your code, and you still have access to a generalized catcher if you need one).
Using generators is, in my opinion, more flexible and makes for easier to read code. Not all cases are cases for generator usage (like mpj suggests in the video) but in this specific case, I believe it to be the best option.
Conclusion
Solution 1: good classical approach to the problem, but has issues inherent to promise chaining. You can overcome some of them by nesting promises, but that is an anti pattern and defeats their purpose.
Solution 2: more versatile, but requires a library and knowledge on how generators work. Furthermore, different libraries will have different behaviors, so you should be aware of that.
Well, the promise hasn't been resolved, so what you can do is use async/await functions or JavaScript generators, which will make the client wait till the rating is incremented and the results JSON is sent.
Here's a tutorial on async-await and generators.
By anonymous 2017-09-20
I would not like to read your code for you, but I think with a little help you could understand this code yourself. I guess you need help with the various new syntax being used in the code above. I'll try to note those down so that you can understand all of this code yourself.
This line basically is similar to
where x is a function which takes a generator function as an argument.
Whenever you have an expression of the form (x, y), it will always return the last value. So in the case of (x, y) it will return y. If it's (0, x), it will return x. Thus in the code which you posted, the first line will return (_asyncToGenerator2 || _load_asyncToGenerator()).default.
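The (x, y) behavior described above can be sketched in a couple of lines (the obj example is invented for illustration):

```javascript
// The comma (sequence) operator evaluates its operands left to right,
// and the whole expression takes the value of the LAST operand.
const x = 10;
const pair = (0, x);           // evaluates 0, then x; the result is 10

// Transpilers emit (0, obj.method) to obtain a "bare" function reference:
// the subsequent call then runs without obj bound as `this`.
const obj = {
  name: "obj",
  whoAmI: function () { return this && this.name; }
};
const direct = obj.whoAmI();   // called through obj, so this === obj
const bare = (0, obj.whoAmI);  // just the function, detached from obj
```

So the (0, f)(...) pattern in transpiled output is a deliberate way of calling f without giving it the surrounding object as this.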
You could now translate the code to
This means that above code will return a function which takes a generator as argument
If you need more information on generators, you could go here. The generator function has features like yield. They are pretty useful, especially for handling asynchronous operations. They streamline your code and make it easy to read. To get more information on what yield means, you could go here and here.
You could also see some lines like these in the code.
This is basically the spread operator being used. The spread operator allows an expression to be expanded in places where multiple arguments are expected. You could go here to know more about spread operators.
Hope you will be able to read the above code yourself after you understand the concepts.
Original Thread: https://dev-videos.com/videos/ategZqxHkz4/Generators-in-JavaScript--What-Why-and-How--FunFunFunction-34
Stack in C++ Example | C++ Stack Program is today’s topic. The stack is a data structure that works on the principle of LIFO (Last In, First Out). In a stack, we insert elements from one side, and we remove items from that side only. The stack is a container adapter; container adapters are classes that use an encapsulated object of a specific container class and provide a particular set of member functions to access its elements.
Stack in C++
Content Overview
Stacks are a type of container adaptor with LIFO (Last In First Out) behavior, where a new element is added at one end (the top) and an item is removed from that end only.
In stacks, the elements which are inserted first are taken out of the stack last.
We can use stacks in PDA(Push down Automata).
Header file required to use stack in C++: #include <stack>
With this, we can use the stack STL container.
Different functions associated with stacks
empty()
The empty() function returns whether the stack is empty or not.
Syntax
stack_name.empty()
In this, we don’t pass any parameter, and it returns true if the stack is empty or false otherwise.
Example
stack1 = 1,2,3 stack1.empty();
Output
False
size()
It returns a number of items in the stack.
Syntax
stack_name.size()
In this, we don’t pass any parameter, and it returns the number of elements in the stack container.
Example
stack_1 = 1,2,3,4,5 stack_1.size();
Output
5
top()
It returns a reference to the topmost element of a stack.
Syntax
stack_name.top();
In this, we don’t need to pass any parameter, and it returns a direct reference of the top element.
Example
stack_name.push(5); stack_name.push(6); stack_name.top();
Output
6
push(k)
The push() function is used to insert the elements in the stack.
It adds the element ‘k’ at the top of the stack.
Syntax
stack_name.push(value)
In this, we pass the value as a parameter, and as a result, adding the element to the stack.
Example
stack1.push(77) stack1.push(88)
Output
77, 88
pop()
It deletes the topmost element from the stack.
Syntax
stack_name.pop()
In this, we don’t pass any parameter this function pops the topmost element from the stack.
Example
stack1 = 10,20,30; stack1.pop();
Output
10, 20
Errors and Exceptions
1. Shows error if a parameter is passed.
2. Shows no exception throw guarantee.
C++ Stack Algorithm
In stack related algorithms, the TOP initially point 0, index of elements in the stack starts from 1, and an index of the last element is MAX.
INIT_STACK (STACK, TOP)
Algorithm to initialize a stack using an array.
TOP points to the top-most element of stack.
1) TOP := 0;
2) Exit
The push() operation is used to insert an element into the stack.
PUSH_STACK(STACK, TOP, MAX, ITEM)
Algorithm to push an item into stack.
1) IF TOP = MAX then
      Print "Stack is full";
      Exit;
2) Otherwise
      TOP := TOP + 1;    /* increment TOP */
      STACK(TOP) := ITEM;
3) End of IF
4) Exit
The pop() operation is used to delete the item from the stack, first get an item and then decrease TOP pointer.
POP_STACK(STACK, TOP, ITEM)
Algorithm to pop an element from stack.
1) IF TOP = 0 then
      Print "Stack is empty";
      Exit;
2) Otherwise
      ITEM := STACK(TOP);
      TOP := TOP - 1;
3) End of IF
4) Exit
IS_FULL(STACK, TOP, MAX, STATUS)
Algorithm to check whether stack is full or not.
STATUS contains the result status.
1) IF TOP = MAX then
      STATUS := true;
2) Otherwise
      STATUS := false;
3) End of IF
4) Exit
IS_EMPTY(STACK, TOP, MAX, STATUS)
Algorithm to check whether stack is empty or not.
STATUS contains the result status.
1) IF TOP = 0 then
      STATUS := true;
2) Otherwise
      STATUS := false;
3) End of IF
4) Exit
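The algorithms above can be translated almost line by line into a concrete C++ type (an illustrative sketch, not part of the original article; index 0 means "empty" and elements live at indices 1..MAX, as in the pseudocode):

```cpp
#include <stdexcept>

// A direct translation of the INIT_STACK / PUSH_STACK / POP_STACK pseudocode.
struct ArrayStack {
    static const int MAX = 100;
    int data[MAX + 1];
    int top = 0;                       // TOP := 0  (INIT_STACK)

    bool isFull()  const { return top == MAX; }   // IS_FULL
    bool isEmpty() const { return top == 0; }     // IS_EMPTY

    void push(int item) {              // PUSH_STACK
        if (isFull()) throw std::overflow_error("Stack is full");
        data[++top] = item;            // TOP := TOP + 1; STACK(TOP) := ITEM
    }

    int pop() {                        // POP_STACK
        if (isEmpty()) throw std::underflow_error("Stack is empty");
        return data[top--];            // ITEM := STACK(TOP); TOP := TOP - 1
    }
};
```

Pushing 10 then 20 and popping twice returns 20 first, then 10, which is exactly the LIFO order the algorithms describe.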
C++ Stack Program
Q1- Write a program to insert five elements in the stack and print the top element using top() and print the size of the stack and check if the stack is empty or not.
#include <iostream>
#include <stack>
using namespace std;

int main()
{
    stack<int> stack1; //empty stack of integer type
    stack1.push(100);
    stack1.push(200);
    stack1.push(300);
    stack1.push(400);
    stack1.push(500);
    cout << "The topmost element of the stack is: " << stack1.top() << endl;
    cout << "The size of the stack is = " << stack1.size() << endl;
    if (stack1.empty()) {
        cout << "Stack is empty" << endl;
    }
    else {
        cout << "Stack is not empty" << endl;
    }
}
See the output.
Q2- Write a program to insert 5 elements in a stack and delete 2 elements then print the stack.
#include <iostream>
#include <stack>
using namespace std;

int main()
{
    stack<int> stack1; //empty stack of integer type
    stack1.push(100);
    stack1.push(200);
    stack1.push(300);
    stack1.push(400);
    stack1.push(500);
    stack1.pop();
    stack1.pop();
    while (!stack1.empty()) {
        cout << "Element = " << stack1.top() << endl;
        stack1.pop();
    }
}
See the output.
Applications of Stack
- Conversion of polish notations
There are three types of notations:
1) Infix notation – An Operator is between the operands: x + y
2) Prefix notation – An Operator is before the operands: + xy
3) Postfix notation – An Operator is after the operands: xy +
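As a sketch of why stacks fit postfix notation, here is a small evaluator for single-digit postfix expressions (illustrative; the function name is made up): operands are pushed, and each operator pops its two operands and pushes the result.

```cpp
#include <stack>
#include <string>
#include <cctype>

// Evaluate a postfix expression containing single-digit operands
// and the operators + - * /.
int evalPostfix(const std::string& expr) {
    std::stack<int> st;
    for (char c : expr) {
        if (std::isdigit(static_cast<unsigned char>(c))) {
            st.push(c - '0');                 // operand: push its value
        } else {
            int right = st.top(); st.pop();   // operands come off in reverse order
            int left  = st.top(); st.pop();
            switch (c) {
                case '+': st.push(left + right); break;
                case '-': st.push(left - right); break;
                case '*': st.push(left * right); break;
                case '/': st.push(left / right); break;
            }
        }
    }
    return st.top();                          // the final result remains on top
}
```

For example, "23*4+" evaluates as (2*3)+4 = 10, with the stack holding the intermediate results at every step.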
- To reverse a string
A string can be reversed by using a stack. The characters of the string are pushed onto the stack until the end of the string is reached. The characters are then popped and displayed. Since the last character of the string is pushed last, it will be popped and printed first.
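The idea can be sketched with std::stack in a few lines (the function name is made up for illustration):

```cpp
#include <stack>
#include <string>

// Reversing a string with a stack: push every character, then pop them all.
// The last character pushed is the first one popped (LIFO).
std::string reverseWithStack(const std::string& input) {
    std::stack<char> st;
    for (char c : input) st.push(c);

    std::string reversed;
    while (!st.empty()) {
        reversed += st.top();   // top() is the most recently pushed character
        st.pop();
    }
    return reversed;
}
```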
- When a function (sub-program) is called
When functions are called, the function that was called last will be completed first. This is a property of the stack. There is a memory area specifically reserved for the call stack.
Conclusion
Stack is an abstract data structure that contains a collection of elements.
Stack implements the LIFO mechanism, i.e. the element that is pushed at the end is popped out first.
Finally, Stack in C++ Example | C++ Stack Program And Algorithm is over.
Recommended Posts
Priority Queue in C++ Example
C++ Queue Standard Template Library
How can i make the extjs 3.3 to work in Aptana?
I downloaded the ext air package, but everything I made (like a simple window) shows me some error. And I tried the 3.3.0 jar file.
But if i use the 2.2 jar everything goes fine.
What am i doing wrong?
Thanks in advance
Hi,
Aptana has nothing to do with working or not working AIR applications, it's just an IDE.
How does your code look like?
What AIR version do you use?
If 2.5 (the current AIR version, which came out a few days ago), change the application.xml file according to the new specs. In particular, change the namespace to 2.5, rename the "version" tag to "versionNumber", and make sure it has the format <0-999>[.<0-999>][.<0-999>] (maximum depth of 3, at least one part needed, no characters allowed).
And last but not least, consider using extjs 3.3.0 when using ext-air 3.3.0. The air adapter may work with prior extjs versions, but that is not intended, not tested and not guaranteed.
Similar Threads:
- Where to obtain the extJS plugin for Aptana (by dcipher in forum Ext 3.x: Help & Discussion; Replies: 1; Last Post: 5 Nov 2009, 2:02 PM)
- Aptana + AIR + ExtJS-2.0.2 (by prologic in forum Ext.air for Adobe AIR; Replies: 4; Last Post: 20 Apr 2008, 5:35 AM)
- This crap doesn't work in aptana!! (by delcom5 in forum Ext 2.x: Help & Discussion; Replies: 6; Last Post: 4 Mar 2008, 4:18 PM)
- Aptana 1.0.2 out. EXT 2.0 is out. current Aptana EXT release is 1.1 (by PaulyWolly in forum Ext 2.x: Help & Discussion; Replies: 2; Last Post: 2 Jan 2008, 2:23 AM)
- aptana and extjs 1.0 (by alien3d in forum Community Discussion; Replies: 0; Last Post: 12 Nov 2007, 6:47 PM)
Up to [cvs.NetBSD.org] / src / lib / libc / gen
Request diff between arbitrary revisions
Default branch: MAIN
Revision 1.16.44.1 / (download) - annotate - [select for diffs], Tue Apr 17 00:05:18 2012 UTC (4 years, 9 months ago) by yamt
Branch: yamt-pagecache
CVS Tags: yamt-pagecache-tag8
Changes since 1.16: +3 -5 lines
Diff to previous 1.16 (colored) next main 1.17 (colored)
sync with head
Revision 1.17 / (download) - annotate - [select for diffs], Tue Mar 20 16:36:05 2012 UTC (4 years, 9 months ago)
Changes since 1.16: +3 -5 lines
Diff to previous 1.16 (colored)
Use C89 definitions. Remove use of __P
Revision 1.13.2.2 / (download) - annotate - [select for diffs], Wed Mar 1 17:12:51 2006 UTC (10 years, 10 months ago)
Changes since 1.13.2.1: +3 -5 lines
Diff to previous 1.13.2.1 (colored) to branchpoint 1.13 (colored) next main 1.14 (colored).
Revision 1.16 / (download) - annotate - [select for diffs], Tue Sep 13 01:44:09 2005 UTC (11 years, 4 months ago)
Changes since 1.15: +9 -15 lines
Diff to previous 1.15 (colored)
compat core reorg.
Revision 1.15 / (download) - annotate - [select for diffs], Tue Apr 12 21:36:46 2005 UTC (11 years, 9 months ago) by drochner
Branch: MAIN
Changes since 1.14: +5 -7 lines
Diff to previous 1.14 (colored)
getmntinfo() if a compatibility function, so there is no point in hiding references to the compatibility getfsstat() The real problem behind PR lib/29919 was a stale weak_alias, so back out the workaround.
Revision 1.13.2.1 / (download) - annotate - [select for diffs], Fri Apr 8 13:38:11 2005 UTC (11 years, 9 months ago)
Changes since 1.13: +7 -5 lines
Diff to previous 1.13 (colored)
Pull up revision 1.14 (requested by bouyer in ticket #125): PR/29919: Evaldo Gardenali: getmntinfo() calling deprecated function getfsstat() Fixed by defining an _getfsstat() internal function and calling that instead.
Revision 1.14 / (download) - annotate - [select for diffs], Thu Apr 7 16:24:18 2005 UTC (11 years, 9 months ago) by christos
Branch: MAIN
Changes since 1.13: +7 -5 lines
Diff to previous 1.13 (colored)
PR/29919: Evaldo Gardenali: getmntinfo() calling deprecated function getfsstat() Fixed by defining an _getfsstat() internal function and calling that instead.
Revision 1.13 / (download) - annotate - [select for diffs], Wed Apr 21 01:05:32 2004 UTC (12 years, 8 months ago) by christos
Branch: MAIN
CVS Tags: netbsd-3-base
Branch point for: netbsd-3
Changes since 1.12: +12 -10 lines
Diff to previous 1.12 (colored)
Replace the statfs() family of system calls with statvfs(). Retain binary compatibility.
Revision 1.12 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:50 2003 UTC (13 years, 5 months ago)
Revision 1.10 / (download) - annotate - [select for diffs], Mon Sep 20 04:39:01 1999 UTC
Changes since 1.9: +2 -8 lines
Diff to previous 1.9 (colored)
back out the #ifdef _DIAGNOSTIC argument checks; too many people complained. _DIAGASSERT() is still retained.
Revision 1.9 / (download) - annotate - [select for diffs], Thu Sep 16 11:44:59 1999 UTC (17 years, 4 months ago) by lukem
Branch: MAIN
Revision 1.8 / (download) - annotate - [select for diffs], Thu Feb 26 03:08:13 1998 UTC (18 years, 11 months ago)
Changes since 1.7: +8 -6 lines
Diff to previous 1.7 (colored)
trivial changes to reduce lint complaints
Revision 1.7 / (download) - annotate - [select for diffs], Mon Jul 21 14:07:09 1997 UTC
Revision 1.6 / (download) - annotate - [select for diffs], Sun Jul 13 19:45:58 1997 UTC (19 years, 6 months ago) by christos
Branch: MAIN
Changes since 1.5: +3 -2 lines
Diff to previous 1.5 (colored)
Fix RCSID's
Revision 1.5.4.1 / (download) - annotate - [select for diffs], Thu Sep 19 20:02:56 1996 UTC (20 years, 4 months ago) by jtc
Branch: ivory_soap2
Changes since 1.5: +7 -2 lines
Diff to previous 1.5 (colored) next main 1.6 (colored)
snapshot namespace cleanup: gen
Revision 1.5 / (download) - annotate - [select for diffs], Mon Feb 27 04:12:53 1995 UTC (21 years, 10 months ago)
Changes since 1.4: +10 -4 lines
Diff to previous 1.4 (colored)
update from Lite, with local changes. fix Ids, etc.
Revision 1.1.1.2 / (download) - annotate - [select for diffs] (vendor branch), Sat Feb 25 09:11:54 1995 UTC (21 years, 10 months ago) by cgd
Branch: WFJ-920714, CSRG
CVS Tags: lite-2, lite-1
Changes since 1.1.1.1: +5 -4 lines
Diff to previous 1.1.1.1 (colored)
from lite, with minor name rearrangement to fit.
Revision 1.4 / (download) - annotate - [select for diffs], Sun Jun 12 22:52:03 1994 UTC (22 years, 7 months ago)
fix up includes for new FS code
Revision 1.3 / (download) - annotate - [select for diffs], Thu Aug 26 00:44:40
Fixes for the SQLAlchemy 0.4 and 0.5 Oracle driver
Project Description
collective.saoraclefixes
Fix SQLAlchemy 0.4 and 0.5 Oracle quoting and offset/limit problems.
This package monkeypatches the SQLAlchemy Oracle driver with backports from the SQLAlchemy 0.6 branch. Fixes included are:
Reserved words. Before 0.6 no Oracle-specific reserved words were included, requiring you to rename columns and labels, making it difficult to code in a database-neutral fashion.
Note that the oracle reserved words list in this package is a superset of the list in SQLAlchemy 0.6 (at least until SQLAlchemy branches/rel_0_6 r6245). This package includes semi-reserved identifiers as defined by the Oracle V$RESERVED_WORDS view
Bindparameter quoting. Using a reserved word as a bind parameter requires quoting as well. Parameters are quoted in the generated SQL code, and all use of the parameters during execution correctly uses the quoted versions.
Limit/offset handling. 0.6 introduces a (much) better method of implementing query limits and offsets in Oracle, following recommended Oracle practice. The 0.4/0.5 method could result in incorrect ordering, or would completely fail when sorting on an aliased column.
The backport includes support for the optimize_limits=False dialect flag. For more information on the change, see SQLAlchemy ticket #536.
To use, simply add an import to this package:
import collective.saoraclefixes # apply oracle fixes
Note that the patches will only apply when SQLAlchemy versions 0.4 and 0.5 are used. All patching activity will be logged via python’s logger module with loglevel DEBUG.
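For illustration, the version-gated, logged monkeypatching pattern described above can be sketched like this (FakeDialect is a stand-in class invented for the example; the real package patches SQLAlchemy's Oracle dialect module instead):

```python
import logging

log = logging.getLogger("oraclefixes.sketch")

class FakeDialect:
    """Stand-in for the real dialect module that would be patched."""
    @staticmethod
    def quote(name):
        return name                      # pre-patch behavior: no quoting

def patched_quote(name):
    # backported behavior: quote the identifier unconditionally
    return '"%s"' % name

def apply_patches(version):
    """Monkeypatch only on the affected versions, logging what was done."""
    if version.startswith(("0.4", "0.5")):
        FakeDialect.quote = staticmethod(patched_quote)
        log.debug("patched FakeDialect.quote for SQLAlchemy %s", version)
```

Importing a module whose top level runs apply_patches() is what makes the single `import collective.saoraclefixes` line above sufficient to activate the fixes.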
License
collective.saoraclefixes is distributed under the MIT license, just like SQLAlchemy.
Credits
- Backporting work
- Martijn Pieters at Jarn
Changelog
1.2 - 2009-08-20
- Disable FromClause.default_order_by in 0.4; 0.5 removes this functionality altogether, it’s not needed, and interferes with the quoting of rowid on Oracle.
1.1 - 2009-08-12
- Add the improved LIMIT/OFFSET handling from 0.6.
1.0 - 2009-08-05
- Initial release
- Communication between HttpApplication that run on the same server
- Clearing HTTPHandlers
- Blob
- Send aspx to Email..?
- reset values on a webform
- Underlying connection was closed
- Good Books/Dev Tools Recommendation on Website Development w/ASP?
- When adding lots of records, do you still use INSERT INTO?
- Asp.Net Debugging Problem
- Composite controls
- Calling Windows application from Web application
- ot javascript question
- Horizontal Scroll Bar for listbox
- smart navigation property concept
- ASP page to show different schedule on each day
- Xslt and webcontrols
- aspnet_wp
- How can I user binarywrite and frames?
- using inline stmts vs Page_Load event
- Cancel button?
- Is there a Namespace Reference somewhere?
- Calendar Control Recommendation?
- Error trying to run
- can't drag and drop CrystalReportViewer to a web form
- ASP.NET customized error page
- session state between ASP and ASP.NET
- Page Caching
- Type Hashtable not defined
- HttpContext.Cache
- VStudio very slow
- Slow page loading EVERY time
- DataView Columns
- Steps to use resource files in Visual Studio.NET
- .NET v1.1 on Exchange Server
- Problem with Link Button
- Response.Redirect
- Can a traditional ASP site be done with VS?
- Session_Start
- Radius Search
- Dynamic controls in DataGrid not retaining values
- enter button as submit?
- iewebcontrols
- Dynamic Columns Sorting issue in DataGrid
- C# Hit hyperlink and send a column value to the code behind.
- Display Excel Shape Object in Web Browser(using ASP.Net)
- .NET v1.0 / .NET v1.1 Deployment Q
- Can't drag and drop the CrystalReportViewer to a web form
- Can't hide column in datagrid
- No Data.. urgent
- Can anyone point me in the right direction.
- Updating db from relational view - laborious or what?
- Very simple Question i hope about server control
- Debugging traditional asp pages under VS.NET 2003
- ASP.NET - Keyword not found: 1;PASSWORD
- WebControls and Integrated Security
- Firing events?
- Inter-Application Data sharing
- Dot Net Framework on win2k server
- tilde-replacement changes from framework 1.0 to 1.1 ????
- autopopulating PDF files using C3 in asp.net
- sms email receive twice for tmobile and cingular
- Highlighting text
- Printing in Crystal Reports ?
- OLE Automation by Javascript
- add dataset to table
- How can I submit an ASP.NET web form to a new page
- could not load type library
- 403 access forbidden
- using DataBinder.Eval programatically
- Windows Control hosted in WebForm
- Why is no one answering my queries..
- Page templetes
- Session End Synchronisation
- How to pass special characters in Update stmt!!
- What is the diference?
- Firing the Command event for all items in a repeater
- Please help!!
- Invalid CurrentPageIndex value
- Ccopying a datatable content from an untyped dataset into a table which is inside a typed dataset
- Pass Datagrid Data To Parent Form
- <script language="vb" runat=server> versus <% %>
- CreateChildControls... when and why?
- .NET Parameter Direction affects datatype casting
- Calling .ASP pages
- can a local video feed be on a web page?
- Public variable = application("variable") ?
- Put value from DataList on user control to parent page?
- binding data to a radiobuttonlist within an ItemTemplate
- Session End Help...
- onbeforeunload event is fired twice
- Dynamically loaded User Controls - problem with button_click event
- ASP.Net Versions and Compatibility
- Complex DataList formatting with a <table>
- Refreshing DataGrid in page_load
- Web.Config
- Image control does not refresh
- 32 dpp BMP to 24dpp BMP??
- How to detect when user clicks on a broken link to an external web page.
- PostBack on Posted Page
- Help: Data Binding to Custom Collection
- Forms authentication in a subfolder problem, please help
- delayed page requests
- how to not send a silentpost in asp.net C#
- Problems with asp:placeholder nested withing datalist
- CustomValidator in the EditItemTemplate of a DataGrid
- system.outofmemory exception
- Compare Validator Not working after copy project to web server ?
- Deploying Web Application
- unable to find script library '/aspnet_client/system_web/1_1_4322/WebUIValidation.js'
- inheriting from a class that inherits from the UserControl class
- RequiredFieldValidator Runtime Error
- Transform 24bit BMP to grayscale??
- copying a datatable content from an untyped dataset into a table which is inside a typed dataset
- Web Custom Control - Tooltip and Rendering within IDE
- Datagrid info to labels in table
- Are instances of ASPX pages EVER reused ??
- javascript and c#
- Auto Complete Textbox or Editable Dropdown (VB style)
- Specified cast is not valid - please help
- asp:repeater not doing a refresh/postback
- Dopostback Paging Question
- having trouble sending a post in asp.net using C#
- Datagrid in-place editing
- page break
- User Accounts difference
- count visits
- ASP.NET control for Password Fields
- Application Scoped Objects
- Inserting a Record Into SQL
- My page crashes Internet Explorer on Macs
- Custom Control not displaying text in Designer
- SMTP Mail as HTML
- SMTP Mail as HTML
- How to create menus?
- set user & passwd for SmtpMail.SmtpServer?
- Web server upgraded to 2003 & IIS 6 and now call to win32 dll will not work
- Deleting file
- asp.net file access
- Memory leak?
- IIS 6.0 Window2003 ASP.NET Session problem
- Cookie not created! Help please.
- BIG problem when sending complex datatype to a webservice
- MSFT Why does this happen?
- Problem with solution_name.dll not being created in /bin directory
- Writing Styles to the page
- How can I Use Multiple Authentication in ASP.NET
- Response.End
- Response.End
- Typed-DataSet in a WebControl Library
- fragment page caching
- Create directory on a web server
- LoadControl v calling constructor
- How to start code when IIS starts - Any expert?
- App Domains....
- .NET Framework and Server side include
- Opening a web project from web
- Custom Error
- Debugging Errors
- Office Web Components
- Custom Error
- Directive different
- Visual Studio change my code
- ASP.NET Submit button in different frame
- Why save dialog displaying twise
- Asp.net server controls
- datagrid columns programatically
- Microsoft.Web.UI.webControls
- what is postback?
- Server Error
- Application_Error - Error Handling & Webservices
- Basic download question
- Button_Click event in user control not firing on page
- Custom popup confirm windows?
- Validating
- Getting Form id in code behind
- asp.net error
- Change Web Setup Project default virtual directory
- Accessing Web Service with HTTP status 407: Proxy Access Denied
- User Controls and Postback
- Can't load web project
- Strip space before page rendering
- Cant seem to transalte this VB code to C#
- adding attribute to checkbox
- MailMessage problems
- Web User Controls
- webconfig error
- asp setfocus and performance
- datepicker help needed
- Any one had any problems with insalling visual studio 6 after vs2003 ?
- Would like to disable validation for entire page
- getting the values from a datalist control
- Mysterious problem with forms....
- muliple datagrids on one page.
- Grid Row's ID?
- PostBack Fails
- Page.RegisterClientScriptBlock rebuilds Javescript for every control
- Global IP Address
- Javascript forced button click
- aspnet account
- form issues
- Windows Server 2003 ASP.NET using .NET Framework 1.0
- buffering
- Redirecting
- getting form variables from previous page
- Extracting matches from Regex.Split
- Would this be possible through ASP.net ?
- batch compilation
- IIS v5.0 and ASP.NET v1.0
- forms-based authentication
- User Control
- Refer to Control by ID or Other String (not Index)
- HttpHandler and Windows Integrated Security
- Custom Tags???
- ADODB.Connection error '800a0e7c'
- Getting confirmation from javascript
- Trace.Warn
- undesired behavior in setting hyperlink .NavigateURL property
- Outputting in-line image from BLOB in ASP.NET
- FMStock Install Error
- Response.End
- Response.End
- How do I create a Web Application at the domain root?
- Printing a large datagrid (hopefully a simple problem)
- App center
- forms identity problem
- Datalist control questions
- IE web Controls are not sup
- Calling a PowerBuilder object in ASP.NET
- fragment cache
- web.Config Confusion
- How to hide a column in datagrid with AutoGenerateCol=true
- Refresh window?
- C# to VB Overrides
- PopUp with session variables?
- Microsoft IE Treeview Control Right Click popup menu
- QuickStart sample?
- Problems with WebClient.UploadFile
- Ie Web Controls Problem (Not displaying Properly On Production Server)
- Multipage
- Skip HttpModule Use?
- Debugging Problems
- Clearing cached dlls
- Can't view runtime error
- Problems deleting from the GAC
- HTTPS and file downloading
- Access to the registry key problem
- ASP:DropDownList Question
- Regarding Microsoft.web.UI.webcontrols
- SetFcus using Javascript
- Module to notify application when leaving site?
- Web Output Component
- Catching ChildControl render output and modify before sent out
- Which choice for a business service
- Uploading ASPX files
- asp:dropdownlist
- ASP Repeater
- batch compilation
- Invalid export DLL or export format
- There can be only one 'page' directive
- Can you force .NET to use browser errors?
- Validating Date Ranges using Javascript
- How to Create Report
- Object reference not set to an instance of an object.
- user control property
- SortedLists and Databinding
- asp.net version mismatch????
- Client side HttpRequest or Call WebService.
- URL Encoding German Chars problem. help please
- Automate word ?
- cannot get the SqlDataReader object field value
- Response.Filter, Compression and POST
- Starting a vb.net standalone application from asp.net
- calendar web control error
- Refreshing a custom user control instance
- All Experts, please HELP!!
- Error: Invalid element value.
- ASP.NET Auction Application
- Auction site design tips needed.
- Close Window from Server Side.
- Using xml source in datagrid -- possible with non-structured data?
- dropdown list not working
- reading asp session in aspx
- Good books
- Authentication: Need to re-login for every directory
- what are .mspx extension files?
- .NET Remoting vs. Web Services
- Forms Authentication +Active Directory +Roles
- Pass variables to executable
- Compiling (JIT and reverse)
- dataAdapter Wizard and Jet
- Anonymous login and Windows Authentication
- Anyone know of .Net Web hosting
- htmlInputFile
- Firing off javascript functions before a ImageButton does a postback?
- user roles for downloading files
- how to disable a linkbutton in a repeater?
- Asp.net and IIS Login Problem
- Retrieve ID after inserting into Access database
- aspnet_wp
- Datagrid
- VS.NET 2003 Project "Conversion"
- Using two Datareader
- .Net documentation
- includes in ASP .NET
- Fire a javascript function from vb.net
- getting data from datalist control
- passing server (not form) variables to same page
- Get Form ID at Runtime
- Ignore DTD pi in xsl transform?
- Action Queries inside MDB files
- import delimited text file to msde
- what is the .NET framework good for in IE?
- asp.net C# payflowlink post and capturing silent post
- Urgent
- long running query...
- Odd ExecuteReader Error
- Creating an MDB/XLS file
- Dataset ReadXML
- Content Management System - ASP.NET Examples
- Regularexpression validator on multiline text box
- Specifiying the location of DLL's in ASP.NET.
- Passing Parameters to a Function
- retrieving values from a datalist control
- Urgent
- asp.net C# payflowlink post and capturing silent post
- long running query...
- ListItems are added in reverse order
- Creating an MDB/XLS file
- Odd ExecuteReader Error
- err SQL Server
- Dataset ReadXML
- Content Management System - ASP.NET Examples
- Regularexpression validator on multiline text box
- Specifiying the location of DLL's in ASP.NET.
- Passing Parameters to a Function
- retrieving values from a datalist control
- Posting to another page?
- Calendar child not working in a custom composite control
- Is it possible to read a textbox control from another web form?
- stand alone VS automation - possible???
- ListItems are added in reverse order
- Why it doesn't work
- err SQL Server
- How to compile VB functions
- carrying the user input across pages..
- ASP.NET Content Replication Over Server Farm
- web form designer error
- Posting to another page?
- carrying the user input across pages..
- Calendar child not working in a custom composite control
- ASP:Image vs ASP:Literal
- Is it possible to read a textbox control from another web form?
- stand alone VS automation - possible???
- asp.net won't start up, again...
- Why it doesn't work
- Datagrid columns???
- How to compile VB functions
- carrying the user input across pages..
- Font Usage
- ASP.NET Content Replication Over Server Farm
- What is MSPX
- web form designer error
- carrying the user input across pages..
- ASP:Image vs ASP:Literal
- Opinions of Hostdepartment
- AD
- storing connection string in session
- asp.net won't start up, again...
- Datagrid columns???
- Font Usage
- Regex validator question
- What is MSPX
- Calling function from aspx
- AD
- Opinions of Hostdepartment
- storing connection string in session
- Regex validator question
- Calling function from aspx
- Configuration Error
- Life without ViewState
- Question: Invalid Cast Exception Error
- IsObject equivalent?
- asp.net and Jmail
- URL Rewriter
- LinkButton inside DataGrid Header
- validation controls not working on production server
- Does ASP.NET wait for the entire request body before executing the ASP page?
- Postback and set focus to Page anchor
- Help - aspnet_wp.exe could not be launched
- using Include directive in ASP .NET
- Form Fields Visibility
- Access deny in Windows XP
- User Controls Events
- Alternatives to Windows Service?
- Setting the page title by program code
- How to create a WORD doc?
- Pager for datalist control
- Server security
- How can i import namespace in asp.net?
- Returning my object(s) from web service
- Administer IIS from ASP.NET
- about Javascript??
- Datakey in Datagrid
- Default document in IIS loads after every page request in application.
- Include files
- Request.Form("field_name") no longer use in ASP.NET?
- get record field value in ASP.NET
- XSL help....
- Basic Architecture 2
- web setup question
- BUG : Datagrid ?
- Dynamic server controls using XML and XSLT
- ASP.Net on different Browsers
- programatically generated LinkButton within Datagrid header can't call a sub
- How can I get relative path in Class Library?
- Hosting
- NANT, web projects and resource files
- Resizing a DataGrid to a DIV
- asp.net and excell.. file share problems
- The best way to make a generic report page using a web service indicated in a xml file.
- can't debug asp.net; won't stop at breakpoints
- Where is the spell checker in Visual Studio .NET ?
- simple postback to a database
- Dropdownlist in web application similar to DropDown in windows
- Error when setting exchange security: The directory property cannot be found in the cache
- HTTPS and WebRequest
- bug with panel and datagrid
- html gets deleted
- aspx refresh
- DataList's ItemCreated event
- Module Not Found
- Pointing to the current record identifier
- Registering ASP.net
- JavaScript submit method doesn't work
- Bind a string collection to a data grid
- Server Side Controls in ASP.NET
- download files
- HttpPostedFile.SaveAs() permission error
- Refresh Frame?
- Validation
- ASP.NET Hosting
- hyperllink - default document
- Looking for a component
- Xml and XSLT
- ASP.NET Hosting
- How to print pdf document.
- Httprequest problems
- IIS Web Sites
- WebControls Handlers dont fire
- request.querystring and frames problem
- Check If Username Exsists
- Simple form postback to update a database
- What to download?
- Good book to get me started...
- Q&A from PDC
- Help! User control caching not working in Mozilla, works in IE
- Dataset persistence???
- Changing stylesheets
- nunitasp and proxys
- C# UserControl Bubbling events
- Drop down list - Display only items
- Recyclye of ASP.net
- receive garbage from oracle records
- Calling COM from ASP.NET
- asp
- Concatenating asp server controls
- .NET Clr Data performance counters
- GUI Design help
- What can Content Management Server do?
- Please help.. .can't make this work on win 2k3 (delegates related)
- Web Form
- Web Forms
- Using .net to enter form values (WebClient)
- Throw Exception Vs Throw New Exception
- deployment proj would not build - ideas needed
- I need help with .bat script
- Error de depuracion en Visual Studio .NET 2002
- asp.net & web services
- How to retrieve HTML in ASPX page
- Problem accessing application through a gateway
- how to transfer a file that created in server to client computer?
- XML and XSLT
- Output of WebForm Page
- server unavailable error message
- Session state being lost
- what the heck is "ASP.NET Web Matrix" ?
- Folder hierarchy from a database table
- ASP.NET using ADO.NET connection runtime error
- How to retrieve db table field's max length by using ADO.NET?
- Grid's cell?
- How to do that..... ?
- VB.NET Application File Upload to Webserver
- Transfer, Viewstate, redirect...
- How do I retrieve / request a Windows.Form.Control's Value running on a WebForm ?
- Changing text in another frame
- Download dialog box show instead of ASP.NET page
- Checking string's length with validator controls
- Creating a Progress Bar for Uploads in DotNet (VB.NET)
- cache or delay postback?
- String.ToUpper
- web.config and global.asax
- makeing code run on each page view
- Running exe as an independent process
- Serious issues with webcontrols...
- cannot get ASP.NET running with simple testfile
- using a querystring
- Refreshing an ASPX Page
- Partial Types with Code Behind in ASP.NET 2.0?
- application mapping in iis
- Try on this internet pack
- Refreshing a Crystal Report from a Viewer
- Problem with dropdownlist and viewstate.
- problem with assigning data from data reader to label control in web form
- Virtual Paths and HttpModules
- Caching user controls - problem with multiple users?
- Posting Data ??
- DropDownList in DataGrid ColumTemplate Problem
- ValidationSummary NOT ok in Win XP ?
- Binary Download of Files from another internal Server
- XML query and databinding
- Login failed for user when deleting row from datagrid
- Can Novell Security impact ASP.NET performance?
- how do head and items
- SQL 7
- REGEX NON MATCHING - LINES NOT CONTAINING STRING LOGIC
- ActiveX Control on Web page
- <span>'s in ASP.NET
- How to Delete Old Records from an Excel File Using ASP.Net and Visual Basic .NET
- How to create Voice Mail...?
- Using Objects
- getting last record in access db
- Button Click Event
- Manual Compile
- HttpWebResponse Problems
- URLEncode Problem from ASP.NET
- webdataadmin installation
- XPathDocument
- ItemCommand firing instead of SortCommand
- Crea
- Manual Compiling
- Default Parameters in c#
- DataSet
- gracefully handling stored procedure errors
- @Assembly
- listbox?
- Web Custom Control Part 2 | https://bytes.com/sitemap/f-329-p-170.html | CC-MAIN-2020-45 | refinedweb | 2,836 | 50.33 |
B2J Distribution Tutorial (Running your BPEL across multiple hosts)
Author: Antony Miguel, Last Updated 15th March 2006
Overview
This document explains how to use B2J to run BPEL processes distributed across multiple hosts. This tutorial assumes that the reader has already read the B2J beginners tutorial to familiarise themselves with the basics of running a choreography. If you have not already read the beginners tutorial you can read it here.

B2J contains two reference implementations of the B2J framework. One is a simple BPEL engine capable of running a BPEL process within the JVM it is invoked from. The second is a more complicated distributed engine capable of running multithreaded BPEL processes (<flow> or <forEach parallel="yes"> activities) as a single process distributed across multiple physical hosts. The TPTP Choreography engine is capable of running as a single unit spread across multiple physical hosts. Common reasons for distributing a BPEL process are:
- To run local resource intensive BPEL processes more efficiently by utilising resources across a group of hosts
- To simulate or invoke operations from different geographic locations without requiring a choreography of web services
Requirements
- The user should have an installation of Eclipse and the B2J subproject plugins installed. See the B2J subproject main page for versions, downloads and installation information.
- The user should have read the B2J Beginners Tutorial.
Distributing a BPEL process
To distribute a BPEL process, you must first set up a launch configuration as described in the beginners tutorial. Run the launch configuration with the default local engine to ensure everything is configured properly.
A good example process to run for this tutorial is the test_messaging sample BPEL process as it contains a significant degree of multithreading.
Once you have your launch configuration set up, click the Distribution tab in the launch configuration dialog and select the Use distributed engine (one or more hosts) button.
From here you can use the buttons on the right hand side of the table to add and remove hosts from your distribution. Click the Add button once for each host you wish to run the engine on. Edit the hostnames in the first column of the table.
In our example, even though we only wish to run over two hosts, we have three entries in the table. This is because the first host in the table is the Coordinator host, which hosts the part of the engine that coordinates the activities of the Worker hosts.
The Coordinator host can be on a separate host from the worker hosts or on the same host as one of the worker hosts.
In our example, we will use the same host as one of the worker hosts.
When we have specified all the hosts we wish to run the BPEL process over, we must run the B2J engine daemon on each of these hosts.
Note: If you plan to use the B2J engine in a distributed environment over a long period of time, you need only install and run any version of the B2J engine daemon once. The B2J engine will automatically update and choose the right version of the engine to run based on the version used in the client that executes the BPEL.
E.g. if you install and run B2J 1.0 on 10 physical hosts, then later you download the 1.2 plugin to your Eclipse workbench, you need not manually update all 10 physical hosts. The 1.2 plugin will automatically update any hosts used in the BPEL process execution.
To install the B2J engine daemon, copy or unzip the B2J plugin onto the target host and run either the b2j.bat file on Windows or the b2j.sh on Linux or other *nix variants.
Alternatively, you can run the following command from the B2J plugin directory:

```
java -cp b2j.jar org.eclipse.stp.b2j.core.jengine.internal.mainengine.SoapDaemon
```
Please Note: The choreography engine requires Java 1.4 or above to run. If you do not have Java installed you can download installation binaries for most operating systems.
The B2J engine daemon will try to listen on port 11000 for incoming SOAP/HTTP connections. Worker or coordinator hosts that the B2J engine daemon creates will scan ports above 11000 until they get to a free port (e.g. 11001, 11002...). If you have a firewall on your host you should configure it to allow incoming connections on port 11000+ (e.g. 11000 to 11100).
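Firewall configuration depends on your platform; as one illustrative example (not from the B2J documentation), on a Linux host using iptables you might allow the range like this:

```shell
# Allow incoming TCP connections on ports 11000-11100, which the B2J
# daemon and any coordinator/worker processes it spawns may listen on.
iptables -A INPUT -p tcp --dport 11000:11100 -j ACCEPT
```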
If you specify localhost as one of the hosts to run the engine on, the launch configuration will automatically run an engine daemon inside the Eclipse Workbench JVM in case you haven't started a separate engine.
When you have finished installing the B2J daemon on your hosts and you have specified the hosts in the Distribution tab Hosts table, click Run to run your launch configuration.
The initial (engine related, green and red) output from your launch configuration should look similar to:
Aborting a Run
If, for any reason you wish to abort the B2J process execution, you can open the Progress view and click the red terminate button pertaining to the B2J execution you wish to abort.
Note that the engine will not perform any cleanup during this abort - it will simply quit the processes hosting the engine (note that this does NOT include any engine daemons you may have running).
Headless Execution
If you run a distributed engine (even just on a single host) and you close the client to that engine (e.g. the Eclipse Workbench) the engine will automatically abort the run. If you want the engine to continue running after you have closed the workbench you can check the Leave engine running on client disconnect option in the Compilation tab of the launch configuration.
If you later wish to terminate one of these engines you can use the WTP web services explorer to invoke the engine daemon's SOAP/HTTP WSDL interface.
To see this, open a standard web browser and go to (substitute localhost with whatever host the engine daemon is running on). You should see something like:
Click on the Daemon Public WSDL link to see the WSDL interface to the engine daemon. You can use the address of this WSDL in the WTP web services explorer or with any generic web services UI to query the B2J engine daemon and terminate any engines running in it.
Network Layer Robustness
If you run a distributed engine over a non-LAN network link or over any poor network link there is a chance that the network link will break and cause the engine to abort the execution. If you want the engine to be robust against these network failures you can set the Reconnection and Reconnect Timeout options for each transport link in the B2J engine. These properties are in the B2J launch configuration under the Distribution tab.
For each host mentioned in the table, you can set individual properties for the link to that host from the coordinator host (or from the workbench, if you change settings for the coordinator host itself).
With transport link reconnection ON, if a network failure occurs, the B2J engine will pause any parts of the BPEL process which are dependant on that network link and try to re-establish the link, timing out after the period set under Reconnect Timeout.
Deployment and Load Balancing
Deployment and load balancing is performed within the BPEL program itself via the use of Java-bound engine web services. The example file test_deployment shows how parallel threads in a flow can be located either on the first host in the list of engine hosts, as an even spread across all hosts, or on a specific host identified by its index in the list of hosts. Child flows can then be used to create arbitrary combinations of these deployment methods.
In the future it is expected that the deployment service will grow in functionality to specify a range of deployment strategies for any given flow or parallel activity.
The WSDL for the deployment service and a number of other services can be found in the plugin directory under the conf/Default/bpel directory.
As demonstrated in the test_deployment example, these WSDL files can be included into any BPEL process running in B2J via the namespace.
(E.g. <import ...).
Engine Deployment and Load Balancing BPEL | http://www.eclipse.org/stp/b2j/docs/tutorials/distribution/distribution_tut.php | crawl-003 | refinedweb | 1,391 | 58.01 |
Lightning has plenty of uses in games, from background ambience during a storm to the devastating lightning attacks of a sorcerer. In this tutorial, I'll explain how to programmatically generate awesome 2D lightning effects: bolts, branches, and even text.
Note: Although this tutorial is written using C# and XNA, you should be able to use the same techniques and concepts in almost any game development environment.
Step 1: Draw a Glowing Line
The basic building block we need to make lightning is a line segment. Start by opening up your favourite image editing software and drawing a straight line of lightning. Here's what mine looks like:
We want to draw lines of different lengths, so we're going to cut the line segment into three pieces as shown below. This will allow us to stretch the middle segment to any length we like. Since we are going to be stretching the middle segment, we can save it as only a single pixel thick. Also, as the left and right pieces are mirror images of each other, we only need to save one of them. We can flip it in the code.
Now, let's declare a new class to handle drawing line segments:
```csharp
public class Line
{
    public Vector2 A;
    public Vector2 B;
    public float Thickness;

    public Line() { }

    public Line(Vector2 a, Vector2 b, float thickness = 1)
    {
        A = a;
        B = b;
        Thickness = thickness;
    }
}
```
`A` and `B` are the line's endpoints. By scaling and rotating the pieces of the line, we can draw a line of any thickness, length, and orientation. Add the following `Draw()` method to the `Line` class:
```csharp
public void Draw(SpriteBatch spriteBatch, Color color)
{
    Vector2 tangent = B - A;
    float rotation = (float)Math.Atan2(tangent.Y, tangent.X);

    const float ImageThickness = 8;
    float thicknessScale = Thickness / ImageThickness;

    Vector2 capOrigin = new Vector2(Art.HalfCircle.Width, Art.HalfCircle.Height / 2f);
    Vector2 middleOrigin = new Vector2(0, Art.LightningSegment.Height / 2f);
    Vector2 middleScale = new Vector2(tangent.Length(), thicknessScale);

    spriteBatch.Draw(Art.LightningSegment, A, null, color, rotation, middleOrigin, middleScale, SpriteEffects.None, 0f);
    spriteBatch.Draw(Art.HalfCircle, A, null, color, rotation, capOrigin, thicknessScale, SpriteEffects.None, 0f);
    spriteBatch.Draw(Art.HalfCircle, B, null, color, rotation + MathHelper.Pi, capOrigin, thicknessScale, SpriteEffects.None, 0f);
}
```
Here, `Art.LightningSegment` and `Art.HalfCircle` are static `Texture2D` variables holding the images of the pieces of the line segment. `ImageThickness` is set to the thickness of the line without the glow; in my image, it's 8 pixels. We set the origin of the cap to its right side, and the origin of the middle segment to its left side, which makes them join seamlessly when we draw them both at point `A`. The middle segment is stretched to the desired length, and another cap is drawn at point `B`, rotated 180°.
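The `Art` class itself isn't shown in the tutorial. A minimal sketch of what it might look like — the property names match how the class is used above, but the `Load` method and the asset names are assumptions:

```csharp
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical texture holder; the tutorial only reads
// Art.LightningSegment and Art.HalfCircle.
public static class Art
{
    public static Texture2D LightningSegment { get; private set; }
    public static Texture2D HalfCircle { get; private set; }

    // Call once from your Game's LoadContent(); the asset names are assumed.
    public static void Load(ContentManager content)
    {
        LightningSegment = content.Load<Texture2D>("LightningSegment");
        HalfCircle = content.Load<Texture2D>("HalfCircle");
    }
}
```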
XNA's `SpriteBatch` class allows you to pass a `SpriteSortMode` to its `Begin()` method, which indicates the order in which it should draw the sprites. When you draw the line, make sure to begin the `SpriteBatch` with `SpriteSortMode.Texture`. This is to improve performance.
Graphics cards are great at drawing the same texture many times. However, each time they switch textures, there's overhead. If we draw a bunch of lines without sorting, we'd be drawing our textures in this order:
LightningSegment, HalfCircle, HalfCircle, LightningSegment, HalfCircle, HalfCircle, ...
This means we'd be switching textures twice for each line we draw. `SpriteSortMode.Texture` tells `SpriteBatch` to sort the `Draw()` calls by texture, so that all the `LightningSegment`s will be drawn together and all the `HalfCircle`s will be drawn together. In addition, when we use these lines to make lightning bolts, we'd like to use additive blending to make the light from overlapping pieces of lightning add together:
```csharp
spriteBatch.Begin(SpriteSortMode.Texture, BlendState.Additive);

// draw lines

spriteBatch.End();
```
Step 2: Jagged Lines
Lightning tends to form jagged lines, so we'll need an algorithm to generate these. We'll do this by picking points at random along a line, and displacing them a random distance from the line. Using a completely random displacement tends to make the line too jagged, so we'll smooth the results by limiting how far from each other neighbouring points can be displaced.
The line is smoothed by placing points at a similar offset to the previous point; this allows the line as a whole to wander up and down, while preventing any part of it from being too jagged. Here's the code:
```csharp
protected static List<Line> CreateBolt(Vector2 source, Vector2 dest, float thickness)
{
    var results = new List<Line>();
    Vector2 tangent = dest - source;
    Vector2 normal = Vector2.Normalize(new Vector2(tangent.Y, -tangent.X));
    float length = tangent.Length();

    List<float> positions = new List<float>();
    positions.Add(0);

    for (int i = 0; i < length / 4; i++)
        positions.Add(Rand(0, 1));

    positions.Sort();

    const float Sway = 80;
    const float Jaggedness = 1 / Sway;

    Vector2 prevPoint = source;
    float prevDisplacement = 0;
    for (int i = 1; i < positions.Count; i++)
    {
        float pos = positions[i];

        // used to prevent sharp angles by ensuring very close positions also have small perpendicular variation.
        float scale = (length * Jaggedness) * (pos - positions[i - 1]);

        // defines an envelope. Points near the middle of the bolt can be further from the central line.
        float envelope = pos > 0.95f ? 20 * (1 - pos) : 1;

        float displacement = Rand(-Sway, Sway);
        displacement -= (displacement - prevDisplacement) * (1 - scale);
        displacement *= envelope;

        Vector2 point = source + pos * tangent + displacement * normal;
        results.Add(new Line(prevPoint, point, thickness));
        prevPoint = point;
        prevDisplacement = displacement;
    }

    results.Add(new Line(prevPoint, dest, thickness));

    return results;
}
```
The code may look a bit intimidating, but it's not so bad once you understand the logic. We start by computing the normal and tangent vectors of the line, along with its length. Then we randomly choose a number of positions along the line and store them in our `positions` list. The positions are scaled between `0` and `1`, such that `0` represents the start of the line and `1` represents the end point. These positions are then sorted to allow us to easily add line segments between them.
The loop goes through the randomly chosen points and displaces them along the normal by a random amount. The scale factor is there to avoid overly sharp angles, and the envelope ensures the lightning actually goes to the destination point by limiting displacement when we're close to the end.
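Note that `CreateBolt` relies on a `Rand` helper that isn't defined in the snippet. A minimal sketch — the use of a single shared `Random` instance is an assumption:

```csharp
static readonly Random random = new Random();

// Returns a random float in the range [min, max).
static float Rand(float min, float max)
{
    return (float)random.NextDouble() * (max - min) + min;
}
```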
Step 3: Animation
Lightning should flash brightly and then fade out. To handle this, let's create a `LightningBolt` class.
class LightningBolt
{
    public List<Line> Segments = new List<Line>();

    public float Alpha { get; set; }
    public float FadeOutRate { get; set; }
    public Color Tint { get; set; }

    public bool IsComplete { get { return Alpha <= 0; } }

    public LightningBolt(Vector2 source, Vector2 dest)
        : this(source, dest, new Color(0.9f, 0.8f, 1f)) { }

    public LightningBolt(Vector2 source, Vector2 dest, Color color)
    {
        Segments = CreateBolt(source, dest, 2);
        Tint = color;
        Alpha = 1f;
        FadeOutRate = 0.03f;
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        if (Alpha <= 0)
            return;

        foreach (var segment in Segments)
            segment.Draw(spriteBatch, Tint * (Alpha * 0.6f));
    }

    public virtual void Update()
    {
        Alpha -= FadeOutRate;
    }

    protected static List<Line> CreateBolt(Vector2 source, Vector2 dest, float thickness)
    {
        // ...
    }

    // ...
}
To use this, simply create a new LightningBolt and call Update() and Draw() each frame. Calling Update() makes it fade. IsComplete will tell you when the bolt has fully faded out.
You can now draw your bolts by using the following code in your Game class:
LightningBolt bolt;
MouseState mouseState, lastMouseState;

protected override void Update(GameTime gameTime)
{
    lastMouseState = mouseState;
    mouseState = Mouse.GetState();

    var screenSize = new Vector2(GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);
    var mousePosition = new Vector2(mouseState.X, mouseState.Y);

    if (MouseWasClicked())
        bolt = new LightningBolt(screenSize / 2, mousePosition);

    if (bolt != null)
        bolt.Update();
}

private bool MouseWasClicked()
{
    return mouseState.LeftButton == ButtonState.Pressed
        && lastMouseState.LeftButton == ButtonState.Released;
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);

    spriteBatch.Begin(SpriteSortMode.Texture, BlendState.Additive);
    if (bolt != null)
        bolt.Draw(spriteBatch);
    spriteBatch.End();
}
Step 4: Branch Lightning
You can use the LightningBolt class as a building block to create more interesting lightning effects. For example, you can make the bolts branch out as shown below:
To make the lightning branch, we pick random points along the lightning bolt and add new bolts that branch out from these points. In the code below, we create between three and five branches (rand.Next(3, 6) excludes the upper bound), which separate from the main bolt at 30° angles.
class BranchLightning
{
    List<LightningBolt> bolts = new List<LightningBolt>();
    public bool IsComplete { get { return bolts.Count == 0; } }
    public Vector2 End { get; private set; }
    private Vector2 direction;

    static Random rand = new Random();

    public BranchLightning(Vector2 start, Vector2 end)
    {
        End = end;
        direction = Vector2.Normalize(end - start);
        Create(start, end);
    }

    public void Update()
    {
        bolts = bolts.Where(x => !x.IsComplete).ToList();
        foreach (var bolt in bolts)
            bolt.Update();
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        foreach (var bolt in bolts)
            bolt.Draw(spriteBatch);
    }

    private void Create(Vector2 start, Vector2 end)
    {
        var mainBolt = new LightningBolt(start, end);
        bolts.Add(mainBolt);

        int numBranches = rand.Next(3, 6);
        Vector2 diff = end - start;

        // pick a bunch of random points between 0 and 1 and sort them
        float[] branchPoints = Enumerable.Range(0, numBranches)
            .Select(x => Rand(0, 1f))
            .OrderBy(x => x).ToArray();

        for (int i = 0; i < branchPoints.Length; i++)
        {
            // Bolt.GetPoint() gets the position of the lightning bolt at the specified fraction (0 = start of bolt, 1 = end)
            Vector2 boltStart = mainBolt.GetPoint(branchPoints[i]);

            // rotate 30 degrees. Alternate between rotating left and right.
            Quaternion rot = Quaternion.CreateFromAxisAngle(Vector3.UnitZ,
                MathHelper.ToRadians(30 * ((i & 1) == 0 ? 1 : -1)));

            Vector2 boltEnd = Vector2.Transform(diff * (1 - branchPoints[i]), rot) + boltStart;
            bolts.Add(new LightningBolt(boltStart, boltEnd));
        }
    }

    static float Rand(float min, float max)
    {
        return (float)rand.NextDouble() * (max - min) + min;
    }
}
Step 5: Lightning Text
Below is a video of another effect you can make out of the lightning bolts:
First we need to get the pixels in the text we'd like to draw. We do this by drawing our text to a RenderTarget2D and reading back the pixel data with RenderTarget2D.GetData<T>(). If you'd like to read more about making text particle effects, I have a more detailed tutorial here.
We store the coordinates of the pixels in the text as a List<Vector2>. Then, each frame, we randomly pick pairs of these points and create a lightning bolt between them. We want to design it so that the closer two points are to one another, the greater the chance that we create a bolt between them. There's a simple technique we can use to accomplish this: we'll pick the first point at random, and then we'll pick a fixed number of other points at random and choose the nearest.
The number of candidate points we test will affect the look of the lightning text; checking a larger number of points will allow us to find very close points to draw bolts between, which will make the text very neat and legible, but with fewer long lightning bolts between letters. Smaller numbers will make the lightning text look more crazy but less legible.
public void Update()
{
    foreach (var particle in textParticles)
    {
        float x = particle.X / 500f;
        if (rand.Next(50) == 0)
        {
            Vector2 nearestParticle = Vector2.Zero;
            float nearestDist = float.MaxValue;
            for (int i = 0; i < 50; i++)
            {
                var other = textParticles[rand.Next(textParticles.Count)];
                var dist = Vector2.DistanceSquared(particle, other);
                if (dist < nearestDist && dist > 10 * 10)
                {
                    nearestDist = dist;
                    nearestParticle = other;
                }
            }

            if (nearestDist < 200 * 200 && nearestDist > 10 * 10)
                bolts.Add(new LightningBolt(particle, nearestParticle, Color.White));
        }
    }

    for (int i = bolts.Count - 1; i >= 0; i--)
    {
        bolts[i].Update();
        if (bolts[i].IsComplete)
            bolts.RemoveAt(i);
    }
}
Step 6: Optimization
The lightning text, as shown above, may run smoothly if you have a top of the line computer, but it's certainly very taxing. Each bolt lasts over 30 frames, and we create dozens of new bolts each frame. Since each lightning bolt may have up to a couple hundred line segments, and each line segment has three pieces, we end up drawing a lot of sprites. My demo, for instance, draws over 25,000 images each frame with optimizations turned off. We can do better.
Instead of drawing each bolt until it fades out, we can draw each new bolt to a render target and fade out the render target each frame. This means that, instead of having to draw each bolt for 30 or more frames, we only draw it once. It also means there's no additional performance cost for making our lightning bolts fade out more slowly and last longer.
First, we'll modify the LightningText class to only draw each bolt for one frame. In your Game class, declare two RenderTarget2D variables: currentFrame and lastFrame. In LoadContent(), initialize them like so:
lastFrame = new RenderTarget2D(GraphicsDevice, screenSize.X, screenSize.Y, false,
    SurfaceFormat.HdrBlendable, DepthFormat.None);
currentFrame = new RenderTarget2D(GraphicsDevice, screenSize.X, screenSize.Y, false,
    SurfaceFormat.HdrBlendable, DepthFormat.None);
Notice the surface format is set to HdrBlendable. HDR stands for High Dynamic Range, and it indicates that the surface can represent a larger range of colors. This is required because the render target must hold colors brighter than white: when multiple lightning bolts overlap, we need it to store the full sum of their colors, which may add up beyond the standard color range. While these brighter-than-white colors will still be displayed as white on the screen, it's important to store their full brightness in order to make them fade out correctly.
Each frame, we first draw the contents of the last frame to the current frame, but slightly darkened. We then add any newly created bolts to the current frame. Finally, we render our current frame to the screen, and then swap the two render targets so that for our next frame,
lastFrame will refer to the frame we just rendered.
void DrawLightningText()
{
    GraphicsDevice.SetRenderTarget(currentFrame);
    GraphicsDevice.Clear(Color.Black);

    // draw the last frame at 96% brightness
    spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointClamp, null, null);
    spriteBatch.Draw(lastFrame, Vector2.Zero, Color.White * 0.96f);
    spriteBatch.End();

    // draw new bolts with additive blending
    spriteBatch.Begin(SpriteSortMode.Texture, BlendState.Additive);
    lightningText.Draw();
    spriteBatch.End();

    // draw the whole thing to the backbuffer
    GraphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointClamp, null, null);
    spriteBatch.Draw(currentFrame, Vector2.Zero, Color.White);
    spriteBatch.End();

    Swap(ref currentFrame, ref lastFrame);
}

void Swap<T>(ref T a, ref T b)
{
    T temp = a;
    a = b;
    b = temp;
}
Step 7: Other Variations
We've discussed making branch lightning and lightning text, but those certainly aren't the only effects you can make. Let's look at a couple of other variations on lightning you may want to use.
Moving Lightning
Often you may want to make a moving bolt of lightning. You can do this by adding a new short bolt each frame at the end point of the previous frame's bolt.
Vector2 lightningEnd = new Vector2(100, 100);
Vector2 lightningVelocity = new Vector2(50, 0);

void Update(GameTime gameTime)
{
    Bolts.Add(new LightningBolt(lightningEnd, lightningEnd + lightningVelocity));
    lightningEnd += lightningVelocity;

    // ...
}
Smooth Lightning
You may have noticed that the lightning glows brighter at the joints. This is due to the additive blending. You may want a smoother, more even look for your lightning. This can be accomplished by changing your blend state function to choose the max value of the source and destination colors, as shown below.
private static readonly BlendState maxBlend = new BlendState()
{
    AlphaBlendFunction = BlendFunction.Max,
    ColorBlendFunction = BlendFunction.Max,
    AlphaDestinationBlend = Blend.One,
    AlphaSourceBlend = Blend.One,
    ColorDestinationBlend = Blend.One,
    ColorSourceBlend = Blend.One
};
Then, in your Draw() function, call SpriteBatch.Begin() with maxBlend as the BlendState instead of BlendState.Additive. The images below show the difference between additive blending and max blending on a lightning bolt.
Of course max blending won't allow the light from multiple bolts or from the background to add up nicely. If you want the bolt itself to look smooth, but also to blend additively with other bolts, you can first render the bolt to a render target using max blending, and then draw the render target to the screen using additive blending. Be careful not to use too many large render targets as this will hurt performance.
Another alternative, which will work better for large numbers of bolts, is to eliminate the glow built into the line segment images and add it back using a post-processing glow effect. The details of using shaders and making glow effects are beyond the scope of this tutorial, but you can use the XNA Bloom Sample to get started. This technique will not require more render targets as you add more bolts.
Conclusion
Lightning is a great special effect for sprucing up your games. The effects described in this tutorial are a nice starting point, but it's certainly not all you can do with lightning. With a bit of imagination you can make all kinds of awe-inspiring lightning effects! Download the source code and experiment with your own.
If you enjoyed this article, take a look at my tutorial about 2D water effects, too.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
Sample Input: ar = {5, 9, 16}, k = 5
Sample Output: 2
Explanation: Note that if we increment 9 by 1 and decrement 16 by 1 (number of operations = 2), the new array will be {5, 10, 15} and hence the GCD will be 5.
What is a GCD?
So, GCD stands for Greatest Common Divisor, which is the greatest integer that divides two or more given integers (not all zero) and leaves 0 as the remainder. For example, the GCD of 15 (5 * 3 * 1) and 10 (5 * 2 * 1) is 5.
Heading towards the solution
It looks like in order to change the GCD to k, we will need to shift every array element towards the closest multiple of k. Now, how are we going to do this?
See, by shifting we basically mean applying a bunch of increment or decrement operations altogether such that we can move in larger steps, say x (x>=1). Next up is moving to the "nearest" multiple. How could we decide this?
Yes, we will compare the difference between the possible multiple and the element itself. (Note that for any element ele, there are two choices for the nearest multiple of k: the greatest multiple of k less than ele, and the lowest multiple of k greater than ele.) As per the problem, we are now left to figure out these two differences in order to select x (the number of operations) such that it is minimum.

We can easily claim that the two possible values for x are none other than ele % k and k - ele % k (where ele % k is the remainder we get on dividing ele by k, and k - ele % k is the distance to the next multiple). So aren't we done now? Could we just choose the minimum of the two values?
No. Think of a case where arr = {11, 30, 19} and k = 5 (i.e., the minimum element of arr > k). Here, after doing the above steps, we will get the array as {10, 30, 20}, but are we successful? No, the GCD has changed to 10 instead of 5.

Seems like we are stuck. What could we do to keep the GCD at k even when the array elements all land on multiples of k that share a common factor larger than k?
An important observation here is that the maximum value a GCD can take is the value of the minimum element in the array.

So, taking this as the hint, we can change the minimum element to k; that makes sure we don't shift the GCD beyond k (because the highest common factor between the array elements will then be k only).
Pseudocode
1. Sort the array
2. For i = 1 to n - 1
3.     No_of_operations += min(arr[i] % k, k - arr[i] % k)
4. If arr[0] > k,
5.     No_of_operations += arr[0] - k
   else
6.     No_of_operations += k - arr[0]
Consider this example:
where arr = {9, 5, 18, 21, 7} and k = 7
Note that going by the steps, we first sorted the array, changing arr to {5, 7, 9, 18, 21}. Now we check and change arr[i] to a suitable multiple of 7, for i ∈ {1, ..., n-1}.

For the second element (i = 1), min(7 % 7, 7 - (7 % 7)) = min(0, 7) = 0. Therefore, No_of_operations = 0.

For the third element (i = 2), min(9 % 7, 7 - (9 % 7)) = min(2, 5) = 2. Therefore, No_of_operations = 2.

For the fourth element, min(18 % 7, 7 - (18 % 7)) = min(4, 3) = 3. Therefore, No_of_operations = 5 (3 + 2).

For the fifth element, min(21 % 7, 7 - (21 % 7)) = min(0, 7) = 0. Therefore, No_of_operations remains 5.

Now, as 5 < 7 (arr[0] < k), we add 7 - 5 = 2. Therefore, No_of_operations = 7 (5 + 2).
And we get the minimum number of operations as 7.
Implementation
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n; // no. of elements in array
    int k; // new GCD

    cout << "Enter number of elements in array: ";
    cin >> n;

    cout << "\nEnter the array: ";
    int arr[n]; // array
    for (int i = 0; i < n; i++)
        cin >> arr[i];

    cout << "\nEnter value of k: ";
    cin >> k;

    if (k == 0)
    {
        cout << "\nInvalid value for gcd";
        return 0;
    }

    sort(arr, arr + n);

    int min_operation = 0;
    for (int i = 1; i < n; i++)
        min_operation += min(arr[i] % k, k - (arr[i] % k)); // shift towards closest multiple

    if (arr[0] > k)
        min_operation += arr[0] - k;
    else
        min_operation += k - arr[0];

    cout << "\nMinimum Number of Operations: " << min_operation;
    return 0;
}
Important note:
Since we are calculating remainder values, we need to take care of the "Divide by Zero" error that might arise when k = 0. As in the code, we can simply check k and report it as "Invalid" if it is zero. It doesn't make sense for the GCD to be 0, right?
Sample I/O
Complexity
- Time complexity:
Θ(N log N)
- Space complexity:
Θ(1)
With this article at OpenGenus, you must have the complete idea of the algorithm to find the minimum number of increment and decrement operations to make GCD of an array 'k'. Enjoy. | https://iq.opengenus.org/minimum-operations-to-make-gcd-k/ | CC-MAIN-2020-29 | refinedweb | 863 | 70.63 |
#include <nrt/Core/Blackboard/MessagePosterResults.H>
The results of a post()
This essentially is a list of std::future objects, one for each callback that was triggered by a post(). Each future provides access to the future result of one callback triggered (in a parallel thread) by the post(), and that result becomes ready and available when the thread executing the callback completes. Additional convenience interface is provided to wait for all callbacks to have completed their work, to check whether some result is available and ready, and to get() the next result. Note that some of the futures may throw when you attempt a get() on them, if the associated callback threw. We do not implement perfect exception forwarding; instead, the only exceptions that one might expect here are either nrt::exception::BlackboardException (when something went wrong at the Blackboard level, e.g., a network error communicating with a remote Blackboard), or nrt::exception::ModuleException (when something went wrong in the callback function and it threw). Any other exceptions that callbacks may throw are wrapped into an nrt::exception::ModuleException.
Canonical use is:
If you care about the results, then:
If you want to handle exceptions, typical usage becomes:
Note that once you catch an exception, you should either handle the error and swallow the exception, or, if you are not going to handle it, just re-throw it as is. The Blackboard will then automatically update the exception as it propagates it up the caller chain, so that a full trace of nested calls will be visible from the exception.
Finally, if you have other things to do while callbacks triggered by your post are processed, and you do not want to block, then just do something like:
Note that MessagePosterResults is not inherently thread-safe, so in the unlikely event that you may have several threads consuming its results, you would have to ensure thread safety with an external mutex.
Alternatively to the above way of using results, you can also use an iterator-based syntax explained with the iterator definition below. See test-ResultsSyntax.C for examples.
Definition at line 150 of file MessagePosterResults.H.
Destructor. Will block and wait on any result that is still pending and then get() it, which may throw.
Yes this is a potentially throwing destructor, but that's ok as long as people don't derive from MessagePosterResults (destructor is not virtual anyway). If you want to avoid waiting and possibly throwing here, just make sure that you waitall() and get() everything before your results run out of scope.
Definition at line 72 of file MessagePosterResultsImpl.H.
Wait for all callbacks to complete, up to a maximum total wait time.
Returns true if all callbacks have completed within the max allotted wait time. With the default value of maxwait, we block until all callbacks have indeed completed, and always return true. If you just want to check for completion without blocking, just pass a zero maxwait duration. Note that waitall() just waits and does not examine the results, hence it will not throw any exception possibly thrown by one of the callbacks. Exceptions, if any, are thrown by get() or waitgetall().
Definition at line 81 of file MessagePosterResultsImpl.H.
References nrt::now().
Get the next available result.
We return results as soon as they become ready, so they come in unspecified order. Calling get() will possibly block until the next result is available, test for ready() before you call get() if you want to be sure to not block. Calling get() will possibly throw an exception if the associated callback threw.
Definition at line 129 of file MessagePosterResultsImpl.H.
Wait for all results and run a get() on each of them, then trash the obtained return values.
This is useful if the return type of the posting is void or if you don't care about return values, but want to make sure that everything completed correctly without throwing any exception. Just waiting using waitall() will not reveal any exceptions that were thrown by callbacks. Note that the MessagePosterResults destructor calls waitgetall, so it may block and throw if you have not taken care of all results already.
Definition at line 152 of file MessagePosterResultsImpl.H.
Get access to the first iterator.
Definition at line 208 of file MessagePosterResultsImpl.H.
Create an iterator that is equivalent to the 'end' of the sequence of futures.
An iterator created by end() is the only
Definition at line 217 of file MessagePosterResultsImpl.H. | http://nrtkit.net/documentation/classnrt_1_1MessagePosterResults.html | CC-MAIN-2018-51 | refinedweb | 750 | 52.19 |
Open Source's Achilles Heel 476
Tony Shepps writes "From sendmail.net comes an essay by UI consultant Mike Kuniavsky, "It's the User, Stupid", on what might be Open Source's worst failing: User Interfaces. In short, Open Source is geeks writing software for geeks, and usability suffers... and maybe that's an inherent problem with the model. "
Perhaps we need an application UI markup language? (Score:3)
How much of this is already done using HTML and XML? Doesn't Mozilla already use an XML derivative to define its own internal dialog boxes? How difficult would it be to pull *all* formatting and location information out of these things and use style sheets to render it all?
This would allow us to do things like run "X" apps via telnet, using a textual transformation/style sheet, rendering controls using nice ASCII characters.
It would also let us *completely* re-do the formatting present in any application's window in a consistent manner. Move all menus to the top, all operation buttons to the bottom right, etc. Applications could come bundled with their own sets of style sheets, with broader positioning and styling done by system-wide style sheets, all the while giving the end user the option to move and resize controls to their taste. Everything could be themable, and I'm not talking just pixmaps for buttons.
How feasible would this be to do?
Re:A list (Score:1)
No, that actually deletes 10 lines.
:-)
There was a good manual on technical writing (which I have since lost) that pointed out when you have to bend the grammar rules to avoid this sort of thing. Their example was 'To delete a line in command mode, type dd.' dd will delete the line, but dd. will delete two lines! Quotes are also supposed to include a trailing period. But in the case of literal commands, quotes are just to include the command and ignore all other grammatical rules.
Re:UI in Open Source programs (Score:1)
I belive the expression is 'throwing the baby out with the bathwater', but I imagine there are countless regional variations.
Your friendly neighbourhood pedant.
Articles like these are tiring. (Score:2)
What bothers me about articles like this is that they tend to perpetuate the stereotypes. The other thing that bothers me is that the very idea has the feel of "we need to find yet another weakness in what these people are doing."
The February Linux Journal has a GNOME article in which the author mentions a UI team. I've been expecting that to happen at some point. Open Source projects with UI teams--maybe even usability teams---what kind of Achilles' Heel is that?
Re:The author's fundamental mistake (Score:3)
Can someone think of an example Open Source project where the developers are not users?
I think the more appropriate question is: can anyone name an open source project whose users are only its developers? Obviously the developers of a program in the Open Source community are going to be users, that's only natural, but if all they care about is limiting the usefulness of the program to themselves and not creating a program that can be used by 98% of the population, then the article isn't focused on them.
The problem that lies in most of the comments is people are getting stuck on the GUI only, which is one aspect of the entire User Interface. As I'm sure everyone will agree, there are some absolutely gorgeous GUIs available for Linux and on the flip side, there are some absolutely horrendous ones. One of the problems that the Linux community will see as Linux becomes more popular with "regular" users is that there is no consistency between the different GUIs.
However, the article discusses UIs in general and (from recollection) doesn't even mention GUIs. The user interface of a program is more than the GUI, as I stated, it's about what information do you present to the user, how is that information organized, etc.
As an example, consider configuring a standard installation of Apache and Sendmail. Neither has a user-friendly interface for configuration; sure, such tools are available, but you have to seek them out. For experienced programmers and users, working with the configuration files is fine: get in there, make the settings, save and restart. However, when someone from the 98% of the population without that experience tries to do it, it almost always ends in frustration.
And the README files for most programs only add to that frustration for a new user, because they are slanted towards those who know what they're doing and understand the slang. This is the perfect example of creating an "in-crowd" in the Linux community. If you're experienced and you know what you're doing, you're a welcome guest to the Linux community, but lord help those new users who try to ask a simple question, because they'll be tossed out of the community amid cries of "RTFM." Yeah, thanks for being helpful. It certainly doesn't help when TFM is wrong, as in a case I ran up against with an O'Reilly book and one of the examples in it. Check the online errata? It wasn't mentioned in there; I even e-mailed the publisher and the author and still didn't see the correction show up.
The fact is that the developers want to write a program that does what they want and making an easy to use interface for beginners or those less experienced then they is last on the list.
Good, bad? I won't vote either way, it's just the way it is...right now.
"the writer is wrong here, here and here.." (Score:4)
If we pretend that it isn't broken, no one will fix it.
Evolution of UI (Score:2)
Let's see the evolution of UI...
1) Plug some wire in some holes
2) Punch holes into little paper card
3) Use a terminal to punch holes in little paper cards
4) Use terminal to type command and see response
5) Use a device to move a pointer on a ugly graphical screen
6) Use a device to move a pointer on an ugly complex grapical screen
7) Use a device to move an animated pointer on a good-looking complex graphical screen
8) use a device to move an [animated] pointer on a good-looking simple graphical or text screen.
Guess what #8 is... Hypertext Markup Language... Who built that thing? Not a commercial entity, but a bunch of people who wanted to empower the user with an easy-to-use interface.

Guess who is going to make that kind of interface THE user interface in the next consumer version of their OS... MICROSOFT...

Most geeks have no clue about UI design... But a handful of geeks did have great clues about UI. With their great infrastructure skills they developed a new kind of UI that is powerful yet simple. They made the Internet a useful tool for the average (and less-than-average) user. Now everyone is building on the geek UI.
Article about 2 years out of date (Score:2)
KDE has a GUI which is not that different to that provided to a Win'9x user, and should not be that intimidating. In KDE [which I use more than GNOME] it can be said that KDE apps are much more consistent in the application of style and in operation than almost any other GUI, including Windows. KDE is _perhaps_ less innovative than Gnome in terms of presentation, but this is in many ways an advantage: users who prefer more exciting interfaces can go for Gnome, whilst users who prefer consistency [and stability] can stick with KDE.
Open interfaces can be quite radical in the sense that arguments can be put forward for changes in direction when it is obvious that things don't work as they are; whilst in Windows older bad designs are often supported, sometimes to the detriment of newer ideas.
Re:A list (Score:2)
A good writer can make a good interface better, or really, really bad. Maybe what open-source user interfaces need are better tech writers.
Re:keep it simple, stupid (Score:2)
This begins to become a moot point however (as others have pointed out here) with all the advances in gnome/KDE. They're almost ready for prime time. Nice look for the average user, without the commercialized "bells and whistles" feel.
Shortcomings of the new Open Source UIs (Score:5)
I hate the classic MacOS because it lacks memory protection and real multitasking, but of all the user interfaces in existence, classic MacOS stands out as the one UI that gets things right.
Re:A list (Score:2)
How about some examples of where Open Source is leading the field with its GUI?
Re:UI in Open Source programs...UI!=GUI (Score:2)
What is needed here is a good open source UI initiative, bringing together the Academics in our community who are knowledgeable in UI Design and the developers in our community who are working on GUI Design, to start generating a logical, innovative, and consistent design to be implemented throughout GUI software under Linux. Sort of a GUI Standards Association to set guidelines on development that would define a consistent interface and methodology of use. Conforming to it would have to be completely voluntary of course, but it might be of great use to future developers to have a body of code already built to perform all of the basic functions that they can draw upon when developing a new program.
Obviously, this will not impact the currently developed non-GUI programs which are already using their own command systems - I am sure if anyone were to suggest monkeying with the commands which control Emacs or VI there would be riots in the streets and public hangings - religious people can often be quite fanatical about change
:)
Reaching a consensus on GUI design would no doubt prove to be a challenging task - but I am sure we have folks in the community who are both knowledgeable concerning UI design, and capable of approaching the task from a fresh perspective. A lot of research has already been done on this in the Academic world - all we need are folks to actively interpret that work into something comprehensible by developers.
:)
Just my 2 cents worth.
Re:Shortcomings of the new Open Source UIs (Score:2)
Re:UI in Open Source programs...UI!=GUI (Score:2)
So while your argument that it was copied is probably correct, it really doesn't matter. Isn't that the whole point of creating things? To take good ideas from something and improve upon them. I don't credit Windows with creating everything; I will credit them with putting it together fairly well. I have limited experience with Linux, although I would much rather see it succeed because I agree with the principles behind it. If I could get everything I can in Windows under Linux, I would probably switch. Unfortunately, we've got a lot of tools that we use that are only for Windows, and we need them to get by.
Sorry about the rant
Re:A list (Score:2)
I think vi has a great interface. Want to delete 5 lines? Type 5dd. No menu to find, and very efficient and intuitive, but only if you already know how vi works.
The point of a good user interface in the "usual" sense is to reduce the learning curve for new users while providing fast, powerful functionality.
Re:UI in Open Source programs...UI!=GUI (Score:2)
I think the point of the article was that good UI doesn't necessarily equate to having a GUI interface. It's more about how programs have a shared, consistent interface. For example, the shortcut to quit in Gnome programs is ctrl-q, in netscape and some others it's alt-q, and xemacs and emacs set it to ctrl-x ctrl-c. If I want to quit a program, I don't really want to have to stop a second and figure out what the shortcut is, or end up pressing two or three shortcuts in order to quit. The MacOS and WinXX systems are a lot better in this regard since certain common functions are mapped to the same shortcut in all programs. If I want to quit a program while using those OSes I can just hit alt-f4 or apple-q and know that it'll work.
Re:Why not software by geeks for geeks? (Score:2)
My favorite example is games. We think all is well with Linux and games because Loki etc. are now porting commercial-quality titles. But the actual development of such quality games under Linux requires that the non-programmers of a software team migrate to Linux as well: to edit the video, the story, the dialogue, the puzzles, whatnot. And that will require tools that are easy to use by those non-code guys... and trust me, artists and editors can be really stupid.
So there you have a concrete example: if you want companies to build games, their teams will need content editors with ultra-easy-to-use interfaces. That's an example of cool apps that haven't even started to come through.
So can you really be happy to keep Linux in the niche, avoid its commercialization, and at the same time cry out for cooler games and cooler multimedia native to Linux?...
Not to mention that a boom of Linux on the desktop would probably be a great incentive for OEMs to add support for their scanners, pointing devices (tablets, ...), printers, modems and whatnot to Linux.
It will happen eventually (Score:2)
However, I firmly believe that an XML grammar for GUI definition is right around the corner. It's just one of those ideas that makes sense.
Consider, for example, an MSVC++ project. When you create a form and start dropping widgets on it, the IDE builds a resource script describing the layout.
If you open that resource script in a text editor, you can see that it is already essentially a markup description of the GUI.
My uninformed hunch is that we will eventually see "GUIML" libraries which translate markup into native widgets. The concept is similar to Java VMs, but much easier to implement. It should allow GUI apps written in "compiled to native" languages to have a lot more portability than they do right now.
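The hunch above is easy to prototype. Here is a minimal sketch in Python, using the standard library's XML parser; the "GUIML" tag vocabulary (window/button/entry) is invented for illustration, and the dicts stand in for the native widget constructors a real translation library would call:

```python
import xml.etree.ElementTree as ET

# Hypothetical "GUIML" document; the tag names are invented for this example.
GUIML = """
<window title="Hello">
  <button label="OK"/>
  <entry name="username"/>
</window>
"""

def build(node):
    """Translate one markup element into a plain dict standing in for a
    native widget. A real GUIML library would call the toolkit (Gtk+, Qt,
    Win32, ...) here instead, giving the same markup a native look on
    each platform."""
    widget = {"type": node.tag, **node.attrib}
    widget["children"] = [build(child) for child in node]
    return widget

tree = build(ET.fromstring(GUIML))
print(tree["type"], [c["type"] for c in tree["children"]])
# prints: window ['button', 'entry']
```

The attraction, as the comment says, is that only the small markup-to-widget layer is platform-specific, so a "compiled to native" application carrying its GUI as markup becomes much more portable.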
Re:Shortcomings of the new Open Source UIs (Score:2)
Git the gun martha, it's another MacHead. Right. They're associated with the application they control. The interface is not modal. I'd really love to see the visual chaos that would result from focus-follows-mouse policies.
> The taskbar on the bottom of the screen has buttons that do not extend all the way to the edge of the screen
This is one of many reasons I despise the gnome panel and the new KDE one. Not only do they not extend to the bottom, they're now crammed into a grid you have to aim the mouse at. The others being that it's too damn big, and applications still maximize to cover it.
> In many window managers, nondestructive buttons such as Maximize are placed right next to destructive buttons such as Close
No argument there, though when using a Mac, I often wished for a maximize button of some sort. Minimize was easy enough (Hide) but it was a mouse operation or a chaotic-looking "unhide all" command to get it back.
> To proceed from a menu to a submenu, it is necessary to manuever the mouse rightward with surgical precision
Motif displays this annoying behavior of having zero delay before activating a submenu. Gtk+, Qt, and Win32 do not. In Windows, the delay is even configurable, using any number of "tweaker" control panels. As an indictment of Windows, as this was meant to be, it doesn't hold up.
Classic MacOS did many things right, but it wasn't without its awful warts too. A proliferation of popups that were modal. A chooser interface with a little bitty non-resizeable window for services. Dragging disks to the trash to eject them. Having only one corner to resize windows, often forcing you to move the window around to expose the drag handle first. Mysterious behavior when clicking a label of an icon, causing you to edit it instead of treat it as a normal click (bogus behavior nearly every interface since has slavishly emulated).
Duh! (Score:2)
Everybody keeps wondering why Linux isn't taking over the desktop (and never will). I've been saying forever that since non-programmers don't have a say, it won't ever be useful to them. Geeze, you don't have to be too bright to figure out that most open source projects (that people would love to think would work for the average joe, like Linux) are nearly IMPOSSIBLE for most users to catch on to. Pointing out exceptions just proves the rule. There's no sign of this changing.
Go ahead Linux freaks, moderate me down.
Re:Exactly!!!! (Score:2)
My main gripe is with the implementation idea. You're defining an entirely new language! Why not just stick with HTML?
Although, as one who is migrating reporting from Access to HTML with a self-maintained Web displayer (doesn't handle any network stuff, but displays HTML properly (tables and all that fun stuff)) I'd say screw it all and just go with a common DLL that you can link.
It'd be a lot easier to update that way as well.
Enjoy the show!
Jezzball
Quake, Hack, Zork (Score:2)
And what about the graphic tiles introduced for Hack. Now you don't have to know that a brown "d" is a jackal on level 3.
Bring on Quake, and it's a whole new game. They're all essentially the same, but a real-time graphical view out of the eyes of the character is so much better than "You are in a maze of twisty little passages. #>"
Re:Exactly!!!! (Score:2)
Good point. Actually I would use XHTML or XML instead. This might be applicable, but I only downloaded the full specs on XML a few days ago and have yet to read them.
I'd say screw it all and just go with a common DLL that you can link.
DLL? Do you mean a shared object? This isn't a Windows machine. Like I stated in my post, I don't want to guess what the utility is doing, and I don't want to look through source code to figure it out. A simple standard config file would work. If you add a new utility and need to add configurations to it, just place your configurations into the
I recommend different files in a single directory since you don't want a bad install to destroy current files (Windows Registry anyone!).
The format, XML or whatever, needs to be able to run commands as well. So the user would need to su to root to run their browser or other application that would read these files. Since the files are ASCII, the likelihood of a virus would be minimal.
Steven Rostedt

The only hard part of this is recognizing the need. Achilles' heel? Far from it, it's just another hill to climb.
The fact is that it's much harder to make good end-user software than it is to make good infrastructure software - and that's going to make it tough for Open Source software to break out of its server niche.
UI != GUI, guys (Score:2)
Nor is it a Microsoft vs. Open Source/Linux thing either; many people who take User Interface design and research seriously think that MS products have huge UI design problems. Dialog boxes that say (in effect) "I've just made a huge unrecoverable internal error, click OK to complete your humiliation. OK?" are rampant in the MS Windows world, and are an abomination from the point of view of UI design. (So, for that matter are the "Click OK to indicate that you really meant to click OK" boxes.) I suggest instead of accusing him of not being clued in on how cool GNOME and KDE are, or saying "GUI? We don't need no steenkin' GUI!", people might try reading Alan Cooper's About Face: The Essentials of User Interface Design to see where he's coming from.
Heart in the right place (Score:2)
X-windows application GUI's are in much worse shape. The much ballyhooed GIMP is a great example. It's filled with all the usual fluff--tips of the day, all sorts of configuration options, toolbars everywhere--but just about every panel you bring up is so brimming with things to tweak that you don't know where to begin. It's a pile of stuff all thrown together with a GUI on top, but that doesn't mean it was well thought out.
What is desperately needed is a good example from someone who knows what's important and what's not; someone who isn't just trying to show Microsoft a thing or two by duplicating a Windows style interface in his basement coding lair. I've read some very interesting UI design papers over the years. Jef Raskin has much to say. There's also an excellent book from someone who used to be in charge of such things at Microsoft and left when he didn't like the direction they insisted on taking. The key to remember is that the purpose of an application is not to shuffle windows and menus and toolbars; it's to actually get a particular job done.
Yes, the Linux UI sucks. (Score:2)
UNIX has an awful interface. Denying there's a problem doesn't help. Being macho about it doesn't help. Bolting some kind of GUI on the front of a collection of command line programs doesn't help. X-Windows doesn't help. Skins don't help. Needing to open a shell window to do anything doesn't help.
User interaction design is hard. Even starting from scratch may not help. Java, for example, blew it.
There are some good examples: The early Macintosh user interface guidelines. Microsoft Word. Nokia cell phones. But not many.
Today's hint: The user should never have to tell the machine something it already knows.
My take (Score:2)
There are those of us who can design fantastic user interfaces and come up with nifty ideas, but it's very difficult to just enter an OSS project and re-design its interface. Even when the original interface is crap, many OSS developers like it (which brings us back to the whole "coders have no UI taste" argument) and in some cases are openly hostile to anyone who wants to change it.
Someone else mentioned that functionality and efficiency is stressed more with OSS software, and what might appear like a UI failing might just be the author trying to stress functionality over "extraneous" cruft. Unfortunately, in many cases, certain usability factors seem either unnecessary to him, or "not worth it" to code, so we end up with applications that are pretty nicely featured, but lack good thoughtful usable interface design.
I think the main hindrance that prevents UI designers from contributing to OSS projects is that the designer needs to be a good coder. It's difficult for a UI person to step into a project and simply re-design the interface. They have to be able to go into the code and make adjustments at the code level.
Naturally, people that are experienced both in coding and interface, usability and graphics are few and far between. That's not to say that they don't exist, but typically they're quite happily employed because most companies know combinations like this are hard to find too.
"Theming" your application is a nice way to try and abstract the interface from the application. Unfortunately, unless you spend a considerable amount of coding building themes into your application, this just doesn't go far enough. Sure, you can make, say, GTK themes, but that doesn't help you re-design an application's window form.
What is the present state of "graphical" application builders? I know in your various "Visual" Windows IDE's you can create windows and forms and entire user-interfaces graphically, without necessarily needing to know much about the coding.
How easy is it for usability and interface people to contribute to your OSS project?
We need a Microsoft Bob-esque interface for X (Score:2)
Re:Keystrokes (Score:2)
I've found this to be true as well. Years ago I switched out a user's DOS menuing system for QuickMenu, which used icons. I thought it would be easier for him. But he quickly asked me to put back the old menu, since he found it easier looking over the old list of programs and typing in the number listed. Another person I knew used WordPerfect for years right up to WP 5.1. Then she got a new computer with the "new and improved" graphical version, and her productivity plummeted. She went back to WP 5.1 and plain DOS on a 286. She still uses it and is one of the leading medical transcriptionists in California.
Personally, I like GUIs. But I'm wondering if that's because no two programs have the same keystroke commands. I know that the bottom entry of the far left menu will close the application. But I never know if that command is E&xit, &Quit or &Close. And whenever I'm using emacs, I type 'i' to start typing.
A list (Score:4)
vi
Emacs
Apache
sendmail
bind
TeX
X11
And how many of them are easy to use? How many of them have interfaces that are just evil?*
But hope is looking up. Look at these newer open source projects:
GNOME
KDE
Mozilla
GIMP
So, I think the community is doing pretty well giving geek interfaces to geek tools. The user interfaces on user tools aren't doing too badly either. They're not excellent yet, but even interfaces benefit from the open source model.
* Trick question. Only sendmail's interface is evil; the rest are merely difficult.
Re:BS... (Score:2)
I've got nothing against *BSD personally, but I think that the 'it's designed for hackers' complaint is much more true of the *BSDs than of mainstream Linux distros like Red Hat, Caldera and SuSE.
XF86 has a known bug with ATI Rage graphics cards, such as is present on-board on my current mobo. My hsync is completely out of whack, and I can't even start X with something vanilla like SVGA or even VGA16 servers (640x480: mode not defined. no valid modes found. exiting.)
It must not be a completely universal problem because I did an install of Red Hat 6.1 at work on a Compaq Deskpro EN series machine here with onboard ATI Rage video the other day and it not only came up with sync correctly, it even PnP detected by name the monitor (Compaq V75). I was quite favorably impressed when that happened, as were a couple of other people who had dealt with getting X running in the past.
The RH 6.1 CD directly booted up on the Compaq, came up to a GUI installer, it recognized the built in ethernet on the motherboard, recognized the video, monitor and mouse. About the only part of the install that I'd say was perhaps beyond what the average person could handle (without reading the book) is disk partitioning.
When I wanted to add a few additional packages, I found the GUI RPM tool actually worked pretty well. Again, I was favorably impressed.
I think that the article has its points, but I would say its conclusion that improvements in usability will not or cannot happen is off base. I've seen things come a long way since I first started using Linux back in 1993. Don't get me wrong, things aren't perfect yet, but the gap for point-and-click users is narrowing a lot more quickly than the article would lead you to believe.
Re:Hard to use at first != Unintuitive (Score:2)
Not to belabor the point, but here's what Joe newbie should be able to do when using an intuitive editor for the first time:
#vi foo.txt
type some text to edit the document
What really happens:
user: vi foo.txt[enter]
vi: ~
user: type
vi:
user: ?
vi: ?
user: [enter]
vi: error
user: q
vi: (beep)
user: x
vi: (beep)
user: [escape]
vi: (beep)
user: dsaf (hitting keys at random)
vi: f
user: fghsdf
vi: fghsdf
You get the point.
--GnrcMan--
Re:"the writer is wrong here, here and here.." (Score:4)
It's also difficult for someone who is working in one discipline (i.e. the kernel) to understand/accept the rigors of working in another discipline (i.e. UI design).
A good UI takes years of research and design. It's not a matter of making the windows look pretty. It's making them work pretty.
GUIs, MSVC++, etc (Score:3)
Credit (Score:2)
And besides, who'd want a Nova, when you can get a perfectly good Dart Swinger for the same price...?
Exactly!!!! (Score:3)
A first time user doesn't understand most of this. Think of yourself as going to dance lessons. You start off feeling funny and don't understand all the terms. You feel stupid by asking and looked down on by others that have been doing this for a while.
If you want OSS to succeed, you need to make it easy. If we can think of a standard interface utility that performs "Control Panel" operations, then this could help not only the users, but the gurus themselves.
If we can come up with a standard configuration tool that takes an ASCII config file as input and produces a simple user interface, then this would accomplish the task. Something that would be a standard, much like RPMs are today. Say we have a Linuxconf-style interface that would get its commands from a file.
Example:
# hostname command
start command;
label: hostname
entry: text
run: hostname text
end command;
Where this would be read by some utility, which shows a label stating hostname: [ ]; then, when the user enters the hostname, the command "hostname <entered hostname>" is executed.
This way all new utilities can conform to this format and append to the config file, or just have a single config directory (like
Of course, this utility would need to be thought out quite a bit. Maybe have a separate organization under OSI to make the standard. My example was just to make a point and not actually a real example.
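To make the point concrete, the toy format above only takes a few lines to parse. This is a sketch in Python; the "start command;"/"end command;" grammar is the poster's invention, not an existing standard, and a real tool would validate the command rather than blindly string-substituting the user's input before running it:

```python
def parse_commands(text):
    """Parse the toy 'start command; ... end command;' format from the
    example above into a list of {label, entry, run} dicts."""
    commands, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        if line == "start command;":
            current = {}
        elif line == "end command;":
            commands.append(current)
            current = None
        elif current is not None:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    return commands

CONFIG = """
# hostname command
start command;
label: hostname
entry: text
run: hostname text
end command;
"""

cmds = parse_commands(CONFIG)
# Substitute what the user typed for the 'text' placeholder in the run line:
user_input = "myhost.example.com"
cmd = cmds[0]["run"].replace("text", user_input)
print(cmd)  # prints: hostname myhost.example.com
```

A front end (curses, Gtk+, Web, whatever) would only need the label and entry fields to render the prompt, which is exactly the separation of interface from utility being proposed.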
Steven Rostedt
Re:BS... (Score:2)
You are entitled to your opinion, but I would disagree with that. I can't say much about MacOS X, but in the case of Windows 2000, unless it is really a giant step beyond NT 4, I can't say that I am all that impressed. It is flashier, and perhaps slightly more consistent than Red Hat, but I find NT frustrating. I don't necessarily equate fancy with advanced. The NT environment may be easier for people who don't like to learn anything to stumble their way through, but for people who know what they are doing, things just take too much click-click-clicking. There are a lot of things I'd rather do by command line or script, and those things just aren't that nice to do in the Wintel world.
"Failure"? (Score:2)
The goal of open source, as I see it, is not to make software designed for the least common denominator. That's what Microsoft and AOL are for.
We are not failing if our software seems too complicated to a computer-illiterate newbie -- that's not who we designed the software for.
The goal of open source is to make software that doesn't suck. As long as we continue with this, we're doing fine.
Open source software is usually designed to be very powerful and flexible. With flexibility and power comes complexity. Do we sacrifice powerful software for "easy to use" software? It cannot always be both.
Re:A list (Score:2)
I tell them that the standard UNIX text editor is ed. I get them to fire it up. I try to hide my laughter. Sometimes, I have a hidden camera. Nothing beats videotaping a newbie's first session with ed! Then I say, "A lot of people prefer vi to ed." Hey, it's true! They always say, "What's vi?" I say, "Its name comes from 'visual editor.'" As soon as the word "visual" escapes my lips, I can see their eyes light up. They fire it up. I say, "press i." And they're in heaven!
If I'm feeling really nice, I tell them about Emacs.
Re:Bingo! (Score:2)
But most people don't need or want that kind of power - they want an appliance. Something they can turn on, perform predetermined tasks with, and turn off. They want it to be inexpensive, reliable, and simple. Customization isn't important - simplicity is. We can't provide that with OSS at this point in time.
As far as "winning" goes though, Linux is useful today. Therefore we've won in a big sense. But the commercial development of Linux and the market acceptance of OSS have come in large part because the promise is there to build something suitable for Everyuser. Red Hat is worth billions in market cap, and can afford to pay lots of people to sit around and write Linux code that gets GPL'ed. That's not because geeks are a huge market - it's because of the potential to grow beyond the geek community. But without that potential, the explosive growth we've seen will go away, the funding will dry up, and Linux will go back to being an excellent OS that people hack on because they want to. But the increased pace of development and market acceptance we're seeing should say something. It tells me "I don't want to go back".
I'll share a dirty little secret with you: I use Linux on a desktop and a server at home, and a couple of servers at work. Users at my company use Windows NT, because it's "good enough" for their use (easy to use, and much more restrictive security than Win95). And when I go home, and want to just turn on the computer, check my email, read Usenet, write checks, and work on my book...
I use an iBook.
- -Josh Turiel
Hold on a minute! (Score:2)
Nobody is out to replace Windows or MacOS here. Total World Domination is a sarcastic battle-cry, one that's intended to promote the "Do It Yourself" ideology. We're not competing for Linux on every desktop here...
What the author suggests, at least implicitly, is that we need focus groups, marketing research and user-proxy specifications to tell open source developers what sort of GUI the anonymous masses want to see. Sorry, I don't buy it.
A certain level of technical know-how has always been the price of admission into Unixland. If you don't like it, then you'll like what you're given.
The MS-Windows interface is confusing to new users as much as the X Windows interface. DOS commands, the concept of directory trees and shutting down before shutting off are all foreign concepts to new users - until they learn.
This is the true Achilles' heel of open source/GNU/Linux: the steep learning curve. We don't need no stinkin' GUIs! What we need is a consistent means for newbies to become knowledgeable users. We need to make self-education easier. We should not come up with a conceptual kludge of a GUI that shows soft links as fuzzy icons and hard links as fuzzy icons with a sharp outline, or some such bull.
We need usable user documentation, online help that's not written in C or PERL, but rather in English (well, it's a start). man pages suck! We need a central repository for this knowledge, and we need it to be new user-friendly. HOWTO's are great once you know what you're doing and just need a heads-up on a specific item. They are after all, slightly polished notes on the process someone went through. They EXPLAIN LITTLE, they're recipes. We need readable docs, a Q&A bank, an online reference, and a human hand for new users to hold on to for the first few weeks.
Once they are off the ground, they'll run circles around MS-GUI users, in X, in ksh and in the {G}UI of their choice. That's the point. Empowerment. Choice. Can't do it without education.
As was stated in another of today's articles, (to paraphrase Marx)
A pretty face that abstracts the workings (and knowledge thereof) of a system is just sticking more flowers in the chain, so the user feels grateful for being led around by a sweet smelling garland. It's not right.
Sometimes, a non-intuitive (to an ignorant user - no offense intended) interface is the best way to represent the underlying concept beneath it. People who understand the inner workings of the system see it a certain way - true to the core functionality. Misrepresenting the system beneath the interface does the user harm in the long term. It keeps new users from becoming knowledgable about HOW the system works.
Let's not try to save ignorant users from their own ignorance by giving them an idiot-proof interface. Instead, let's teach them to fish.
Re:Rules for writing BS about OS (Score:5)
(installed by three computer engineers: one a Linux user since before it was a buzzword (Patch Level 0.98 if you must know; that was July 1993 or so); one an applications developer with 18 years of development experience on a whole slew of OSes, and lots of UI design and development experience; and a computer engineer with 10 years of experience, plus a degree in psychology and a minor in ergonomics/human user interaction).
I'm the one with the 18 years experience by the way
A friend of mine has installed Corel Linux 1.0 (he downloaded the standard downloadable distro), and I played around with it today. He did the "Doofus" install - that is, he just stuck the CD in and said "Uh-huh" to all the questions it asked. It was installed on a Dell Optiplex GXM 5133 machine... and it sucks. Or maybe it's just KDE that sucks.
Within 5 minutes of playing with it, I managed to get the whole OS (seemingly) hung with a complete lockup except for the mouse. Now sure, I could probably have telnetted into it and unlocked it, but I'm afraid that I couldn't get the network working over DHCP (see later). What was I running to do this? Well, we had been running notepad, but I had one "KDE Explorer" (nee Windows Explorer) window open. I can't remember if I'd tried to open the floppy drive, or the CD-ROM drive, but that shouldn't kill everything. Even the keyboard LEDs wouldn't toggle. And of course, CTRL+ALT+DEL doesn't do ANYTHING. Couldn't reset it. Couldn't get a console window. ARGHGHHGHGHGHGHHGHHHHH!!!!!!!!!!!!!!!!!!!
Maybe this should be expected (after all, it's a Corel product), but personally I find that kind of worrying. I do have to wonder how much of this was KDE and how much of it was Corel's fault.
Speaking of KDE - lots and lots of performance issues were there - it kept being REALLLLLLY slow the first time it'd show a dialog, and it would visibly rearrange them on the screen. I'm guessing that it was written in something like Tcl/Tk, and that's why dialog resizing was so slow. Windows - on the same machine - performed approximately 5 times faster, if not more so, for its windowing/resizing.
Speaking of which, on moving the "Taskbar" to the top of the screen, and opening Netscape, Netscape opened up UNDERNEATH THE TASKBAR. If I put the taskbar at the bottom of the screen, Netscape would resize out of the way. But at the top of the screen? It was screwed - opening a window shouldn't do that. I had to move the taskbar to fix it.
Speaking of which, a couple of UI issues:
Telling it to "Refresh Desktop" (to move the icons out from UNDERNEATH THE TASKBAR AFTER I'D MOVED IT...) did so quite happily - the full-screen flashed, but it did it.
Telling it to, however, "Arrange Icons" (surely the same thing?) instead caused it to throw up a big nasty DESTRUCTIVE OPERATION WARNING Dialog Box asking me "Am I sure that I want to rearrange the Icons?" Yes/No?
What the HELL is that about? And why not the same thing for Refresh Desktop, which appears to do the SAME THING? Huh?
Speaking of UI inconsistencies... yes, when I hit "APPLY" in a dialog box, I want it to apply NOW. I do NOT want it to ask me "Do you want to apply changes now?" - because I just told it to. At least you can turn this off for deleting files in Windows.
Context menus are missing on edit boxes - but for some reason, they work happily on the desktop. WHY ONE BUT NOT THE OTHER? THIS IS INSANE. IT'S TOTALLY INCONSISTENT.
Speaking of which, Right-Mousing on the desktop left the focus on the desktop - with the control panel I had open still left looking active (including the selected text in the box, and a flashing caret)... sorry, if the window's not active, I want to KNOW about it.
Also, selecting text in a box and typing does NOT delete the extant text in the box - which is also totally counter intuitive - pasting removes what was there if anything's selected, so should typing. That's how it works on EVERYTHING else.
In the control panel, when trying to set up networking, I hit Apply. It asked me Y/N. I said Yes. I said "Close". It closed the *panel* I was working in. I ended up hitting CLOSE 4 TIMES before I could get it to work. And then I had to CLOSE THE APP. If Apply does exactly the SAME THING as the OK/CLOSE button, but the difference is that it asks you if you want to do it or not before it commits changes, WHY THE HELL IS IT THERE? THERE'S NO BENEFIT TO IT BEING THERE WHATSOEVER.
Also, shouldn't DHCP/BOOTP find a DNS server for you? In which case, shouldn't the "ADD DNS" dialog box be DISABLED when you're in DHCP mode? And also, why are the DNS settings boxes (IP, Default Gateway, etc) ALWAYS DISABLED WHATEVER YOU DO? Are they just there for show?
This is S*(&!@(*#& INSANE! THERE ARE CONTROLS ON THE DIALOGS WHICH DON'T DO ANYTHING!!!
So in conclusion, Corel Linux/KDE (after a 5 minute look at it) is pretty impressive. It brings Linux kicking and screaming into the GUI world - and it shows. Now don't get me wrong; I'm not coming at this from the point of view of someone who won't take the time to learn the system, but for all of Microsoft's faults, Windows is pretty damn good. In comparison:
Microsoft Windows has some pretty annoying flaws. It causes a lot of people to get angry because of these. Corel Linux, because it's running on Linux/KDE, makes people twice as angry, with twice the hassles!
Seriously - Corel Linux is pretty good. However, it still sucks. The KDE/Corel guys should get some people who understand user interface design, and create a CONSISTENT UI. Because at the moment, it doesn't cut it.
Never mind the fact that by the time we left for coffee, we still hadn't worked out how to get the network to work over DHCP.
---
Addendum:
My girlfriend (who has been using Linux since 1993 (pl98)) has just tried to install Corel Linux on her machine.
We put the boot floppy in. We put the CD in. We restarted the machine. It came up, detected the hardware, flashed, showed the wallpaper again, opened the CD-ROM tray and shut it again (almost like giving us the finger), and restarted.
And did the WHOLE THING ALL OVER AGAIN. AND AGAIN. AND AGAIN. AND AGAIN. AND AGAIN.
Eventually, after this many reboots (my friend Wes had only had to have one on his Dell), we tried removing the boot floppy. The machine promptly booted into Windows 98 (which Shana uses to play Dungeon Keeper). Now, either this machine was possessed by the spirit of Bill Gates himself (and thus wouldn't let Corel Linux pry it free from Windows), or something's rotten in Denmark.
The machine in question runs Mandrake Linux *fine*. But the Corel Linux install wouldn't even say what was wrong. It was just braindeadedly rebooting and rebooting and rebooting with nary an error message in sight. This is, by the way, COMPLETELY AND UTTERLY LAME!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Shana (pronounced Shaynna) would like to say at this point "Bring back the Linux Quarterly!!!".
----
I think basically what we have here is a case of "Attack of the Cargo Cult GUI Designers"
Simon
(the last sentence should have been sanitized for your convenience... but I was too busy)
Resource sharing. (Score:2)
Actually what makes it hard is the amount of system resources your typical program is using.
Take an example: I went from version 1.1.10 of the GIMP to 1.1.14, and suddenly more and more memory was being sucked into the program. Whereas a perfectly good program had once run on my machine with 16MB of physical RAM and 20MB of virtual memory, the machine now needed even more. So now I have something like 38MB virtual just to make sure the memory doesn't magically run out.
If people would just optimize the development, the UI could be more user friendly and still be interesting.
so what? (Score:2)
As someone said, OSS programmers have tended to put functionality far, far ahead of making pretty and intuitive interfaces. And really, that's the way I like it. To paraphrase from Allen Holub's great book, _Enough Rope to Shoot Yourself In the Foot_, we mustn't confuse ease-of-learning with ease-of-use. MS Word is relatively easy to learn: it has buttons with pictures, lots of visual metaphors, decent help, and that damned office assistant. But it's hard to USE. To anyone who learns Word's basic ins and outs, and starts to get comfortable with it, its "user friendly" features start throwing curveballs. In short, all the fluff designed to make it easy for new users to get into it makes it difficult for advanced users to stay with it. At the opposite end of the spectrum is vi. This little piece of ware has a totally un-intuitive UI. It's so complicated, it has its own Nutshell book. But everyone I know who's really taken the time to learn it swears by it. A vi user has basically taught the commands to their fingers, and doesn't have to really think about what they're doing, rather like a manual transmission in capable hands. Use of vi also doesn't require constantly losing momentum by having to go for the mouse. In short, it is difficult to LEARN, but easy to USE. (Another great example is Blender, a 3D rendering and modelling tool with the least intuitive interface you've ever laid hands/eyes on, but which is a breeze to use once you get the hang of it.)
Anyway, there's a lot of people who don't like this state of affairs. They think that we should all follow the lead of GNOME and KDE and their ilk, to copy the "successful" interfaces, in order to make Linux easier for the newbie (although I contend that those two projects have their own UI "issues" that make them bad examples). Go ahead and call me elitist, but I think the largest reason why this is a bad idea is that it will make Linux more marketable to the Best Buy/CompUSA crowd. You see, until now, the rude state of unix UI's has served to make it the domain of knowledgeable people. People who know that their computer won't work during a power outage, that their CDROM is not a cupholder, the difference between memory and disk space, that they can't order the 'net on a CD, et cetera. In short, non-morons. But people who buy commodity PC's and Macs, are, for the most part, morons, at least when it comes to applying some sense to computer use. The great majority of users are resentful that they can't just sit down and "get it", that they have to think and learn. They get angry when something doesn't work to their own unreasonable expectation, and usually take it out on whoever is trying to help them. From working in a small custom PC shop for 7 months, I encountered, studied, and catalogued quite a few morons. The only really important thing about them is this: I *don't* want them using Linux. Hell, I don't want them using a COMPUTER! These people should have those web-pad thingies.
So to those who cry that Linux UI's are in a horrific state, I say "so &%$#@*! what?" There certainly are some UI's, and many configuration schemes, that are not only difficult for newbies but for advanced users, and those DO need work. But just because you have to sit down and -- the horror! -- read the documentation before you can successfully handle a UI doesn't make it bad. In essence, we shouldn't start a crusade to fix UI's that aren't really broken.
MoNsTeR
Re:Perhaps we need an application UI markup langua (Score:2)
Not so much a window manager but something more akin to X itself. Something that takes UI elements (say, via a pipe or network connection, like X) and constructs a meaningful rendering of that UI for display. It could be graphical, text, auditory, whatever. So long as we have a consistent "widget" set, applications can use those elements to build an interface that fits in consistently with the platform the user is using.
I don't necessarily disagree about the practicality, but it seems like it'd solve a wealth of problems by removing the coder from the responsibility of creating a meaningful user interface entirely, and either leaving it up to the system, or people that know how to build style sheets to *make* a good interface.
So True (Score:2)
This is so true. A common thread around here is that someone will say "But doing X on Linux is hard, it was easier on my old UI" or something like that, and then the flames start. Linux is supposed to be for people who are smart enough to understand it, etc. I agree to a point, specifically around the infrastructure areas, but to be truly mainstream and industrial strength, Open-Source software will have to grow up so that the average customer can use it.
Never knock on Death's door:
Re:Rules for writing BS about OS (Score:2)
Re:BS... (Score:2)
If you got RedHat to install without problems on the first try, you are very lucky. Don't assume that just because RedHat installed fine on your particular system configuration, that it will install just as easily on someone else's machine.
I've installed (or tried to install) RedHat on dozens of systems, from antique 486's on up to brand-spanking-new Athlons, with all sorts of random peripherals. From my experience, RedHat has maybe a 15% - 20% (tops) chance of installing correctly on the first go on some random collection of parts. (Compared to about 75% first-time success with Win95 or NT [haven't played with 98 or 2000 yet, thank Ghod!]) Now, if I build a system from scratch using only parts that are on RedHat's level-1 supported list, I almost always get a clean install. Unfortunately, the list of RedHat's level-1 supported hardware is still miniscule compared to that of the Beast Of Redmond and its evil spawn.
There's an old quote that goes: "Unix is user friendly; it's just very choosy about who its friends are."
I think the article itself does a very good job of pointing out the weaknesses of open-source. Don't flame me; I think open-source is a great concept. But, in order for the open-source model to displace proprietary software, the open-source community has to be able to acknowledge its weaknesses and limitations.
The open-source model has proven itself to be very good for developing infrastructure software like operating systems, programming tools, and servers. Linux, Apache, and gcc are examples of open-source at its best. But remember, these are tools designed to be used by computer professionals, not end-users.
The vast hordes of double-digit IQ office workers and home users want software that automagically installs itself, has lots of eye candy, and holds their hand to do even basic tasks. The main reason Windoze has auto-play CD's is because so many people are just too stupid to be able to open the CD in Explorer and double-click on SETUP.EXE. Commercial software companies spend millions of dollars on usability studies and getting user feedback. If you want to pander to the mass market, you'd better give them what they want if you want to succeed.
We geeks tend to forget that not everyone is as technology-oriented as we are. Just think for a minute about how many people can't even set the time on their VCR or program it to record a show. Look at VCR-Plus -- a whole system created to make programming a VCR idiot-proof -- and even THAT'S too complicated for a lot of people to handle!
Get non-programmers using development versions (Score:2)
The key thing developers can do to make this process easier for non-programmer users is provide binary rpms or debs of recent development, complete with instructions on how to install them. You must also pay close attention to the feedback they provide, even if it is difficult to follow sometimes (and extracting the exact nature of a problem or complaint can be like pulling teeth). Mozilla, perhaps the free software project most focussed on end-user development, does both of these things.
Not so. (Score:3)
Re:BS... (Score:4)
Most programs installed with an RPM/deb go to a place that IS in the path, or that gets added to the path by the after-install script (at least in those RPMs that are correctly constructed).
myimage.png is in the current directory.
If it isn't, you type the whole path, easy.
both of which are very often untrue. Actually finding where the photoshop binary is located is not a newbie task.
Mmmmm...."whereis photoshop" is usually enough, very intuitive, even.
Neither is altering the path.
On this one I have to agree...altering the path is something a newbie can't usually do by himself, unless he's had some DOS experience.
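For the curious, altering the path in a Bourne-style shell is a one-liner; here is a minimal sketch, with a purely hypothetical install directory standing in for wherever the binary actually lives:

```shell
# Append a (hypothetical) install directory to PATH for this session only.
export PATH="$PATH:/usr/local/photoshop/bin"

# Making it permanent means adding the same line to a shell startup file,
# e.g. ~/.bash_profile or ~/.profile.

# The shell searches each colon-separated PATH entry left to right:
echo "$PATH"
```

The subtlety that trips up newbies is the session-vs-permanent distinction: the `export` takes effect immediately but evaporates when the shell exits.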
As far as that "open with" BS... You could just drag the icon for the myimage.png to the icon for the photoshop program. That's the proper way to use a gui. And probably even easier than typing your command line.
Mmmmm....two clicks to open My Computer (same if you want to use Windows Explorer), a bunch more to find the photoshop.exe program (unless you have a shortcut to it on your desktop, which usually means having to do it yourself...let a newbie try to do that on NT Workstation 4...not as easy as you make it sound)...then open another My Computer/WExplorer window, find the silly graphic, drag it to the photoshop.exe icon....yes, very easy, I can see that.
And yes, I know what I'm talking about, I'm writing this on an NT box...and I, among other things, train people on both Windows and Linux at work.
Windows is easy, as somebody else already said, only for those that are used to it, and as an example I'll put my mom...she started using computers about 18 months ago or so...I gave her the choice between running linux and windows ("what do you like better, what I run on my computer (Mandrake 5.x at the time, with E as my wm), or what my brother runs on his (winNT workstation)?"). She chose linux.
After about 3 months, her computer gave up (very old box...damn HD died after only 8 years), so she started using my brother's box while I was buying parts for a new box for her...after a week, she threatened me with bodily harm if her box wasn't working in the next week, cause she couldn't get a thing done in my brother's NT box.
Two days later her box was ready...15 months later, she still thinks all the "windows is easy" people have no clue what 'easy' means.
And no, before you ask, she had never used a computer before I gave her that old box with linux in it...and no, she didn't set up a thing in it, I did it all...just like I did with NT for my brother, DOS for my father (he runs only one program in his computer, so...boot DOS, load program, do stuff, exit program, turn off puter)...what people still can't understand is that setting up an OS is NOT something for the average Joe...I don't care who makes the OS, my mom will NEVER be able to install it.
And yes, she does install programs in her box sometimes (only RPMs, and if she gets a dep error, she ICQs me and tells me what the missing things are, and I find the packages for her and then she just does "rpm -Uvh *" in the dir where she placed them all).
Vox, who is REALLY sick of this "this is easier than that"
Does he have a point? Maybe (Score:3)
What's my point? Good UI design is not important for applications that cater to IT pros.
I wonder if he's looking at the apps that are built for IT guys?
The X Window System is not tough to use. It's certainly not any tougher than MS Windows. Star Office is a breeze, and so are the other apps targeted at "users" (xmms, gaim, etc.).
I don't think it's fair to bash the UI of Apache because my Aunt can't set it up. It wasn't built for her.
Completely Disagree (Score:3)
What this article says is that the OSS model has a hard time producing good GUIs, but that's not the case at all. We haven't needed good GUIs because our programs have never been for the end-user. Now that OSS is going mainstream, the end-user is being involved, and I think the OSS model is successfully tackling this challenge as well.
Re:I think there is a point here, but ... (Score:2)
Point one, I am not saying that everybody has to go and learn to program; I'm just saying that those who do are the only source of growth for an open source platform.
Point two, I am not saying that we should not have a GUI; I'm just saying that as far as I'm concerned a GUI should always be a secondary concern to functionality. For some applications, however, a GUI is legitimately part of the functionality (like paint/draw/CAD/3D modeling apps, where the task at hand is graphical).
Also, I don't have any problem with a GUI that has _NO NEGATIVE IMPACT_ on efficiency or flexibility. I refuse to sacrifice functionality so your mother (no disrespect to the lady; actually, it was my mother who taught _me_ BASIC) can use the same apps. This can be achieved in several ways: one is to make the GUI optional (like you can compile two versions of EMACS, one with X and one without, but the app is the same either way), and another way is to make apps that can be started with a command line telling them what to do, but if you start them without any flags, they will bring up a GUI to ask the user what they want done (like Aladdin Expander for Windows).
To sum up once again, GUIs are fine so long as they supplement rather than detract from functionality. The Amiga is a great example of a system where the GUI was well integrated but not necessary for users who preferred not to use it.
I do not mean to say that everybody must learn to program, but I am saying that everybody who is inclined to should be helped along, because they are the real, original, and final target of open source software.
You're not looking deep enough (Score:2)
Re:Exactly!!!! (Score:2)
Syntax is irrelevant. Hacker/Cracker, DLL/.so, they're all just names
Personally, I'd much prefer an ".so". That way one can make their own application. Applications can have preference boxes based on the linked object. Everything would be standard.
There's no way, imho, that a "universal" control panel would ever work. Apple originally had that pre-System 7. Sort of. And it sucked, lemme tell you. Linuxconf, e-conf, gnome-conf (or whatever that's called) all have similar problems. How do you deal with different sized windows, etc?
So yeah. Hope this helps explain that.
Jezzball
ls:
UI in Open Source programs (Score:4)
I have spent almost no time using Windows since I first ran into it in 1983 or so. I have avoided Win3.1, Win95, Win98, WinNT, and I will probably try to avoid Win00 (as in "Win zero zero") when it plops onto the universe. That doesn't mean I haven't USED it at all, but I haven't attempted to master it.
So, now it looks like I'll be starting a new job soon which uses WinTel all over the place. I'm not worried about that (aside from all the obvious places I should be worried: security, scalability, reliability, etc.
:-), but when I see other people using WinTel OS's it seems that the UI is nothing CLOSE to "immediately obvious". It's levels upon levels of options and configuration which isn't intuitive most of the time.
OK - so let's compare that to X11, Gnome, CDE, and even to older things like OpenWin. Are these UIs THAT far afield from Win9X or NT that they're actually MORE difficult to master? I think not. Sure, the key mappings might be different, and for most users there's some fraction of expected interaction with a command line, but even for Win9X people there is still a LOT of "delving into DOS" that takes place, even though there is a percentage of users who only interact with the GUI.
So what about this new generation of UNIX users (defined for convenience as the post Ultrix/SunOS/SCO people --- basically the "Linux era", although that also includes non-Linux UNIX, e.g., Solaris)? While it's not common to have users who never ever deal with the command line, I think we've reached the threshold where it could be done. The available client software that I find under RedHat and Gnome is very quickly eating away any need (other than the convenience that my fingers are hardwired to vi sequences and sentences like "ls -lt | head -5"
:-) for the shell. The file managers are no worse than what comes with Win9X, there's a Gnome "finder" that works as well as the one in MacOS (does anyone know why "find . -type d -exec chmod g+s {}\;" doesn't work under Linux?), and so forth.
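The find command quoted above most likely fails for a mundane reason: the `-exec` action needs its terminating semicolon as a separate shell word, so it has to be written `{} \;` with a space, not `{}\;`. A minimal sketch, using a throwaway directory under /tmp:

```shell
# Broken: the terminator is glued to the braces, so find sees the single
# token "{};" and complains "find: missing argument to `-exec'".
#   find /tmp/demo -type d -exec chmod g+s {}\;

# Working: note the space before the escaped semicolon.
mkdir -p /tmp/demo/a/b
find /tmp/demo -type d -exec chmod g+s {} \;

# The group execute slot should now show the setgid "s" bit.
ls -ld /tmp/demo/a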
At this point I think arguments along the lines of "well, Linux/Open Source loses because they have really geeky UIs" are more FUD than accurate. They will become even less accurate over a somewhat short time scale! One metric for testing this assertion is to see how many "normal" admin UNIX tasks (sysadmin or user administrative tasks) are being pushed over to programs with UIs rather than done in the shell. Programs like gFTP, xchat, etc. are definitely taking the place of all the command-line programs in my life, mostly because the UI is straightforward and gets me to complicated uses faster and easier than before. At that point, it's no different than searching for some obscure panel item in Win9X...
YMMV, of course.
usability (Score:2)
Once again, I think it's about mindshare, not technology per se. People equate typing to bearskins and stone axes. People also have developed a large amount of "muscle memory" when it comes to using a UI. I can't live without alt-tab; I have yet to retrain my fingers for any of the different "superior" keybindings, for instance. And many people now refuse to take their hand off the mouse, even if it provides an "easier" way of doing things.
Themes are nice, but usability is more than themable checkboxes and xmms skins: it's a consistent metaphor across the entire UI, which imo is against what X stands for: why force anything? But until a certain level of rigidity is enforced, it won't ever be set for Aunt Millie.
Perfect Example of Terrible Interface Design (Score:2)
( | ) or ( ' )
you've seen the Worst Interface In The History Of The World. This is some geek's idea of what the interface for a Power button should look like. You might have also seen its precursor, the power switch marked with 1 for on and 0 for off (which is binary, for those of you who don't know). The ( | ) is meant to be a 1 and a 0 combined. This is the perfect example of why geeks cannot be trusted with interface design. (Ask yourself if any manager or marketroid would come up with using binary on the front of a home device. I think not.) While you and I might recognize that in binary 1 is on and 0 is off, who the hell else would know that? Why did someone think that any user would know what that meant? All we can hope for is that they eventually figure out that the little ( | ) turns the computer on and off. If all we're shooting for is that the user eventually figure out the meaning of the button exclusive of the symbol, why not just make all buttons random-sized white squares? Eventually they'll figure it out. This is my specific example of why programmers/geeks should not be allowed to design interfaces, because we come up with ideas like the ( | ) button.
This is a small (and some have argued dumb) example, but in my opinion, if we can't even come up with a good interface for something as fundamental as the power button, how are normal users supposed to figure out what we're doing with our other interfaces?
For once... (Score:2)
Until the user's perspective is an integral part of the Open Source development process, those Open Source products that rely on end-user interfaces (beyond the command line, that is) will continue to offer substandard interfaces on top of excellent engine code.
This problem can be fixed by merely paying attention to what ordinary users want and/or need.
Of course one person's flaw is another person's feature, but that's okay too. There will continue to be command line interfaces for those of us who want them, but an easy to use "training wheels" open source operating system with lots of GUI bells and whistles would certainly help to spread the gospel to those whose life doesn't revolve around computers, but who still use them from time to time.
Re:Rules for writing BS about OS (Score:2)
Hold on here. Developers are people too. They have computers at home too. They sometimes even have machines that they will refer to as 'desktops'. Many of these very developers need the kind of tools they design (with poor idiot compliancy -er, user friendliness) because they operate better that way. For some, that's why they started using Linux. I'm one of them.
I will agree that most people are stupid when it comes to interfaces, and therefore selling them a CLI based program just won't work. But I think that might just be a good thing, a very good thing. I don't know, and this is entirely idle speculation, but it might be the glue of opensource. Think about it: if you write code, and all of the users of said code are competent, then there's a greater chance that they will contribute to your project than if (say) only 5% of them were competent.
I know for a fact that many opensource projects behave this way. Take Cistron Radius: when someone posts to the mailing list something they should know or have found out in due course, Alan Dekok is the first not to answer their question, but to tell them where they should have already checked. In many cases it appears that it would be more efficient to simply answer the question instead, but it seems he's trying to enforce competence; sometimes I'm so impressed that it brings a tear to my eye.
It's the same for bug reports: every time someone gives too little information, the developers complain profusely. In an idiot-compliant scheme, you can't expect that complaining to be anything other than futile. But again, I've seen it work. This is because most people involved in opensource projects are either competent or willing to become so. Even the 'end-users'.
So much for by-hackers-for-hackers.
---
It's not smut it's data
Can't somebody do better? (Score:4)
I think there is a point here, but ... (Score:2)
One thing that made me bristle while reading this was the idea that a graphical user interface should be part of the program. I love all my command line linux tools, because they are small, powerful, and configurable. I can put everything i need to rescue a system on one floppy. I can script stuff. The main thing that makes Linux worth it for me is the tools, and the fact that the GUI is _ENTIRELY_ optional. (this is starting to change... Ever tried installing a distro from CD _without_ the X libraries? <DOH>)
I do think that providing support for desktop users who don't want to know what's under the hood may be a good idea in terms of a quick way to increase the user-base, but I grew up when everybody had to be at least comfortable with a command line (my first computer was an Apple II clone (Franklin Ace 1000)), and most were programmer-users. I think that one thing that is missing is an intermediate step for new users to head towards programmer-users without having to be dropped right into a confusing environment of uncommented C code. I think that it might be more useful to make a nice integrated BASIC (<shudder>) interpreter that is ultra-user friendly, to get the new users that have the drive to become programmer-users but need some sort of a stepping-stone to become comfortable with the system. That's what got me (and a whole generation of new computer users) into the whole thing. Having a starting place that is easily self-taught, soon you'll want to do more, and then you'll look for a more powerful language, find C, and if people did a little more commenting, there would be plenty of example code for new programmers to learn from.
To sum up, I think that for short term concerns, it may make sense to lure in desktop users by pampering them, but it would be nice not to damage any of the flexibility in the process. However, I think that for long term growth, what we need to do is create a less intimidating environment for more programmer-users to learn and be nurtured in, because they will ultimately be the main source of growth, regardless of corporate support, etc...
Re:Many geeks could make great gui's (Score:2)
Consistency, not Quality (Score:2)
All of the people who really know computers extremely well, like many of the readers of slashdot probably would, but the readers of slashdot are but a minority in the group of all computer users.
More important than quality is consistency. My father, a man who has had a home computer since 1987, hates the way each time he buys a new computer, he has to learn a new system, first DOS, then Win3.1, and now Win98. And he uses the computer more than the "average" user. What he, and I think most people want is a system that is consistent. Sure, upgrade the system, make it more capable, but keep the user interface consistent with something people know, so that they can focus on actually creating a product with the computer, rather than learning how to use it.
Ed - the pinnacle of UI design achievement (Score:2)
What's that? You've never heard of it? Don't have access to a Unix box you say?
Fear not! Just compile this source code and you too can experience the joys of ed:
/* ed.c - the editor with the best UI ever! */
#include <stdio.h>

int main(void)
{
    char devnull[80];
    /* Read and discard every command, answering only "?" -- just like ed.
       fgets (rather than the unsafe gets) also lets us stop cleanly at EOF. */
    while (fgets(devnull, sizeof devnull, stdin) != NULL)
    {
        printf("?\n");
    }
    return 0;
}
--GnrcMan--
Re:Two words for you (Score:2)
GTK+ apps also tend to use Ctrl-{X,C,V}, bless their soul (GTK+ itself uses them in its widgets).
The rationale for Netscape's choice may have been some noise about supporting Emacs keybindings, but it drives me nuts; I'll have to see if I can bludgeon it into going with ^X/^C/^V by tweaking my .Xdefaults file.
Then again, it also irritates me that Quicken 2000 appears not to use ^X/^C/^V either - and that's not an X app, it's a Windows app. At least the other Windows apps that I've used are better behaved than that.
...which isn't helped by Qt's apparent insistence on using the X primary selection, rather than the X clipboard selection, as its clipboard.
At least middle-mouse-button paste-current-selection tends to work most places, at least if you aren't trying to replace a selected chunk of text with another chunk of text (as doing the latter means you have to select something other than that which you're trying to copy...).
The author's fundamental mistake (Score:4)
I think this is the author's fundamental mistake. The developers of successful Open Source projects are its users. The user has a software itch that must be scratched. No one else is going to do it. Most Open Source developers don't get paid to write their software. They code for personal enjoyment. Can someone think of an example Open Source project where the developers are not users?
Re:Shortcomings of the new Open Source UIs (Score:2)
Let's take their most visible product, the imac. Now I haven't actually used an imac for an extended time so feel free to laugh off my criticisms.
HOWEVER, there are several ui issues that seem obvious without even turning on the machine.
A round, one-button hockey puck. Come on! People have five fingers, at least use two or three of them. Also, Apple seems to have forgotten Fitts' Law. As in, "does the mouse fitt in my hand?"
A small screen. It looks like 15 inches. Isn't the norm 17 inches lately? I guess I can forgive them, they were trying to make the imac cheap.
A small cramped keyboard. 'Course windows machines seem to come with these too. Personally, I can't stand standard layout keyboards. The best keyboard I've ever used is the original microsoft natural; the one with the inverse T arrow pad. The current version sucks. Leave it to MS to upgrade the only product that was perfect at version 1.0
No expandability. Not a ui issue (or is it?) but it still pisses me off.
No floppy. C'mon, guys, cheap removable storage is useful! And don't tell me that the built in ethernet makes up for it. You can chain together as many imacs as you want and you still can't take your documents to work with you.
I guess those are the biggies that are very noticeable WITHOUT TURNING IT ON.
I know you can buy external usb mice, keyboards, floppies, zips, etc. But do you think this strategy improves the user experience?
Seriously, if I wanted a small screen, cramped keyboard, hostile pointing device, and no expandability I'd buy a laptop. And even that comes with a floppy drive.
Ryan Salsbury -- Apple sucks just as hard as MS.
What is easy for you and me... (Score:5)
Someone didn't know what double-click meant, thought it meant clicking with BOTH mouse buttons. (Makes as much or more sense than clicking with one button twice.)
Someone didn't know that Windows 95 wouldn't run very well on his 386DX system. (how should he know?)
Someone didn't know that return and enter mean the same thing.
Someone didn't know what I meant when I said "monitor."
Someone didn't know what I meant by "Icon"
Someone didn't know how to access the file menu.
Someone didn't know that he had to turn on his external modem separately from his computer system.
Someone didn't know what a link was
Someone didn't know how to turn on his monitor. (I had a bit of trouble finding the hidden switch.)
Someone didn't know the difference between memory and disk space (many don't, actually)
These are but a few examples of things that are EXTREMELY basic to us. However, few of them are intuitive in actuality. Most "geeks" I've talked to don't understand the mindset of the non-computer-literate user. They could write a user-friendly program, if only they knew what the user might need.
Oh, and by the way, most extremely new users I worked with preferred keystrokes to mouse strokes, so why do all the manufacturers rush to put GUIs on all desktops? A simple arrow-key-oriented shell (in the MS-DOS shell sense of the word) would be better for many of them, and applications with a printed list of ctrl commands may well be more useful than ones with confusing pictures.
Re:It's the Interaction Design, Stupid (Score:2)
There seems to be this perception among people here that good UI design is about just adding more eye candy. The Gnome developer who suggested that he could easily add transparency to menus completely missed the point.
First of all, I'm not going to pretend I know a lot about UI design, but it's a field that interests me and I read about it on mailing lists and the like.
In theory, UI design is about making things simple, logical and aesthetically pleasing. The reality is that since most people are used to Windows now, a good UI today will find a way to balance good design decisions with the way that Microsoft does everything.
Let's start with an experiment. Bring your mouse pointer into the center of your screen. Now, move the pointer into one corner of the screen. Notice how awkward the movement in your wrist is. Move the mouse back into the center and move it to another corner. Notice how awkward the movement is. Repeat for the other two corners. Now list the corners from least awkward to most awkward. If you're right handed, your list should look mostly like this:
Bottom Right
Top Left
Top Right
Bottom Left
Now, where is the start button located?
According to Tog (yes, I'm a mac head, but I like all OSes), default buttons in dialog boxes should be on the right side of the box. Where does windows put the default button? You guessed it, on the left. I think the reason the right is the correct side (for a righty anyway) is that the eye tends to focus on the right side of the screen. This is also probably the reason that the default positions for mac icons are on the right.
As for eye candy, it actually does improve the user experience. Think about it this way, who would you rather see naked, Natalie Portman (if she was 18) or Rosanne Barr? I rest my case. This is also the reason that wallpaper and custom mouse pointers and the like are so popular.
Another misconception here about UI design is that a savings of a 1/2 second is insignificant. Believe it or not, a 1/2 second is actually a very significant savings, and they do add up. I'm not going to take a side on the one vs. two button mouse (I happen to like a multibutton mouse myself, but advocate a one button mouse for newbies) but until users have enough experience with it, there is a measurable delay as they try to click the right button. For popup windows, I prefer the timed mechanism that netscape on the mac uses.
As another example, consider the mac menu bar. It has its disadvantages, but for Fitts' Law compliance, it can't be beat. Pop quiz: name the 5 easiest pixels on the screen to hit with the mouse. Anyway, the mac menu bar is effectively infinitely high, making it a very easy target to hit. Windows (and everything else) has a menu bar that's a fixed height and width. When I move my mouse to the mac menu bar, I can over-shoot it as much as I want and still be right on top of it. Contrast this with the other way, where you have to be aware of your acceleration. The downside of the mac menu bar is that new users sometimes think that they're in a different program than the one they're actually in. They also don't realize that there's a difference between closing all the windows and closing the application. This is the reason that mac os 8.x and up can display the application name next to the icon in the menu bar, though I'm not sure it helped much.
It's important to keep in mind that there's no perfect interface for all users. I've taught a number of people to use computers, and what I tell them is: I'm going to show you how I like to do things, but if you find a way that you like better, then use that instead. This could potentially be a great opportunity for OSS on the GUI, as every hacker has their own way of doing things. On the other hand, most users will just use what's given to them and make the best of it, because they're too afraid to try something else; they think that they'll screw up the system or it will be too hard. You can thank Microsoft for that mentality too.
To the developers of Gnome and KDE and all the others, I wish you well, but if you want to get Linux on the desktop, (and maybe you don't, in which case, disregard this post) then it's important to understand what makes something powerful, consistent and easy to use. There's no reason that software has to choose between being powerful and being easy to use. It can and should be both.
GUI != ease of use (Score:2)
One example is simply finding the file one is looking for. Using grep, find, xargs, cut, sort and their ilk one can find any file he wants. Not so with the fancy pants 'find' utility in Visual C++. So the advantage of having the find facility a keystroke away and integrated into the IDE is quickly lost.
Another example is Star Trek (hey, Lederman made Star Trek analogies, why can't I?). They use displays as feedback mechanisms, but for interacting with the computer they talk. Why? Well, two reasons: GUIs are really bad TV, and also because it is much more intuitive to describe your problem in natural language and let the computer do the rest.
So, as other posters have pointed out GUIs are gaining ground, but let's not forget the power of text to represent the world. The sooner I can talk to my computer the better!
BTW, does anyone know a good speech to text tool that I can use from a command line? I'm using Festival to go the other direction and need something to complete the loop.
He's absolutely right. (Score:2)
But before I get to that, lemme try to explain why this is such a big problem, because many of y'all don't seem to get it, judging from some of the posts so far.
But, you might say, we don't want all those Winblows lusers! We don't care how *smart* you have to be to use Linux, cause we only want us l33t genii (hey! We're even l33t l4at3n speakers!) to be able to use it! Because we all know that, once you learn how to use it (and it doesn't even take me that long, cause I'm so smart), OSS is more flexible, powerful, and faster to use than some "end-user tested" crap.
And 10 years ago, you'd have been right. The problem here is that the above argument is no longer true, and most Linux users/coders don't even know it. Now there's no argument that OSS for Unix/Unix-alikes has had some of the best text-based UI's around. (Or, in the case of xemacs, I suppose we should say "primarily keyboard-based" rather than "text-based".) Sure, most of them are absolute hell to learn (the poster above who suggested that you could adequately teach a newbie to use vi by sitting them down in front of it and pressing 'i' notwithstanding), but, once you get used to them, you realize how incredibly intelligently they were designed. Things like vi/vim, emacs, and the various CLI shells; you'd need a dedicated teacher, a book, or a hell of a lot of patience with man pages to figure any of them out, but once you do you find that they're extraordinarily quick to use, ridiculously full featured, and amazingly robust. As my Harley Hahn Unix book says ad nauseum, "hard to learn but easy to use." And if you're any sort of geek--someone who's going to spend most of your time on a computer, such that the steep learning curve isn't too relevant--then it's not such a bad design philosophy.
Thing is, most of us have realized that a GUI has the possibility to make anyone more productive, even Tom Christiansen's proverbial "vi wizard" [slashdot.org]. However, the people making GUI tools/wm's/environments for Linux (note: not that I'm one of them; not that I'm a good enough programmer to contribute a single line; and really truly not to take anything away from their impressive achievements) seem to have figured that we could use a dose of the old paradigms, a huge helping of superficial Windows UI plagiarism, and some skinability (neato!) and have a kickass UI.
Not even close.
Again, Gnome and KDE are incredible achievements for what they are, and are constantly getting better. But. As they stand now, their UI is clearly substandard. KDE is like Win9x, but flakier (from a UI perspective, not a stability one, of course), with less consistent (though more extensive) preferences panels, inconsistent app UI's, much less polish, and an awful excuse for perhaps the most important functionality of a GUI--being able to seamlessly share data between different apps. Gnome has the benefit of not being such a slavish copy of Windows, but is otherwise even worse in all of the above categories.
And what do most Linux users do about it? a) Complain about how GUI's are for wimps anyways, or b) Stick on some badass skins and note how, since E has much more functionality than any Windows wm, it is invariably a better UI.
Meanwhile, what do MS and Apple do about it? They spend millions of dollars a year hiring UI experts and, more importantly, empirically testing thousands of potential interfaces on end-users to find which are better.
Notice I didn't say "easier to learn". I said "better". As has been noted elsewhere in this thread, the Windows UI paradigm isn't any more intuitive from first principles than the KDE paradigm (duh; they're nearly identical). But that's not the point. What good UI testing is all about isn't how easy it is for someone who's never seen a computer before to use, but rather, assuming the user already has adequate familiarity with the paradigm, a) how flexible, robust, and powerful is it; b) how intuitive is it to do an action which is part of the paradigm but which the user has never actually done before; and c) how fast, easy, and nonobtrusive is it to do the sorts of tasks that the user does again and again.
By all these criteria, something like the Unix CLI passes with flying colors. And, by all these criteria, Gnome and KDE, and the programs that run on them, in their current states are quite a bit behind Windows, which in turn lags behind MacOS. (Note: I've never been a Mac user, and have always generally disliked them for several reasons: bad under-the-hood technology, horrible overpricing, lack of good, fast software, one damn button, deceitful marketing. However, I'm just beginning to come around to how well designed their UI is (compared to the alternatives), and damn if OS X doesn't look incredible. But I digress.)
But, you all say, that's what's so great about Open Source! If there's anything wrong with Open Source Software, then someone will fix it! The problem is, very few people notice that anything's wrong. That is, you don't notice how unproductive the way you're doing things is until you see a better implementation. And even then you probably won't notice--it might take someone with a stopwatch showing you how much faster you work the new way than the old way. A quick story to perhaps illustrate what I mean: couple days ago I was procrastinating writing a huge (and overdue) paper, by reading that analysis of Aqua [asktog.com] by Tog, the guy who essentially led the Mac UI development. And, since I was trying awful hard to procrastinate (and because once you start reading him, Tog's pretty interesting), I decided to click the link to this article on Fitts' Law [quailwood.com], where I read Tog's advice to Word for Windows users: switch to full screen mode to get the Fitts' Law advantages of infinite depth behind your menus, and switch to large icons to speed up finding the right one.
Well, never one to miss a chance to not write my paper, I did some informal tests. And, even though the ideas had never occurred to me (not because I didn't know full screen mode and large icons existed, but just because since it *looks* so much more professional in maximized window mode with my big toolbars full of small icons (which, as I run 1280x1074, really are small), it had to be more productive that way), I realized pretty quickly that they actually improved my productivity. Or, would have, except that in full screen mode the menus, while at the top of the screen and thus with infinite depth, don't show up until you mouse over them (Tog doesn't seem to mind this sort of thing; I find it annoying as hell. But in any case, it's worth pointing out that this is proof Tog isn't designing for ignorant first-time users--because you certainly can't expect any first-time users to know about items which are hidden until you mouse over them--but rather for ease of use for people who know what they're doing); and since MS decided *not* to include higher res bitmaps for all the icons in large icon mode, they looked too damn ugly for me to keep them. But it is true that I could find the right one a lot quicker; and that I'd never realized how my tiny icons were slowing me down until I tested it.
Ok, I'm hideously rambling, so lemme try to sum up. Text-based Unix interfaces were everything that today's Linux GUI's aren't: consistent, robust, quick (from a UI standpoint, not a technological one), and intuitive for those who already knew the paradigm. A simple example is pipes--a simple concept which gives the CLI its amazing power and flexibility--and their GUI analogues, object models like COM, CORBA, etc. designed to allow intuitive sharing of information between apps--which, to put it mildly, work much better on mainstream OS's than on Gnome and KDE. A more interesting problem (because we at least agree on the fact that our object models need improvement, and I have no doubt that they'll catch up pretty soon) is inconsistent and just-plain-badly-designed interfaces--a failure to take advantage of principles--like Fitts' Law--that the other guys have learned from psychological research and intensive user-testing. Perhaps the most difficult problem is the lack of consistency across apps.
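The pipe idea mentioned above — small filters composed into a bigger tool — is easy to sketch in any language. Here's a rough Python analogue using generators; the filter names are borrowed from their Unix counterparts purely for illustration:

```python
def cat(lines):
    # pass lines through unchanged, like cat(1)
    yield from lines

def grep(pattern, lines):
    # keep only lines containing the pattern, like grep -F
    return (l for l in lines if pattern in l)

def sort_lines(lines):
    # sort the whole stream, like sort(1)
    return iter(sorted(lines))

def uniq(lines):
    # drop consecutive duplicates, like uniq(1)
    prev = object()
    for l in lines:
        if l != prev:
            yield l
        prev = l

data = ["beta", "alpha", "beta", "gamma", "alpha"]
# Roughly: cat data | grep a | sort | uniq
result = list(uniq(sort_lines(grep("a", cat(data)))))
print(result)  # ['alpha', 'beta', 'gamma']
```

Each filter only knows about a stream of lines in and a stream of lines out, which is exactly the contract that makes shell pipes composable — and exactly what the GUI object models were trying (and, per the parent, failing) to replicate for richer data.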
The question is, how do we fix this? As regards our lack of UI research, I have a good deal of hope. After all, Red Hat has the money to hire some serious UI people and psychologists and testers and whatnot to get Gnome on equal footing with Windows and Macs; Corel or others could do the same for KDE. Hell, they might even come up with some *new* GUI paradigms, instead of just copying the two rather flawed ones out there, often badly.
The problem is that the strength of OSS lies not in the high-name projects which can now afford serious funding, but rather in all the little ones that provide all the little functionalities we know and love. How to get all of those projects to understand, and furthermore, abide by complex UI standards--when at the moment they can't even agree on standard menu shortcut keys--is a huge problem. Furthermore, there's the fact that a different choice of widget toolkits inevitably imposes a different UI paradigm. Finally, we have the fact that Linux geeks rightly love customizing our systems to the fullest extent; as Apple has clearly realized, with their apparent decision not to allow OS X any skins other than Aqua, customization is the enemy of consistency.
Hopefully this absurdity of a long post has convinced y'all that these UI issues are important, because they really affect how productive any user is--but elite users *especially*. I do think that OSS can come up with a better response to the UI issue than poorly understood copies of the existing GUIs. However, I'm not sure exactly how, and our work so far in this area has not been encouraging...
Re:BS... (Score:2)
Even under Linux at home, I use Wine with my Forte Agent to cruise the newsgroups. The Linux application community is getting there, but still has a ways to go.
Re:Bingo! (Score:2)
But please note, I didn't say everyone has to be an expert at everything. People having an attitude of "don't want to learn" (particularly, expressed arrogantly) - or of "I don't need to know now make it all simple" are still morons, though. And actually that'd apply to the visitor to the doctor, not to the doc him/herself...
Bingo! (Score:2)
Sure, the Gnomes and KDE's of the world put a prettier face on some of it, but most programs that have a thought-out UI in the Open Source world are just retreads of existing non-free programs' interfaces. So the GIMP is an example of a nice interface? It's pretty much a Photoshop clone that has some differences, but there are more similarities than not. Skins and chrome are cool, and a nice way for power users to spice up their user experience, but if we want to see World Domination anytime soon, we need to understand the needs of the ordinary user. They don't need or want a cool skin - they need a straightforward interface that works the way they need it to and that they can use out of the box.
Saying "once they learn how to use bash properly" doesn't cut it - If an average non-power user has to get that far they'll give up. Period. They don't want to learn, nor should they have to. For Linux to succeed as a desktop OS, it needs to be possible to perform all the necessary user tasks without ever requiring a command line or editing a configuration file.
- -Josh Turiel
Case in Point (Score:2)
This is exactly the attitude that the author was pointing out as the reason why the acceptance of Open Source software on the user's desktop is still quite a ways off.
The point is: end users want innovative, flashy looks, and Good Design(tm). They freeze up like a deer in the headlights when they see a $ or a #. Geeks are happy with command prompts and tend to assume everyone else is too, or they assume implementing skins will solve everyone's problem. Skins are no substitute for good UI design. Thanks for illustrating the author's point.
Things the UI makes me use commercial software for (Score:2)
1. Text Editing
This is obviously a touchy issue for many. For me, they need a good GUI. On Linux, I use gEdit, which is buggy and feature-poor, but OK. When I want to get real work done, though, I use OpenStep's Edit.app. And yes, I know how to use vi, emacs, joe, and even ed when I must.
2. CD Burning
To burn an audio CD on my Linux system, I spent 2 hours(!) reading manuals -- for xcdroast, cdrdao, etc. Then I went to burn, and still screwed it up. I rebooted in Windows, used the free utility that came with my CD-RW, and was ready to burn in 5 minutes. Success.
3. Copying and Pasting
Anytime I will have to do a lot of copying and pasting between apps, I switch to an OS where the keys for doing so are always the same. And where there's a real clipboard.
4. Matlab
Octave successfully duplicates the command line interface, but (surprise!) has nothing like a more convenient notebook interface.
5. Paint programs
Sure, the GIMP is all right. But (unless I'm missing some motherlode of plugins) it is feature-poor, and the interface for some tools is nonstandard.
Is there ANY free software that makes you want to switch to it, just for the UI?
- Brian
Your fundamental mistake (Score:5)
Re:Flawed premise (Score:2)
"Easy to use" is not the same as "good". It certainly does *not* follow that one implies the other for all cases. After all, if I went for the easy option all the time, I'd be running MacOS X or '95 on a notebook in BED.
Approach it wondering what it is; if you don't like it, so be it. Don't approach it with an agenda that "it's not easy, I don't understand therefore it's ITS fault not mine". This kind of thing gets right up my snout.
I knew there was a reason I persist in using Debian - I can cope with the breaks of living at the cutting edge of 'unstable' all the time, for the benefits I get. "If you don't get the benefits, or you don't want to put in the effort, go away and don't moan."
Re:BS... (Score:2)
FreeBSD
The patch to let FreeBSD recognize my Ethernet card
The patch to enable DHCP so the box can talk to my router
And then see if I can get X up.
This is freaking light-years beyond what my mom can do. I wouldn't do it if I didn't have a business need to have a Unix system at my house. The effort/reward ratio from the user's perspective sucks, frankly, and I think the article is spot-on, point for point. Major improvements in usability and interface need to happen before any Unix can even begin to think of breaking out of the server ghetto.
gomi
Rules for writing BS about OS (Score:2)
1) Don't make any tangible suggestions. I don't see a single tangible suggestion in this thing. What good does it do to say Open Source user interfaces suck, if you cannot say "Doing X in program Y is too complicated for the average user, it should be done like A and B." And don't forget to just say, "Fix it."
2) Don't do research. Open source is for geeks, that's all you need to know. Everyone knows what a geek is, so draw upon that.
Ignore any email addresses, news groups, mail lists, web sites, icq numbers, etc in the documentation of a program. There is no way to get in contact with the software authors, just give up right now. It is a closed society. If you are not a geek and willing to watch X-Files all night long, you will have no impact on anything. They are all sitting in their darkened basements admiring Natalie Portman while writing these programs.
3) Honor Microsoft, they are the only ones who can do anything right.
What do they do right? Who knows, but it is correct, and open source programs will never be able to touch them. Why don't these programmers just go to work for Microsoft? Then we'll have usable programs with every feature in the world, but actually work, philosophies be damned.
4) Users are the be-all when it comes to designing programs.
Any feature not included is a snub to users everywhere. After all, what other reason do users upgrade to the latest Windows/Office/etc program than the myriad of features listed on the boxes that they will never use. Eat up more and more hard drive space, but include them, all of them, and more, there isn't enough in that program. What you ask needs adding? I don't know, but add it, and don't stop there. Add something else too! Dammit I want a program that's usable, can't you get that through your head?!
5) Why is this grass in my yard still green?
There is no flexibility in that, and I have written paper upon paper imploring God/Mother Nature/whoever to change it. If grass is to be accepted by the vast majority of users, it must be willing to bend a little.
Well now (Score:2)
It doesn't make sense to say that open source software is inherently hard to use. None of the graphical programs that came with my Mandrake 6.1 are hard to use even by Windows standards.
Are textmode tools hard to use? Maybe if you've never used them before. They're certainly not the paragon of interface design according to Tognazzini, but for many applications they're the only thing that makes sense. It may in fact be no harder to learn than Windows if you have never used a computer before; you don't approach the system with any preconceived notions about how the interface works.
Is vim hard to use? Maybe if you're used to Notepad or MS Word. It's all relative.
I was a bit frustrated with Linux at first because I had been a Windows user for a long time and had not used DOS for many moons. Administering your computer can be a little tough when all you have is a prompt and you have no idea where Linux puts everything. After you readjust, it all makes sense.
In fact, the only major real hurdle with open source software is having to compile it. Much software is available only in source format. I have had any number of compiles fail on me, and it's too much to ask of a typical user to have him poke around in the code or go out and grab some missing dependencies, even though I personally am capable of trying to fix the problem.
There should be some sort of automated package tool that provides both the performance benefits of compiling and the automation of a binary install. Rather than distributing a binary package, distribute a package containing a source tree and instructions for the package tool that allow it to automatically compile and install the program, without having a bunch of extra source files lying around if you don't want them.
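A toy sketch of what such a tool's core loop might look like, in Python: a recipe drives the fetch/configure/build steps. Everything here (the recipe fields, the URL, the command sequence) is invented for illustration, and the commands are only assembled into a plan, not actually run:

```python
# Hypothetical source-package recipe; all field names are made up.
recipe = {
    "name": "hello",
    "version": "2.10",
    "source": "https://example.org/hello-2.10.tar.gz",
    "configure_flags": ["--prefix=/usr/local"],
}

def build_plan(recipe):
    """Return the shell commands the tool would run, in order."""
    tarball = recipe["source"].rsplit("/", 1)[-1]
    srcdir = f'{recipe["name"]}-{recipe["version"]}'
    return [
        ["wget", recipe["source"]],
        ["tar", "xzf", tarball],
        ["sh", "-c",
         f"cd {srcdir} && ./configure " + " ".join(recipe["configure_flags"])],
        ["make", "-C", srcdir],
        ["make", "-C", srcdir, "install"],
    ]

for cmd in build_plan(recipe):
    print(" ".join(cmd))
```

The user-facing half of the idea is that this plan runs unattended from one command, so the experience matches a binary install while the result is a locally compiled build.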
JD
Re:It will happen eventually (Score:2)
For example, you could run programs like this using a strictly audio setup, or fold application menus into a dynamic system menu scheme (useful for cell phones, for example).
The point is you take the construction of the UI layout out of the hands of the developer and put them into the hands of a) the system and/or b) true UI designers.
I think we're aiming for the same thing, but I think that your idea of a "cross-platform UI" is one that looks exactly the same on every platform (something like Java). My idea of a cross-platform UI is one that's abstracted a lot further and can be rendered in a fashion that suits the goals of the operating environment.
Re:Completely Disagree (Score:2)
I think Open Source does a fine job with straight programming without elaborate user interfaces. For it to enter the popular realm, it needs to have user interface work done WITH it, not IN it. I think Microsoft realizes this and spends a lot of time with the user interface as a priority because they know it's important... and we get pissed at them just because the programming is crappy at times. (Part of the problem with bloatware is that fewer bugs cause more damage, so we see a bluescreen when Paintbrush has some awkward error... but we see the error, not every little step along the way that MS did perfect). I think MS has the right approach in terms of UI/GUI, and maybe a lesser quality but acceptable approach to the programming, and if you want to beat MS, then you can't take the right programming approach and the crummy UI approach (treat it as programming, not separate interface design). Do you think that the same guy who wrote the programming for an ATM machine was the same guy who put it together by hand? No. If that was the case, ATMs would suck.
wrong, it's actually economics... (Score:2)
There's nothing about the vaunted Windows or the Mac end-user UIs that can't run on Unix and those UIs will show up there eventually when the open unixes complete their takeover of programmer mindshare. They're not there yet because
Whether they will show up in an open sourced form or not depends on the economics of those markets, whether they
I hope they are open source and I hope there's no monopoly, but open source is the "programmers UI" and GUIs are the end-user's UIs and they are on orthogonal coordinates.
--------------
So, all that said, as two asides:
Re:The author's fundamental mistake (Score:3)
Check out the truckload of arguments dumped here as to why we really don't need GUIs, and then go ask that dumb secretary sitting next to you at the office what she'd think about a PC that gives her "# >" when booted. Yeah, maybe Gnome and KDE are listening to what their users' complaints are.. but the majority of the geeks definitely isn't. I think that's what he was saying.
Re:A list (Score:2)
But still, the OSS community has failed when it comes to GUIs. Because there is no single rule of thumb for how to use X programs, except copy/paste (which works fairly well for text, but while it is possible to, no one has implemented it for pictures and other data). It is totally impossible to use X without a mouse - there is no way, that works in all programs, to switch focus from one button to another, or press a button, with the keyboard, like it is in Windows. But GTK and QT are on their way, they only have to team up together to form a uniform user environment built up from heterogeneous programming tools.
--The knowledge that you are an idiot, is what distinguishes you from one.
Re:Rules for writing BS about OS (Score:2)
I believe it's exactly the sarcasm about users that the author was trying to illustrate. The traditional view of developers is that users suck, and rightly so. Users should simply get it, or they're too stupid to use the program anyway.
Now I think that's a pretty healthy attitude for developers to have, which is why there should also be specialized UI/useability analysts/designers involved whenever you create a system or a tool for a widely distributed (skill, experience) user-base, rather than depending on the developers creating the underlying functionality.
In order to compete on the desktop - which seems to be very important to the Linux community - there has to be a completely new focus on useability and UI. That's the point of the article.
Re:keep it simple, stupid (Score:3)
Yes, efficiency in a UI is good. Flexibility, consistency, clarity and information density are also good. There are lots of qualities that make a particular UI implementation useful. The trick is to include as many of those as possible into the product.
The interface for Cisco routers, as anyone who's ever seen it can tell you, is quite simple. Efficient. Able to be used via a low-bandwidth text interface, which is perfect. But is it easy to use? Not for most people. Microsoft Bob, at the other end of the spectrum, was incredibly inefficient, but useable by most people above age 5.
I think the main point is not whether XF86 or a particular WM has a good interface, since those programs are mainly MAKING the interface for the rest of the system. Does Sendmail have a good interface? Does vi? Only when considered separately and by somebody who is very familiar with them.
For the VAST majority of OSS projects there are no common use models. That makes learning the software more difficult. On the MacOS, most software conforms to several systemwide standards. CMD-Q is quit, CMD-O is open, etc. Windows is almost as good. These types of things make a system AS A WHOLE easier to use. That is the challenge for OSS project leaders - find a way to make their learning curve work for them.
Re:Bingo! (Score:2)
This much is true, and not a problem. Just don't expect those of us who know our shells from our eggwhisks to pander to the sheer pathetic attitude of "don't want to learn".
> nor should they have to. For Linux to succeed as a desktop OS,
Disagree from here on, though. People *should* both have to, and want to, learn how to use something for what it's worth in the first place, otherwise get lost by all means. Open-Source hasn't got where it is today by getting a committee-load of morons together saying "we can't help, won't help, make it pretty pictures for us".
I suggest you also have a strange idea of linux either "succeeding" or (in general) "winning". It doesn't win by having more (l)users; it wins by being *better* than the alternatives (not "the competition"), and having a user-base giving out a *quality* signal, not a quantity one.
How can "power and flexibility of linux be the fatal flaw"? They're right up amongst its major strengths, apart from stability and all that.
Gnome UI, mailing lists and feedback (Score:2)
One of Mike's key theories seems to be that there is no sense of feedback between the end-user and the coder in Open Source development. I would refute this fairly strongly - often the people I know using Open Source tools have fairly widespread experience of other User Interfaces, ranging often both across multiple platforms and going way, way back to before the days when the Graphical User Interface first raised its head above the primordial digital soup. Now this experience does not make any of these people a UI expert, nor does it necessarily mean that the programs they write have well designed User Interfaces. It does however give us the possibility of recognising good UI design when we get to experience it, and also the possibility of influencing the design of the User Interface in later releases.
In these days of expanding user-base for Linux, and the push to provide a more newbie-friendly environment to work in (which, by the way, I totally support), good User Interface design is getting to be much more important. There are various resources appearing, from the Gnome UI Improvement project [gnome.org] and its mailing list, along with the work that the KDE people are putting together with KDE 2.0, which are testament to the need to try and learn from the many graphical interfaces out there and to innovate as well. Having a well designed UI need not reduce the speed at which the experienced user uses their machine, while allowing the novice some hope of making progress.
Innovation is often overlooked in designing a new UI. As soon as you stray from, say, the way MS Windows does something, people jump up and down worrying that new users will be confused by a different method. I'd disagree - just because it has been done that way before is not, in itself, reason to continue doing it. A good UI *must* be intuitive and logical at some level - simply copying the existing behaviour of other window managers will not end up with a coherent project. At the moment, the graphical user interface is a mess of conflicting ideologies. We all have experienced the frustration of 'Drag-and-drop' when it isn't a universal quality - for example in Windows, you can (sometimes...!) drag a file into a program to load it, but you can't drag that file out to save it or pass it to another application to work on it in a different way. And I don't mean using the clip board either, although that may be the route that would be used to effect such a transfer, it shouldn't be obvious to the user that that is how it happened.
Taking the best paradigms for working with a graphical user interface and making it all stick together in a cohesive fashion is a task of iteration, experience and reiteration between the end-user and the coder. Since in the Open Source world the user may also wear the coder's hat, this should be the ideal environment in which to create and refine the most useable graphical interface on any platform, as long as we keep our sights on some central game plan of Useability and not merely on creating a feature-rich tick list of things our programs can do.
Cheers,
Toby Haynes

Need to talk to users? Ok, sure, voice chat is kinda fun sometimes, anyway. Need to try it this way instead of that bad old way? Ok, that makes sense. Need to read about it? Just give me the URL! Need to give prizes for the best user interface designs? Come on, somebody with more money than geekness please step forward to sponsor the contest. The only hard part of this is recognizing the need. Achilles' heel? Far from it, it's just another hill to climb.
The fact is that it's much harder to make good end-user software than it is to make good infrastructure software.
Re:Shortcomings of the new Open Source UIs (Score:2)
Re:keep it simple, stupid (Score:2)
What will happen is that these so called geeks will (have to) find another outlet for their programming creativity. The geeks and protestors of today will be the establishment of tomorrow. That's how it's always been, and that's how it always will be.
----------------------------------------------
congratulations you proved the articles main point (Score:3)
"It's faster for me"
Haha, you confirm the article in every respect. You rant a bit about speed and efficiency and you completely ignore usability.
Of course it is faster if you know what to type and you happen to be able to type fast. I saw my grandfather use a PC a few weeks back and I assure you, this wouldn't be as fast for him as it is for you.
The article's point was that OSS developers are brilliant programmers but are also completely clueless about providing a usable GUI.
Look at Linux. It took until the late nineties before people were starting to realize that the traditional user interface (X + really crappy window manager) sucked for somebody unwilling to deal with a command line. What did they do? After years of complaining about MS Windows and its user interface, the best they can do is clone it. We have to wait for Steve Jobs to see some actual innovation (about which a certain GNOME hacker manages to say that he can do the same since he can do transparency).
Interesting sidenote: this also applies to Mozilla. If you read the newsgroups, people are complaining about the user interface. A common reply to these complaints (you guessed it!): write a skin for it. Mozilla is brilliant except for its user interface. Little thought has been put into it so far. At best it is Netscape 4 with skins + some features of Internet Explorer. Where's the innovation?
Re:The author's fundamental mistake (Score:2)
But that's the wrong question, isn't it? The important question (for this discussion) is how many users aren't the developers of the software?
OF COURSE the developers should be users of their software! That's obvious. But I keep reading comments from people who seem to think that only developers have a right to suggest features.
This is incredibly short-sighted if Open Source/Free Software is to be the dominant product. For years I've been hearing people say that they just want their computer to "get out of their way" so they can do their job. Now it seems many people here want these people to become programmers instead.
Of course, programming is a wonderful and useful thing to learn, but not everybody wants to do that. Should Open Source/Free Software be limited to just geeks? I didn't think so.
Re:The author's fundamental mistake (Score:4)
I think this was the best point the author made. For an experienced administrator, typing in a quick and dirty text command is the quickest way to do things. For someone who is keeping her life as non-computer oriented as possible, though, text interfaces, key combinations, and scripts/rc files are all hopelessly arcane and arbitrary.
That's why windowing metaphors are so powerful-- you are hijacking training that people already have in the Real World and using it to ease the adjustment. That's why complex procedures are laid out in wizards: to guide people through it without having to remember an algorithm.
If you are trying to let someone do something complex, you want to make them learn as little as possible in order to do it. That doesn't mean they are ignorant, it means that we are imposing on them to spare as little brainspace as possible. A doctor doesn't make us learn medicine-- we shouldn't force him to learn software engineering.
This is all pretty obvious, IMO. But it is easy to forget. As coders, we know way more than the average person about computers. We don't need metaphors-- we can handle the raw abstractions. So in the name of flexibility and consistency, we get straight into what often amounts to minor coding in its own right.
This is compounded by feature creep. People are paying for features they won't use on their Office apps, and then wonder why their desktop is so crowded, and why everything has to be filed away in menus.
Finally, since abstraction is our living, we are using it too much where we shouldn't-- the user's experience. Making something universal, making it systematic, often good things. But sometimes you have to hide the underlying elegance in favor of being usable by people who don't know code.
emacs is not what we want. Enlightenment isn't, either, though it is ridiculously great. Nor is Word, really. MS's interface advantage isn't that they're particularly good, they are just what people are used to now. What we have to start thinking about is what is making Linux suddenly so popular to the Rest of the World: What do users want? Linux addresses security, stability, standardization and upgrade issues that the business world probably assumed were inevitable.
What we have to do is what PARC did years ago. Say "What are these computers for?" "What can they let people do which they couldn't do before?" and "How can computers help people do what they're doing now more efficiently?".
Answer those-- and don't think that companies like Apple aren't asking those questions every day-- and people will wonder why MS and others can't make an operating system which doesn't need to be unravelled by a PhD in CS. Rather than saying that about us, which is what they're doing. In almost every other area, Open Source has proved to be better. I think that, if people really start focusing on it, we can take the lead on it. | http://news.slashdot.org/story/00/01/28/116240/open-sources-achilles-heel?sdsrc=prevbtmprev | CC-MAIN-2014-52 | refinedweb | 20,861 | 70.43 |
Swing
Skill Level: Introductory
Michael Abernethy (mabernet@us.ibm.com), Team Lead, IBM
29 Jun 2005
This hands-on introduction to Swing, the first in a two-part series on Swing programming, walks through the essential components in the Swing library. Java developer and Swing enthusiast Michael Abernethy guides you through the basic building blocks and then assists as you build a basic but functional Swing application. Along the way you'll learn how to use models to ease the process of dealing with the data.
Section 1. Before you start
About this tutorial.
Introduction to Swing © Copyright IBM Corporation 1994, 2008. All rights reserved.
Page 1 of 38
developerWorks®
ibm.com/developerWorks

To run the examples in this tutorial, you will need:
• JDK 5.0.
• An IDE or text editor. I recommend Eclipse (see Resources for more information on Eclipse).
• The swing1.jar for the flight reservation system.
Section 2. Introduction to Swing
Introduction to UIs
Before you start to learn Swing, you must address the true beginner's question:
What is a UI? Well, the beginner's answer is a "user interface." But because this tutorial's goal is to ensure you are no longer a mere beginner, we need a more advanced definition than that.
So, I'll pose the question again: What's a UI? Well, you could define it by saying it's the buttons you press, the address bar you type in, and the windows you open and close, which are all elements of a UI, but there's more to it than just things you see on the screen. The mouse, keyboard, volume of the music, colors on the screen, fonts used, and the position of an object compared to another object are all included in the UI. Basically, any object that plays a role in the interaction between the computer and the user is part of the UI. That seems simple enough, but you'd be surprised how many people and huge corporations have screwed this up over the years. In fact, there are now college majors whose sole coursework is studying this
interaction.
Swing's role
Swing is the Java platform's UI -- it acts as the software to handle all the interaction between a user and the computer. It essentially serves as the middleman between the user and the guts of the computer. How exactly does Swing do this? It provides mechanisms to handle the UI aspects described in the previous panel:
• Keyboard: Swing provides a way to capture user input.
• Colors: Swing provides a way to change the colors you see on the screen.
• The address bar you type into: Swing provides text components that handle all the mundane tasks.
• The volume of the music: Well... Swing's not perfect.
In any case, Swing gives you all the tools you need to create your own UI.
MVC
Swing even goes a step further and puts a common design pattern on top of the basic UI principles. This design pattern is called Model-View-Controller (MVC) and seeks to "separate the roles": it keeps the code responsible for how something looks, the code that handles the data, and the code that reacts to interaction and drives changes all separate from one another.
Confused? It's easier if I give you a non-technical example of this design pattern in the real world. Think about a fashion show. Consider this your UI and pretend that the clothes are the data, the computer information you present to your user. Now, imagine that this fashion show has only one person in it. This person designed the clothes, modified the clothes, and walked them down the runway all at the same time. That doesn't seem like a well-constructed or efficient design.
Now, consider this same fashion show using the MVC design pattern. Instead of one person doing everything, the roles are divided up. The fashion models (not to be confused with the model in the acronym MVC of course) present the clothes. They act as the view. They know the proper way to display the clothes (data), but have no knowledge at all about how to create or design the clothes. On the other hand, the clothing designer works behind the scenes, making changes to the clothes as necessary. The designer acts as the controller. This person has no concept of how to walk a runway but can create and manipulate the clothes. Both the fashion models and the designer work independently with the clothes, and both have an
area of expertise.
That is the concept behind the MVC design pattern: Let each aspect of the UI deal with what it's good at. If you're still confused, the examples in the rest of the tutorial will hopefully alleviate that -- but keep the basic principle in mind as you continue:
Visual components display data, and other classes manipulate it.
JComponent
The basic building block of the entire visual component library of Swing is the JComponent. It's the super class of every component. It's an abstract class, so you can't actually create a JComponent, but it contains literally hundreds of functions every component in Swing can use as a result of the class hierarchy. Obviously, some concepts are more important than others, so for this tutorial, the important things to learn are:
• JComponent is the base class not only for the Swing components but also for custom components as well (more information in the "Intermediate Swing" tutorial).
• It provides the painting infrastructure for all components -- something that comes in handy for custom components (again, there's more info on this subject in "Intermediate Swing").
• It knows how to handle all keyboard presses. Subclasses then only need to listen for specific keys.
• It contains the add() method that lets you add other JComponents. Looking at this another way, you can seemingly add any Swing component to any other Swing component to build nested components (for example, a JPanel containing a JButton, or even weirder combinations such as a JMenu containing a JButton).
Section 3. Simple Swing widgets
JLabel
The most basic component in the Swing library is the JLabel. It does exactly what you'd expect: It sits there and looks pretty and describes other components. The
image below shows the JLabel in action:
The JLabel
Not very exciting, but still useful. In fact, you use JLabels throughout applications not only as text descriptions, but also as picture descriptions. Any time you see a picture in a Swing application, chances are it's a JLabel. JLabel doesn't have many methods for a Swing beginner outside of what you might expect. The basic methods involve setting the text, image, alignment, and other components the label describes:
• get/setText(): Gets/sets the text in the label.
• get/setIcon(): Gets/sets the image in the label.
• get/setHorizontalAlignment(): Gets/sets the horizontal position of the text.
• get/setVerticalAlignment(): Gets/sets the vertical position of the text.
• get/setDisplayedMnemonic(): Gets/sets the mnemonic (the underlined character) for the label.
• get/setLabelFor(): Gets/sets the component this label is attached to; so when a user presses Alt+mnemonic, the focus goes to the specified component.
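The methods above can be sketched in a few lines. This is a minimal example, not the tutorial's own code; the "Name:" label, the field, and the class name are arbitrary choices for illustration.

```java
import java.awt.event.KeyEvent;
import javax.swing.JLabel;
import javax.swing.JTextField;

// A label that describes a text field, with a mnemonic so that
// Alt+N moves focus to the field the label is attached to.
public class LabelSketch {
    public static JLabel buildNameLabel(JTextField nameField) {
        JLabel label = new JLabel("Name:");
        label.setDisplayedMnemonic(KeyEvent.VK_N); // underlines the "N"
        label.setLabelFor(nameField);              // Alt+N focuses the field
        return label;
    }

    public static void main(String[] args) {
        JTextField field = new JTextField(20);
        JLabel label = buildNameLabel(field);
        System.out.println(label.getText() + " attached: " + (label.getLabelFor() == field));
    }
}
```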
JButton
The basic action component in Swing, a JButton, is the push button you see with the OK and Cancel in every window; it does exactly what you expect a button to do -- you click it and something happens. What exactly happens? Well, you have to define that (see Events for more information). A JButton in action looks like this:
The JButton
The methods you use to change the JButton properties are similar to the JLabel methods (and you'll find they're similar across most Swing components). They control the text, the images, and the orientation:
• get/setText(): Gets/sets the text in the button.
• get/setIcon(): Gets/sets the image in the button.
• get/setDisplayedMnemonic(): Gets/sets the mnemonic (the underlined character) that when combined with the Alt button, causes the button to click.
In addition to these methods, I'll introduce another group of methods the JButton contains. These methods take advantage of all the different states of a button. A state is a property that describes a component, usually in a true/false setting. In the case of a JButton, it contains the following possible states: active/inactive, selected/not selected, mouse-over/mouse-off, pressed/unpressed. In addition, you can combine states, so that, for example, a button can be selected with a mouse-over. Now you might be asking yourself what the heck you're supposed to do with all these states. As an example, go up to the Back button on your browser. Notice how the image changes when you mouse over it, and how it changes when you press it. This button takes advantage of the various states. Using different images with each state is a popular and effective way to indicate to a user that interaction is taking place. The state methods on a JButton are:
• get/setDisabledIcon()
• get/setDisabledSelectedIcon()
• get/setIcon()
• get/setPressedIcon()
• get/setRolloverIcon()
• get/setRolloverSelectedIcon()
• get/setSelectedIcon()
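Here is a hedged sketch of the Back-button example from above. The solid-color icons generated in code simply stand in for real artwork (in practice you would load image files); the class name is made up for illustration.

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JButton;

// A JButton that changes its icon per state: normal, mouse-over, pressed.
public class ButtonStates {
    // Build a small solid-color icon in memory as a stand-in for real art.
    static ImageIcon solidIcon(Color c) {
        BufferedImage img = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
        Graphics g = img.getGraphics();
        g.setColor(c);
        g.fillRect(0, 0, 16, 16);
        g.dispose();
        return new ImageIcon(img);
    }

    public static JButton buildBackButton() {
        JButton back = new JButton("Back", solidIcon(Color.GRAY));
        back.setRolloverIcon(solidIcon(Color.BLUE));      // shown on mouse-over
        back.setPressedIcon(solidIcon(Color.RED));        // shown while pressed
        back.setDisabledIcon(solidIcon(Color.LIGHT_GRAY)); // shown when inactive
        return back;
    }

    public static void main(String[] args) {
        JButton b = buildBackButton();
        System.out.println(b.getText() + " has rollover icon: " + (b.getRolloverIcon() != null));
    }
}
```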
JTextField
The basic text component in Swing is the JTextField, and it allows a user to enter text into the UI. I'm sure you're familiar with a text field; you had to use one to enter your user name and password to take this tutorial. You enter text, delete text, highlight text, and move the caret around -- and Swing takes care of all of that for you. As a UI developer, there's really little you need to do to take advantage of the
JTextField.
In any case, this what a JTextField looks like in action:
The JTextField
You need to concern yourself with only one method when you deal with a JTextField
-- and that should be obvious -- the one that sets the text: get/setText(), which gets/sets the text inside the JTextField.
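Using a JTextField really is that simple; a short sketch (the field contents here are arbitrary sample values, not part of the tutorial's code):

```java
import javax.swing.JTextField;

// Create a text field, set its text, and read it back.
public class TextFieldSketch {
    public static JTextField buildNameField() {
        JTextField nameField = new JTextField(20); // roughly 20 columns wide
        nameField.setText("Duke");                 // pre-fill the field
        return nameField;
    }

    public static void main(String[] args) {
        System.out.println("Entered: " + buildNameField().getText());
    }
}
```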
JFrame
So far I've talked about three basic building blocks of Swing, the label, button, and text field; but now you need somewhere to put them. They can't just float around on the screen, hoping the user knows how to deal with them. The JFrame class does just that -- it's a container that lets you add other components to it in order to organize them and present them to the user. It contains many other bonuses, but I think it's easiest to see a picture of it first:
The JFrame
A JFrame actually does more than let you place components on it and present it to
the user. For all its apparent simplicity, it's actually one of the most complex components in the Swing packages. To greatly simplify why, the JFrame acts as bridge between the OS-independent Swing parts and the actual OS it runs on. The JFrame registers as a window in the native OS and by doing so gets many of the
familiar OS window features: minimize/maximize, resizing, and movement. For the purpose of this tutorial though, it is quite enough to think of the JFrame as the palette you place the components on. Some of the methods you can call on a JFrame to change its properties are:
• get/setTitle(): Gets/sets the title of the frame.
• get/setState(): Gets/sets the frame to be minimized, maximized, etc.
• is/setVisible(): Gets/sets the frame to be visible, in other words, appear on the screen.
• get/setLocation(): Gets/sets the location on the screen where the frame should appear.
• get/setSize(): Gets/sets the size of the frame.
• add(): Adds components to the frame.
A simple application
Like all "Introduction to x" tutorials, this one has the requisite HelloWorld demonstration. This example, however, is useful not only to see how a Swing app works but also to ensure your setup is correct. Once you get this simple app to work, every example after this one will work as well. The image below shows the completed example:
HelloWorld example
Your first step is to create the class. A Swing application that places components on a JFrame needs to subclass the JFrame class, like this:
public class HelloWorld extends JFrame
By doing this, you get all the JFrame properties outlined above, most importantly native OS support for the window. The next step is to place the components on the screen. In this example, you use a null layout. You will learn more about layouts and layout managers later in the tutorial. For this example though, the numbers indicate the pixel position on the JFrame:
public HelloWorld() {
    super();
    this.setSize(300, 200);
    this.getContentPane().setLayout(null);
    this.add(getJLabel(), null);
    this.add(getJTextField(), null);
    this.add(getJButton(), null);
    this.setTitle("HelloWorld");
}
private javax.swing.JLabel jLabel;
private javax.swing.JTextField jTextField;
private javax.swing.JButton jButton;

private javax.swing.JLabel getJLabel() {
    if (jLabel == null) {
        jLabel = new javax.swing.JLabel();
        jLabel.setBounds(34, 49, 53, 18);
        jLabel.setText("Name:");
    }
    return jLabel;
}

private javax.swing.JTextField getJTextField() {
    if (jTextField == null) {
        jTextField = new javax.swing.JTextField();
        jTextField.setBounds(96, 49, 160, 20);
    }
    return jTextField;
}

private javax.swing.JButton getJButton() {
    if (jButton == null) {
        jButton = new javax.swing.JButton();
        jButton.setBounds(103, 110, 71, 27);
        jButton.setText("OK");
    }
    return jButton;
}
Now that the components are laid out on the JFrame, you need the JFrame to show up on the screen and make your application runnable. As in all Java applications, you must add a main method to make a Swing application runnable. Inside this main method, you simply need to create your HelloWorld application object and then call setVisible() on it:
public static void main(String[] args) {
    HelloWorld w = new HelloWorld();
    w.setVisible(true);
}
And you're done! That's all there is to creating the application.
Section 4. Additional Swing widgets
JComboBox
In this section we'll cover all the other components in the Swing library, how to use
them, and what they look like, which should give you a better idea of the power Swing gives you as a UI developer.
We'll start with the JComboBox. A combo box is the familiar drop-down selection, where users can either select none or one (and only one) item from the list. In some versions of the combo box, you can type in your own choice. A good example is the address bar in your browser; that is a combo box that lets you type in your own choice. Here's what the JComboBox looks like in Swing:
The JComboBox
The important functions with a JComboBox involve the data it contains. You need a way to set the data in the JComboBox, change it, and get the users' choice once they've made a selection. You can use the following JComboBox methods:
• addItem(): Adds an item to the JComboBox.
• get/setSelectedIndex(): Gets/sets the index of the selected item in JComboBox.
• get/setSelectedItem(): Gets/sets the selected object.
• removeAllItems(): Removes all the objects from the JComboBox.
• removeItem(): Removes a specific object from the JComboBox.
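A quick sketch of those data methods. The browser names are arbitrary sample data, and the generic JComboBox&lt;String&gt; form assumes Java 7 or later (the tutorial's JDK 5.0 used the raw type):

```java
import javax.swing.JComboBox;

// Fill a combo box, pre-select an item, then clear it.
public class ComboSketch {
    public static JComboBox<String> buildBrowserCombo() {
        JComboBox<String> combo = new JComboBox<>();
        combo.addItem("Firefox");
        combo.addItem("Opera");
        combo.addItem("Safari");
        combo.setSelectedIndex(1);   // pre-select the second item, "Opera"
        return combo;
    }

    public static void main(String[] args) {
        JComboBox<String> combo = buildBrowserCombo();
        System.out.println("Selected: " + combo.getSelectedItem());
        combo.removeAllItems();      // empty the list again
        System.out.println("Items left: " + combo.getItemCount());
    }
}
```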
JPasswordField
A slight variation on the JTextField is the JPasswordField, which lets you hide all the
characters displayed in the text field area. After all, what good is a password
everyone can read as you type it in? Probably not a very good one at all, and in this day and age, when your private data is constantly at risk, you need all the help you can get. Here's how a JPasswordField looks in Swing:
The JPasswordField
The additional "security" methods on a JPasswordField change the behavior of a JTextField slightly so you can't read the text:
• get/setEchoChar(): Gets/sets the character that appears in the JPasswordField every time a character is entered. The "echo" is not returned when you get the password; the actual character is returned instead.
• getText(): You should not use this function, as it poses possible security problems (for those interested, the String would be kept in memory, and a possible heap dump could reveal the password).
• getPassword(): This is the proper method to get the password from the JPasswordField, as it returns a char[] containing the password. To ensure proper security, the array should be cleared to 0 to ensure it does not remain in memory.
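The getPassword() advice above can be sketched as follows. The checkPassword() helper and the sample password are mine, added for illustration; setText() here just simulates user typing.

```java
import java.util.Arrays;
import javax.swing.JPasswordField;

// Read a password the recommended way and wipe the copy afterwards.
public class PasswordSketch {
    public static boolean checkPassword(JPasswordField field, char[] expected) {
        char[] typed = field.getPassword();  // never getText()
        boolean ok = Arrays.equals(typed, expected);
        Arrays.fill(typed, '\0');            // clear the array so it doesn't linger in memory
        return ok;
    }

    public static void main(String[] args) {
        JPasswordField field = new JPasswordField(12);
        field.setEchoChar('*');              // show * for each keystroke
        field.setText("s3cret");             // simulates user input
        System.out.println("Match: " + checkPassword(field, "s3cret".toCharArray()));
    }
}
```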
JCheckBox/JRadioButton
The JCheckBox and JRadioButton components present options to a user, usually in a multiple-choice format. What's the difference? From a practical standpoint, they aren't that different. They behave in the same way. However, in common UI practices, they have a subtle difference: JRadioButtons are usually grouped together to present to the user a question with a mandatory answer, and these answers are exclusive (meaning there can be only one answer to the question). The JRadioButton's behavior enforces this use. Once you select a JRadioButton, you cannot deselect it unless you select another radio button in the group. This, in effect, makes the choices unique and mandatory. The JCheckBox differs by letting you select/deselect at random, and allowing you to select multiple answers to the question.
Here's an example. The question "Are you a guy or a girl?" leads to two unique answer choices "Guy" or "Girl." The user must select one and cannot select both. On the other hand, the question "What are your hobbies?" with the answers "Running," "Sleeping," or "Reading" should not allow only one answer, because people can
have more than one hobby.
The class that ties groups of these JCheckBoxes or JRadioButtons together is the ButtonGroup class. It allows you to group choices together (such as "Guy" and "Girl") so that when one is selected, the other one is automatically deselected.
Here's what JCheckBox and JRadioButton look like in Swing:
JCheckBox and JRadioButton
The important ButtonGroup methods to remember are:
• add(): Adds a JCheckBox or JRadioButton to the ButtonGroup.
• getElements(): Gets all the components in the ButtonGroup, allowing you to iterate through them to find the one selected.
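The "Guy or Girl" example above can be sketched like this; the helper that walks getElements() to find the selection is my addition for illustration.

```java
import java.util.Enumeration;
import javax.swing.AbstractButton;
import javax.swing.ButtonGroup;
import javax.swing.JRadioButton;

// Two mutually exclusive radio buttons tied together by a ButtonGroup.
public class GroupSketch {
    public static ButtonGroup buildGenderGroup() {
        JRadioButton guy = new JRadioButton("Guy", true); // selected initially
        JRadioButton girl = new JRadioButton("Girl");
        ButtonGroup group = new ButtonGroup();
        group.add(guy);
        group.add(girl);
        girl.setSelected(true);  // the group automatically deselects "Guy"
        return group;
    }

    // Iterate the group's elements to find the selected button's text.
    public static String selectedText(ButtonGroup group) {
        Enumeration<AbstractButton> e = group.getElements();
        while (e.hasMoreElements()) {
            AbstractButton b = e.nextElement();
            if (b.isSelected()) return b.getText();
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println("Selected: " + selectedText(buildGenderGroup()));
    }
}
```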
JMenu/JMenuItem/JMenuBar
The JMenu, JMenuItem, and JMenuBar components are the main building blocks to developing the menu system on your JFrame. The base of any menu system is the JMenuBar. It's plain and boring, but it's required because every JMenu and JMenuItem builds off it. You use the setJMenuBar() method to attach the JMenuBar to the JFrame. Once it's anchored onto the JFrame, you can add all the menus, submenus, and menu items you want.
The JMenu/JMenuItem difference might seem obvious, but the real distinction is underneath the covers and isn't what it appears to be. If you look at the class hierarchy, JMenu is actually a subclass of JMenuItem. On the surface, though, they play different roles: you use a JMenu to contain other JMenuItems and JMenus, while a JMenuItem, when chosen, triggers an action.
The JMenuItem also supports the notion of a shortcut key. As in most applications you've used, Swing applications allow you to press Ctrl+(a key) to trigger an action as if the menu item itself was selected. Think of the Ctrl+X and Ctrl+V you use to cut and paste.
In addition, both JMenu and JMenuItem support mnemonics. You use the Alt key in
association with a letter to mimic the selection of the menu itself (for example, pressing Alt+F then Alt+x closes an application in Windows).
Here's what a JMenuBar with JMenus and JMenuItems looks like in Swing:
JMenuBar, JMenu, and JMenuItem
The important methods you need for these classes are:
• JMenuItem and JMenu:
• get/setAccelerator(): Gets/sets the Ctrl+key you use for shortcuts.
• get/setText(): Gets/sets the text for the menu.
• get/setIcon(): Gets/sets the image used in the menu.
• JMenu only:
• add(): adds another JMenu or JMenuItem to the JMenu (creating a nested menu).
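Putting those pieces together, here is a sketch of a File menu with a mnemonic (Alt+F) and an accelerator (Ctrl+X) on an Exit item. The menu labels are arbitrary; on a real frame you would attach the bar with setJMenuBar(bar).

```java
import java.awt.event.KeyEvent;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JMenuItem;
import javax.swing.KeyStroke;

// Build a JMenuBar holding one JMenu with one JMenuItem.
public class MenuSketch {
    public static JMenuBar buildMenuBar() {
        JMenuItem exit = new JMenuItem("Exit");
        exit.setMnemonic(KeyEvent.VK_X);            // Alt+x while the menu is open
        exit.setAccelerator(KeyStroke.getKeyStroke( // Ctrl+X from anywhere
                KeyEvent.VK_X, KeyEvent.CTRL_DOWN_MASK));

        JMenu file = new JMenu("File");
        file.setMnemonic(KeyEvent.VK_F);            // Alt+F opens the menu
        file.add(exit);

        JMenuBar bar = new JMenuBar();
        bar.add(file);
        return bar;                                 // attach with frame.setJMenuBar(bar)
    }

    public static void main(String[] args) {
        System.out.println("Menus: " + buildMenuBar().getMenuCount());
    }
}
```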
JSlider
You use the JSlider in applications to allow for a change in a numerical value. It's a quick and easy way to let users visually get feedback on not only their current choice, but also their range of acceptable values. If you think about it, you could provide a text field and allow your user to enter a value, but then you'd have the added hassle of ensuring that the value is a number and also that it fits in the required numerical range. As an example, if you have a financial Web site, and it asks what percent you'd like to invest in stocks, you'd have to check the values typed into a text field to ensure they are numbers and are between 0 and 100. If you use a JSlider instead, you are guaranteed that the selection is a number within the required range.
In Swing, a JSlider looks like this:
The JSlider
The important methods in a JSlider are:
• get/setMinimum(): Gets/sets the minimum value you can select.
• get/setMaximum(): Gets/sets the maximum value you can select.
• get/setOrientation(): Gets/sets the JSlider to be an up/down or left/right slider.
• get/setValue(): Gets/sets the initial value of the JSlider.
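The stocks-percentage example above can be sketched directly; the tick-mark calls are an optional extra I've added to show common usage.

```java
import javax.swing.JSlider;
import javax.swing.SwingConstants;

// A slider that guarantees the chosen value is a number from 0 to 100.
public class SliderSketch {
    public static JSlider buildPercentSlider() {
        JSlider slider = new JSlider(SwingConstants.HORIZONTAL);
        slider.setMinimum(0);
        slider.setMaximum(100);
        slider.setValue(50);             // start at 50 percent
        slider.setMajorTickSpacing(25);  // optional: tick marks every 25
        slider.setPaintTicks(true);
        return slider;
    }

    public static void main(String[] args) {
        System.out.println("Percent: " + buildPercentSlider().getValue());
    }
}
```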
JSpinner
Much like the JSlider, you can use the JSpinner to allow a user to select an integer value. One major advantage of the JSpinner is its compact size compared to the JSlider. Its disadvantage, though, is that you cannot easily set its bounds.
However, the comparison between the two components ends there. The JSpinner is much more flexible and can be used to choose between any group of values. Besides choosing between numbers, it can be used to choose between dates, names, colors, anything. This makes the JSpinner extremely powerful by allowing
you to provide a component that contains only predetermined choices. In this way, it
is similar to JComboBox, although their use shouldn't be interchanged. You should
use a JSpinner only for logically consecutive choices -- numbers and dates being the most logical choices. A JComboBox, on the other hand, is a better choice to present seemingly random choices that have no connection between one choice and the next.
A JSpinner looks like this:
The JSpinner
The important methods are:
• get/setValue(): Gets/sets the initial value of the JSpinner, which in the basic instance, needs to be an integer.
• getNextValue(): Gets the next value that will be selected after pressing the up-arrow button.
• getPreviousValue(): Gets the previous value that will be selected after pressing the down-arrow button.
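To bound a JSpinner you give it a model. A sketch, using SpinnerNumberModel (the age values are arbitrary sample numbers):

```java
import javax.swing.JSpinner;
import javax.swing.SpinnerNumberModel;

// A spinner restricted to integers 0..120, starting at 18, stepping by 1.
public class SpinnerSketch {
    public static JSpinner buildAgeSpinner() {
        // SpinnerNumberModel(initial, minimum, maximum, step)
        return new JSpinner(new SpinnerNumberModel(18, 0, 120, 1));
    }

    public static void main(String[] args) {
        JSpinner spinner = buildAgeSpinner();
        System.out.println("Value: " + spinner.getValue());
        System.out.println("Next:  " + spinner.getNextValue());
        System.out.println("Prev:  " + spinner.getPreviousValue());
    }
}
```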
JToolBar
The JToolBar acts as the palette for other components (JButtons, JComboBoxes, etc.) that together form the toolbars you are familiar with in most applications. The toolbar allows a program to place commonly used commands in a quick-to-find location, and group them together in groups of common commands. Often times, but not always, the toolbar buttons have a matching command in the menu bar. Although this is not required, it has become common practice and you should attempt to do that as well.
The JToolBar also offers another function you have seen in other toolbars, the ability to "float" (that is, become a separate frame on top of the main frame).
The image below shows a non-floating JToolBar:
Non-floating JToolBar
The important method to remember with a JToolBar is: is/setFloatable(), which gets/sets whether the JToolBar can float.
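A sketch of a small toolbar; the button labels are arbitrary, and addSeparator() (not discussed above) is a JToolBar method I've used to show how commands are visually grouped.

```java
import javax.swing.JButton;
import javax.swing.JToolBar;

// A pinned (non-floating) toolbar with two groups of commands.
public class ToolBarSketch {
    public static JToolBar buildToolBar() {
        JToolBar bar = new JToolBar("Main");
        bar.add(new JButton("New"));
        bar.add(new JButton("Open"));
        bar.addSeparator();            // visual gap between command groups
        bar.add(new JButton("Print"));
        bar.setFloatable(false);       // don't allow dragging it off the frame
        return bar;
    }

    public static void main(String[] args) {
        System.out.println("Floatable: " + buildToolBar().isFloatable());
    }
}
```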
JToolTip
You've probably seen JToolTips everywhere but never knew what they were called. They're kind of like the plastic parts at the end of your shoelaces -- they're everywhere, but you don't know the proper name (they're called aglets, in case you're wondering). JToolTips are the little "bubbles" that pop up when you hold your mouse over something. They can be quite useful in applications, providing help for difficult-to-use items, extending information, or even showing the complete text of an item in a crowded UI. They are triggered in Swing by leaving the mouse over a component for a set amount of time; they usually appear about a second after the mouse becomes inactive. They stay visible as long as the mouse remains over that
component.
The great part about the JToolTip is its ease of use. The setToolTipText() method is a method in the JComponent class, meaning every Swing component can have a tool tip associated with it. Although the JToolTip is a Swing class itself, it really provides no additional functionality for your needs at this time, and shouldn't be created directly. You can access and use it by calling the setToolTipText() function in JComponent.
Here's what a JToolTip looks like:
A JToolTip
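Attaching a tip is one line; a sketch (the button and tip text are arbitrary examples):

```java
import javax.swing.JButton;

// setToolTipText() is inherited from JComponent, so it works on any component.
public class ToolTipSketch {
    public static JButton buildSaveButton() {
        JButton save = new JButton("Save");
        save.setToolTipText("Save the current reservation to disk");
        return save;
    }

    public static void main(String[] args) {
        System.out.println(buildSaveButton().getToolTipText());
    }
}
```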
JOptionPane
The JOptionPane is something of a "shortcut" class in Swing. Often times as a UI developer you'd like to present a quick message to your users, letting them know about an error or some information. You might even be trying to get some quick data, such as a name or a number. In Swing, the JOptionPane class provides a shortcut for these rather mundane tasks. Rather than make every developer reinvent the wheel, Swing has provided this basic but useful class to give UI developers an easy way to get and receive simple messages.
Here's a JOptionPane:
A JOptionPane
The somewhat tricky part of working with JOptionPane is all the possible options you can use. While simple, it still provides numerous options that can cause confusion. One of the best ways to learn JOptionPane is to play around with it; code it and see what pops up. The component lets you change nearly every aspect of it: the title of
the frame, the message itself, the icon displayed, the button choices, and whether or not a text response is necessary. There are far too many possibilities to list here in this tutorial, and your best bet is to visit the JOptionPane API page to see its many possibilities.
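A small sketch of the pieces involved. The static show* shortcuts are commented out here because each one opens a blocking modal dialog; the constructed pane shows the message and message type those shortcuts assemble for you. The message strings are arbitrary examples.

```java
import javax.swing.JOptionPane;

// Construct an error pane directly; the show* methods build one like this.
public class OptionPaneSketch {
    public static JOptionPane buildErrorPane() {
        return new JOptionPane("Could not save the file.",
                               JOptionPane.ERROR_MESSAGE);
    }

    public static void main(String[] args) {
        JOptionPane pane = buildErrorPane();
        System.out.println(pane.getMessage());
        // Typical one-line shortcuts (each opens a modal dialog):
        // JOptionPane.showMessageDialog(null, "Could not save the file.",
        //         "Error", JOptionPane.ERROR_MESSAGE);
        // String name = JOptionPane.showInputDialog(null, "Your name?");
    }
}
```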
JTextArea
The JTextArea takes the JTextField a step further. While the JTextField is limited to one line of text, the JTextArea extends that capability by allowing for multiple rows of text. Think of it as an empty page allowing you to type anywhere in it. As you can probably guess, the JTextArea contains many of the same functions as the JTextField; after all, they are practically the exact same component. However, the JTextArea offers a few additional important functions that set it apart. These features
include the ability to word wrap (that is, wrap a long word to the next line instead of cutting it off mid-word) and the ability to wrap the text (that is, move long lines of text
to the next line instead of creating a very long line that would require a horizontal
scroll bar).
A JTextArea in Swing looks like you'd probably expect:
A JTextArea
The important methods to enable line wrapping and word wrapping are:
• is/setLineWrap(): Sets whether the line should wrap when it gets too long.
• is/setWrapStyleWord(): Sets whether a long word should be moved to the next line when it is too long.
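Both wrap settings in one sketch (the size and text are arbitrary sample values):

```java
import javax.swing.JTextArea;

// A multi-line text area that wraps long lines between words.
public class TextAreaSketch {
    public static JTextArea buildNotesArea() {
        JTextArea notes = new JTextArea(5, 30); // 5 rows, 30 columns
        notes.setLineWrap(true);       // break long lines instead of scrolling sideways
        notes.setWrapStyleWord(true);  // break between words, not mid-word
        notes.setText("Type your flight notes here.");
        return notes;
    }

    public static void main(String[] args) {
        System.out.println("Wraps words: " + buildNotesArea().getWrapStyleWord());
    }
}
```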
JScrollPane
Building off of the example above, suppose that the JTextArea contains too much text to fit in the given space. Then what? If you think that scroll bars will appear automatically, unfortunately, you are wrong. The JScrollPane fills that gap, though, providing a Swing component to handle all scroll bar-related actions. So while it might be a slight pain to provide a scroll pane for every component that could need it, once you add it, it handles everything automatically, including hiding/showing the scroll bars when needed.
You don't have to deal with the JScrollPane directly, outside of creating it using the component to be wrapped. Building off the above example, by calling the JScrollPane constructor with the JTextArea, you create the ability for the JTextArea to scroll when the text gets too long:
JScrollPane scroll = new JScrollPane(getTextArea());
add(scroll);
This updated example looks like this:
JScrollPane example
The JScrollPane also exposes the two JScrollBars it will create. These JScrollBar components also contain methods you can use to change their behavior (although they are outside the scope of this tutorial).
The methods you need to work with JScrollPane are:
• getHorizontalScrollBar(): Returns the horizontal JScrollBar component.
• getVerticalScrollBar(): Returns the vertical JScrollBar component.
• get/setHorizontalScrollBarPolicy(): This "policy" can be one of three things: Always, Never, or As Needed.
• get/setVerticalScrollBarPolicy(): The same as the horizontal function.
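The policies can be sketched like this. A line-wrapped text area never needs a horizontal bar, so "never" is a sensible horizontal policy here; the sizes are arbitrary.

```java
import javax.swing.JScrollPane;
import javax.swing.JTextArea;

// Wrap a text area and pin the scroll bar policies explicitly.
public class ScrollSketch {
    public static JScrollPane buildScrollingNotes() {
        JTextArea notes = new JTextArea(5, 30);
        notes.setLineWrap(true);
        JScrollPane scroll = new JScrollPane(notes);
        scroll.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
        scroll.setHorizontalScrollBarPolicy(JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);
        return scroll;
    }

    public static void main(String[] args) {
        JScrollPane scroll = buildScrollingNotes();
        System.out.println("Vertical policy: " + scroll.getVerticalScrollBarPolicy());
    }
}
```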
JList
The JList is a useful component for presenting many choices to a user. You can think of it as an extension to the JComboBox: a JList displays more options at once and adds the capability for multiple selections. The choice between a JList and JComboBox often comes down to these two features: If you require multiple selections, or if the options include more than 15 choices (although that number is not a general rule), you should always choose a JList.
You should use the JList in conjunction with the JScrollPane, as demonstrated above, because it can present more options than its space can contain.
The JList contains the notion of a selection model (also seen in JTables), where you can set your JList to accept different types of choices. These types are the single selection, where you can only select one choice, the single interval selection, where you can only select contiguous choices, but as many as desired, or the multiple interval selection, where you can select any number of choices in any combination.
The JList is the first of what I call the "complex components," which also include the JTable and the JTree that allow a large amount of custom changes, including changing the way the UI looks and how it deals with data. Because this tutorial only strives to cover the basics, I won't get into these more advanced functions, but it's something to remember when working with these components -- they present a more difficult challenge than those components I've introduced up to this point.
The JList appears like this in Swing:
The JList
There are many functions in the JList to deal with the data, and as I said, these just touch the surface of everything required to work in detail with the JList. Here are the basic methods:
• get/setSelectedIndex(): Gets/sets the selected row of the list; for multiple-selection lists, the companion getSelectedIndices() returns an int[].
• get/setSelectionMode(): As explained above, gets/sets the selection mode to be either single, single interval, or multiple interval.
• setListData(): Sets the data to be used in the JList.
• get/setSelectedValue(): Gets the selected object (as opposed to the selected row number).
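Putting these methods together with the scroll pane advice from above, here's a hedged sketch; the city names are made up:

```java
import javax.swing.JList;
import javax.swing.JScrollPane;
import javax.swing.ListSelectionModel;

public class ListDemo {
    public static void main(String[] args) {
        String[] cities = { "Austin", "Boston", "Chicago", "Denver" };

        JList list = new JList();
        list.setListData(cities);
        // Any number of choices, in any combination
        list.setSelectionMode(ListSelectionModel.MULTIPLE_INTERVAL_SELECTION);

        list.setSelectedIndex(2);
        System.out.println(list.getSelectedValue()); // prints "Chicago"

        // Wrap it in a JScrollPane before adding it to a container
        JScrollPane scroll = new JScrollPane(list);
    }
}
```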
JTable
Think of an Excel spreadsheet when you think of a JTable and that should give you a clear picture of what the JTable does in Swing. It shares many of the same characteristics: cells, rows, columns, moving columns, and hiding columns. The JTable takes the idea of a JList a step further. Instead of displaying data in one column, it displays it in multiple columns. Let's use a person as an example. A JList would only be able to display one property of a person -- his or her name, for instance. A JTable, however, would be able to display multiple properties -- a name, an age, an address, etc. The JTable is the Swing component that allows you to provide the most information about your data.

Unfortunately, as a trade-off, it is also notoriously the most difficult Swing component to tackle. Many UI developers have gotten headaches trying to learn every detail of a JTable. I hope to save you from that here, and just get the ball rolling with your JTable knowledge.
Many of the same concepts in JLists extend to JTables as well, including the idea of different selection intervals, for example. But the one-row idea of a JList changes to the cell structure of a JTable. This means you have different ways to make these selections in JTables, as columns, rows, or individual cells.
In Swing, a JTable looks like this:
A JTable
Ultimately, a majority of the functionality of a JTable is beyond the scope of this tutorial; "Intermediate Swing" will go into more detail on this complex component.
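Even so, getting a basic JTable on the screen takes only a constructor call. This minimal sketch uses made-up data:

```java
import javax.swing.JScrollPane;
import javax.swing.JTable;

public class TableDemo {
    public static void main(String[] args) {
        String[] columns = { "Name", "Age" };
        Object[][] rows = {
            { "Alice", "30" },
            { "Bob",   "25" }
        };

        // The simplest constructor: raw data plus column names
        JTable table = new JTable(rows, columns);
        System.out.println(table.getRowCount());    // prints "2"
        System.out.println(table.getValueAt(0, 0)); // prints "Alice"

        // Wrapping in a scroll pane also makes the column headers appear
        JScrollPane scroll = new JScrollPane(table);
    }
}
```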
JTree
The JTree is another complex component that is not as difficult to use as the JTable but isn't as easy as the JList either. The tricky part of working with the JTree is the required data models.
The JTree takes its functionality from the concept of a tree with branches and leaves. You might be familiar with this concept from working with Internet Explorer in Windows -- you can expand and collapse a branch to show the different leaves you can select and deselect.
You will most likely find that the tree is not as useful in an application as a table or a list, so there aren't as many helpful examples on the Internet. In fact, like the JTable, the JTree does not have any beginner-level functions. If you decide to use JTree, you will immediately be at the intermediate level and must learn the concepts that go with it. On that note, the example application does not cover the JTree, so unfortunately, neither the beginner nor the intermediate tutorial will delve into this less popular component.
However, there are times when a tree is the logical UI component for your needs. File/directory systems are one example, as in Internet Explorer, and the JTree is the best component in the case where data takes a hierarchical structure -- in other words, when the data is in the form of a tree.
In Swing, a JTree looks like this:
A JTree
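For a taste of that hierarchical structure, a few lines are enough to build a tree. This hedged sketch uses made-up node names:

```java
import javax.swing.JTree;
import javax.swing.tree.DefaultMutableTreeNode;

public class TreeDemo {
    public static void main(String[] args) {
        DefaultMutableTreeNode root = new DefaultMutableTreeNode("Files");
        DefaultMutableTreeNode branch = new DefaultMutableTreeNode("Documents");
        branch.add(new DefaultMutableTreeNode("letter.txt"));
        branch.add(new DefaultMutableTreeNode("resume.txt"));
        root.add(branch);

        // JTree wraps the nodes in a DefaultTreeModel for you
        JTree tree = new JTree(root);
        System.out.println(tree.getModel().getChildCount(root)); // prints "1"
    }
}
```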
Section 5. Swing concepts
Layouts, models, and events, oh my!
Now that you know about most (but definitely not all) of the components you can use to make a UI, you have to actually do something with them. You can't just place them randomly on the screen and expect them to instantly work. You must place them in specific spots, react to interaction with them, update them based on this interaction, and populate them with data. More learning is necessary to fill in the gaps of your UI knowledge with the other important parts of a UI.
Therefore, let's examine:
• Layouts: Swing includes a lot of layouts, which are classes that handle where a component is placed on the application and what should happen to them when the application is resized or components are deleted or added.
• Events: You need to respond to the button presses, the mouse clicks, and everything else a user can do to a UI. Think about what would happen if you didn't -- users would click and nothing would change.
• Models: For the more advanced components (Lists, Tables, Trees), and even some easier ones such as the JComboBox, models are the most efficient way to deal with the data. They remove most of the work of handling the data from the actual component itself (think back to the earlier MVC discussion) and provide a wrapper for common data object classes (such as Vector and ArrayList).
Easy layouts
As mentioned in the last section, a layout handles the placement of components on the application for you. Your first question might be "why can't I just tell it where to go by using pixels?" Well, you can, but then you'd immediately be in trouble if the window was resized, or worse, when users changed their screen resolutions, or even when someone tried it on another OS. Layout managers take all those worries away. Not everyone uses the same settings, so layout managers work to create "relative" layouts, letting you specify how things should get resized relative to how the other components are laid out. Here's the good part: it's easier than it sounds. You simply call setLayout(yourLayout) to set up the layout manager. Subsequent calls to add() add the component to the container and let the layout manager take care of placing it where it belongs.
Numerous layouts are included in Swing nowadays; it seems a new one arrives every release to serve another purpose. However, some tried-and-true layouts have been around forever -- and by forever, I mean forever, since the first release of the Java language back in 1995. These layouts are the FlowLayout, GridLayout, and BorderLayout.
The FlowLayout lays out components from left to right. When it runs out of space, it moves down to the next line. It is the simplest layout to use, and conversely, also the least powerful layout:
setLayout(new FlowLayout());
add(new JButton("Button1"));
add(new JButton("Button2"));
add(new JButton("Button3"));
The FlowLayout at work
The GridLayout does exactly what you'd think: it lets you specify the number of rows and the number of columns and then places components in these cells as they are added:
setLayout(new GridLayout(1,2));
add(new JButton("Button1"));
add(new JButton("Button2"));
add(new JButton("Button3"));
The GridLayout at work
The BorderLayout is still a very useful layout manager, even with all the other new ones added to Swing. Even experienced UI developers use the BorderLayout often. It uses the notions of North, South, East, West, and Center to place components on the screen:
setLayout(new BorderLayout());
add(new JButton("Button1"), "North");
add(new JButton("Button2"), "Center");
add(new JButton("Button3"), "West");
The BorderLayout at work
GridBagLayout
While the examples above are good for easy layouts, more advanced UIs need a more advanced layout manager. That's where the GridBagLayout comes into play. Unfortunately, it is extremely confusing and difficult to work with; anyone who has worked with it will agree, and I can't disagree either. But despite its difficulties, it's probably the best way to create a clean-looking UI with the layout managers built into Swing.
Here's my first bit of advice: In the newest versions of Eclipse, there's a built-in visual builder that automatically generates the required GridBagLayout code for each screen. Use it! It will save countless hours of fiddling around with the numbers to make it just right. So while I could use this slide to explain how GridBagLayout works and how to tweak it to make it work best, I'll just offer my advice to find a visual builder and generate the code. It will save you hours of work.
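For the curious, here's roughly what generated GridBagLayout code looks like. This is only a hedged sketch, not the output of any particular builder:

```java
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import javax.swing.JButton;
import javax.swing.JPanel;

public class GridBagDemo {
    public static void main(String[] args) {
        JPanel panel = new JPanel(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.fill = GridBagConstraints.HORIZONTAL;

        c.gridx = 0; c.gridy = 0;
        panel.add(new JButton("One"), c);

        c.gridx = 1;
        panel.add(new JButton("Two"), c);

        c.gridx = 0; c.gridy = 1;
        c.gridwidth = 2;   // span both columns
        c.weightx = 1.0;   // absorb extra horizontal space when resized
        panel.add(new JButton("Wide"), c);

        System.out.println(panel.getComponentCount()); // prints "3"
    }
}
```

Every component gets its own GridBagConstraints settings, which is exactly the fiddly bookkeeping a visual builder handles for you.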
Events
Finally, we get to one of the most important parts of Swing: dealing with events and reacting to interaction with the UI. Swing handles events by using the event/listener model. This model works by allowing certain classes to register for events from a component. This class that registers for events is called a listener, because it waits for events to occur from the component and then takes an action when that happens. The component itself knows how to "fire" events (that is, it knows the types of interaction it can generate and how to let the listeners know when that interaction happens). It communicates this interaction with events, classes that contain information about the interaction.
With the technical babble aside, let's look at some examples of how events work in Swing. I'll start with the simplest example, a JButton and printing out "Hello" on the console when it is pressed.
The JButton knows when it is pressed; this is handled internally, and there's no code needed to handle that. However, the listener needs to register to receive that event from the JButton so you can print out "Hello." The listener class does this by implementing the listener interface and then calling addActionListener() on the JButton:
// Create the JButton
JButton b = new JButton("Button");
// Register as a listener
b.addActionListener(new HelloListener());

class HelloListener implements ActionListener
{
    // The interface method to receive button clicks
    public void actionPerformed(ActionEvent e)
    {
        System.out.println("Hello");
    }
}
A JList works in a similar way. When someone selects something on a JList, you want to print out what object is selected to the console:
// myList is a JList populated with data
myList.addListSelectionListener(new ListSelectionListener()
{
    public void valueChanged(ListSelectionEvent e)
    {
        Object o = myList.getSelectedValue();
        System.out.println(o.toString());
    }
});
From these two examples, you should be able to understand how the event/listener model works in Swing. In fact, every interaction in Swing is handled this way, so by understanding this model, you can instantly understand how every event is handled in Swing and react to any possible interaction a user might throw at you.
Models
By now, you should know about the Java Collections, a set of Java classes that handle data. These classes include ArrayList, HashMap, and Set. Most applications use these classes ubiquitously as they shuttle data back and forth. However, one limitation arises when you need to use these data classes in a UI: a UI doesn't know how to display them. Think about it for a minute. If you have a JList and an ArrayList of some data object (such as a Person object), how does the JList know what to display? Does it display the first name, or both the first name and the last name?
That's where the idea of a model comes in. While the term model refers to the larger scope, in this tutorial's examples I use the term UI model to describe the classes that components use to display data.
Every component that deals with a collection of data in Swing uses the concept of a model, and it is the preferable way to use and manipulate data. It clearly separates the UI work from the underlying data (think back to the MVC example). The model works by describing to the component how to display the collection of data. What do I mean by describing? Each component requires a slightly different description:
• JComboBox requires its model to tell it what text to display as a choice and how many choices exist.
• JSpinner requires its model to tell it what text to display, and also what the previous and next choices are.
• JList also requires its model to tell it what text to display as a choice and how many choices exist.
• JTable requires much more: It requires the model to tell it how many columns and rows exist, the column names, the class of each column, and what text to display in each cell.
• JTree requires its model to tell it the root node, the parents, and the children for the entire tree.
Why do all this work, you might ask. Why do you need to separate all this functionality? Imagine this scenario: You have a complicated JTable with many columns of data, and you use this table in many different screens. If you suddenly decide to get rid of one of the columns, what would be easier: changing the code in every single JTable instance you used, or changing it in one model class you created to use with every JTable instance? Obviously, changing fewer classes is better.
Model examples
Let's take a look at how a model works by using it with an easy example, the JComboBox. In the earlier slide on the JComboBox, I showed you how to add items by calling addItem(). While this is acceptable for simple demonstrations, it isn't much use in a real application. After all, when there are 25 choices, and they are continually changing, do you really want to loop through them each time calling addItem() 25 times? Certainly not.
The JComboBox contains a method called setModel() that accepts an instance of the ComboBoxModel interface. You should use this method instead of the addItem() method to create the data in a JComboBox.
Suppose you have an ArrayList with the alphabet as its data ("A", "B", "C", etc.):
MyComboModel model = new MyComboModel(alphaList);
myComboBox.setModel(model);
public class MyComboModel implements ComboBoxModel
{
    private List data = new ArrayList();
    private int selected = 0;

    public MyComboModel(List list)
    {
        data = list;
    }

    public void setSelectedItem(Object o)
    {
        selected = data.indexOf(o);
    }

    public Object getSelectedItem()
    {
        return data.get(selected);
    }

    public int getSize()
    {
        return data.size();
    }

    public Object getElementAt(int i)
    {
        return data.get(i);
    }

    // ComboBoxModel extends ListModel, so these two methods are also
    // required; empty implementations suffice for static data
    public void addListDataListener(ListDataListener l) { }

    public void removeListDataListener(ListDataListener l) { }
}
The great part about using a model is that you can reuse it over and over again. As an example, say the JComboBox's data needs to change from letters of the alphabet to the numbers 1 to 27. You can achieve this change in one simple line that uses the new List of data to populate the JComboBox without using additional code:
myComboBox.setModel(new MyComboModel(numberList));
Models are a beneficial feature in Swing as they provide the ability for code reuse and make dealing with data much easier. As is often the case in large-scale applications, the server-side developers create and retrieve the data and pass it to the UI developer. It's up to the UI developer to deal with this data and display it properly, and models are the tools to get this done.
Section 6. Putting it all together
Example application
After all these examples, you probably want to see this stuff in action. Enough with the pretty pictures. Let's get down to a concrete example that ties together everything you've learned so far in this tutorial.
Here's the concept for the example application: a simple flight reservation system. It lets the user type in a departure and arrival city and then press a button to search. It has a fake database with flights stored in it. This database can be searched, and a table is used to display the results of the search. Once the table populates, users can select flights from the table and buy tickets by changing the number of tickets they desire and clicking a button.
It's a seemingly simple application that allows you to see all the parts of Swing in practice. This example application should answer any questions you might have had from previous sections. Before I start, let's look at the finished product:
Step 1: Lay out the components
As I mentioned earlier, there's little need to learn complex layouts because you can use a visual editor.
Step 2: Initialize the data
The application can't work without data. Let's think about what kind of data you need in this application. First, you need a list of cities to choose from for the departure and destination cities. Then, you need a list of flights to search.
For this example, I use some fake data because the focus of the application is on Swing not on the data. You create all the data in the DataHandler class. This class manages the departure and destination cities and also handles flight search and record retrieval.
The cities are stored as simple Strings. The flights, however, are stored in data objects called Flights that contain fields for the departure city, destination city, flight number, and number of available tickets.
Now, with all that red tape out of the way, let's get back to the application.
Step 3: Handling events
Let's examine the application and consider what actions must take place. First, you need to know when a user presses the Search button, so you can search the data for flights. Second, you need to know when a user selects the table of records to prevent possible errors when a user tries to buy a record with no records selected. Finally, you must be aware of when a user presses the Purchase button to send the purchasing data back to the data handler class.
Let's start with the Search button. As outlined above, you must call the addActionListener() method on the button to register for events from a button press. To keep things simple, I use the FlightReservation class to listen for all possible events. Here's the code to handle the Search button press:
String dest = getComboDest().getSelectedItem().toString();
String depart = getComboDepart().getSelectedItem().toString();
List l = DataHandler.searchRecords(depart, dest);
flightModel.updateData(l);
The two cities are gathered from the combo boxes and used to search the records for the corresponding flights. Once the flights are found, they are passed to the table's table model; more on how the table models work below. But know that once the table model has the updated data, it will display the results.
Next, let's examine what happens when a user presses the Purchase button:
Object o = flightModel.getData().get(getTblFlights().getSelectedRow());
int tixx = Integer.parseInt(getTxtNumTixx().getText());
DataHandler.updateRecords(o, tixx);
Now, conversely, when a user presses the Purchase button, the table model figures out which flight the user selected and then passes this record and the number of tickets the user wishes to purchase to the data handler.
Finally, you need to error-check and ensure that someone doesn't try to purchase a ticket without selecting a flight in the table. The easiest way to do this is to disable the components a user would use to purchase tickets (the text field and button) and only enable them when a user selects a row:
boolean selected = getTblFlights().getSelectedRow() > -1;
getLblNumTixx().setEnabled(selected);
getTxtNumTixx().setEnabled(selected);
getBtnPurchase().setEnabled(selected);
Step 4: Models
Next, let's look at the models you use to handle all the data flying back and forth in this application. By analyzing the application and going through this demo, you should clearly see that you need two models: a model for the JComboBoxes and a model for the JTable.
Let's begin with the easiest, the JComboBox's model. I won't paste the code in here because it's the same as the example a few slides ago (and in fact can be used for any of your JComboBoxes). There are some important things to note though:
Remember the advantage of using models, and you'll see it in use here. Although you only have one model class, you reuse it by creating two instances of it and supplying one to each of the JComboBoxes. That way both instances can handle their own data, but of course, you only write one class to do it. Here's how you set it up:
comboModel1 = new CityComboModel(DataHandler.getCities());
comboModel2 = new CityComboModel(DataHandler.getCities());
Let's move on to the JTable's model. This model is a bit more complicated than the JComboBox and requires a little more inspection. Let's start with your knowledge of the ComboBoxModel and see what you need to add for a JTable. Because a JTable contains data like a ComboBox, but in multiple columns, you need a lot more information from the model dealing with the column information. So, in addition to knowing the number of rows of data, you need to know the number of columns, the column names, and the value at an individual cell, instead of just the object itself. This allows you to not only display a data object, but also to display fields of a data object. In the case of this example, you don't display the Flight object; you instead display the fields of a departure city, destination city, flight number, and the number of tickets available. Below is the code you use to create the TableModel and how to set it on the JTable:
flightModel = new FlightTableModel();
getTblFlights().setModel(flightModel);
Because of the amount of code you need to create a TableModel, I'll hold off on pasting it here and instead direct you to the source code of the sample application (see Resources to download the code) to take a closer look at how it works. Also, this has really just touched on the TableModel. As I mentioned earlier, the JTable is the most difficult and complex component to work with in Swing, and its parts, including the TableModel, are not any easier. That said, I will revisit the TableModel in "Intermediate Swing" in addition to other JTable functionality.
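Still, the overall shape of a table model is worth a glance before you open the source. This hypothetical sketch (not the application's actual FlightTableModel) shows the methods a JTable asks of its model:

```java
import java.util.ArrayList;
import java.util.List;
import javax.swing.table.AbstractTableModel;

// Hypothetical minimal model; the real FlightTableModel lives in the sample code
public class SimpleTableModel extends AbstractTableModel {
    private String[] columns = { "Depart", "Destination" };
    private List data = new ArrayList();

    public void updateData(List newData) {
        data = newData;
        fireTableDataChanged(); // tells the JTable to redraw itself
    }

    public int getRowCount() { return data.size(); }

    public int getColumnCount() { return columns.length; }

    public String getColumnName(int col) { return columns[col]; }

    public Object getValueAt(int row, int col) {
        String[] rowData = (String[]) data.get(row);
        return rowData[col];
    }
}
```

Extending AbstractTableModel (rather than implementing TableModel directly) gives you the listener bookkeeping and the fireTableDataChanged() helper for free.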
Step 5: Bells and whistles
Users have come to expect a certain amount of bells and whistles in any application, both as extra functionality and also as a way to prevent the occurrence of errors. In this example, although the basic functionality of searching for a flight and purchasing tickets works, you haven't addressed possible errors that might happen. For error-proofing, you need to add an error message when a user attempts to order more tickets than are available. How do you display errors? If you think back to the slide on the JOptionPane, Swing has a ready-made component for this type of instant feedback.
Let's look at the error condition and see what triggers the error message:
try
{
    DataHandler.updateRecords(o, tixx);
}
catch (Exception ex)
{
    // display error message here
}
Now let's take care of the error message. Remember the JOptionPane and its plentiful options. Let's lay out the options you want in your error message before you decide what kind of JOptionPane to create. It should be an error message, not an informative message. Use a simple title such as "Error." The detailed message consists of what the exception says. And finally, the user has made an error, so simple OK and Cancel buttons should suffice.
Here's the code to create that exact JOptionPane:
JOptionPane.showConfirmDialog(this, ex.getMessage(), "Error",
    JOptionPane.OK_CANCEL_OPTION, JOptionPane.ERROR_MESSAGE);
And here's what it looks like:
Example error message
And in the end
Hopefully, you now understand how everything described in this tutorial came together to form a basic but functional Swing application. Unfortunately, I can't squeeze every possible line of code from the example application into these slides, although I encourage you to look through the example application to see for yourself how it all developed.
Especially note how I used the models to ease the process of dealing with the data. Using some rather simple code, you can handle any kind of data that both the JTable and JComboBox can receive. Also, note the event/listener relationships -- how the components that interact all take on the event/listener relationship, meaning that interested parties (other components especially) must register for these events.
Finally, if you decide to continue to the intermediate tutorial, I will build off your knowledge of this existing flight reservation system. So, if you choose to move to the next phase, please be sure you understand how that system works in its basic form.
Section 7. Summary
Summary
This tutorial introduced you to Swing and the concepts you need to know to create a basic, functional user interface. After completing this tutorial, you should have accomplished several things:
• You should be familiar with the Swing components. You should be able to recognize the components when you see them on the screen and apply them in your own applications by using their basic functions. In addition, you should have a general understanding of when you should use the components (for example, when to use a JCheckBox vs. when to use a JRadioButton).
• You learned how to lay out these components on the screen with layout managers. Layout managers have existed since Java first came out, and unfortunately, the most powerful layout manager is also the most difficult to use. However, visual editors (such as the one found in Eclipse) make the layout process incredibly easy by creating all the layout code automatically.
• You learned about the event model, mainly how Swing uses the event/listener model in all of its components to allow one component to accept user interaction and pass this interaction on to other classes. These other classes register for events from the component and take appropriate action when they receive events. The event/listener model is used throughout Swing, and you should learn it in depth to better work with every Swing component.
• You learned about data models and how they fit into the MVC architecture in Swing. Models allow components to display data without knowing anything about the data itself. They also let you reuse models, allowing multiple components that display similar data to use the same model, eliminating the need to create an original model for each instance of the component. In large-scale applications, models serve as the "translation" between server-side data objects and client-side Swing components.
It's vital to understand that this tutorial is not all-encompassing. It's not even meant to cover all the basics of Swing. There's just far too much to squeeze into this tutorial to give beginners a thorough introduction to Swing. I've hopefully pointed out the crucial elements, including the most commonly used components and their most commonly used functions. You should be aware that roughly twice as many components exist in Swing, and I encourage you to look through the Swing documentation to cover the ones I missed. Also, I've barely touched upon the functions of most of the components; each Swing component has dozens, even up to a hundred, functions you can potentially use.
But I don't want to rain on your parade. By finishing this tutorial you've learned quite enough to build a majority of your Swing applications, and the knowledge gained from this tutorial serves as a solid foundation if you explore the other functionalities and components Swing offers.
Next steps
As you know by now, this tutorial has a companion called "Intermediate Swing," which will build upon the knowledge you've gained in this introduction with an examination of the more difficult concepts you need to understand to make your application more polished and powerful. These concepts include:
• More advanced JTable features including the table properties, more sophisticated TableModel management, the TableRenderer to change the appearance of the JTable, and how to sort the table columns.
• Threading and how it fits into Swing. Because users cannot accept an interface that locks up when it hits the database, Swing must use separate threads for longer operations.
• How to create custom components. If you feel limited by what Swing offers you now, I'll show you how to create components that might look or behave differently than the ones built into Swing.
• Custom look-and-feels. I will discuss how to completely change the look and feel of an application through two methods: one creates a new custom UI look-and-feel and the other uses Synth, a skinnable look-and-feel.
Resources
Learn
• Download swing1.jar which contains the source code for the Hello World application and the flight reservation system.
• Don't miss the companion tutorial, "Intermediate Swing", which builds on the basics covered in this tutorial.
• Visit the Sun tutorial on Swing, which is a good follow-up to this tutorial and covers others components not explained here.
• Read the Swing Javadoc to see all the possible functions you can call on your Swing components.
• The JavaDesktop Web page offers the newest techniques in Swing.
• The Client-side Java programming discussion forum is another good place for assistance with Swing.
• developerWorks has published many articles on Swing:
• John Zukowski's Magic with Merlin series and Taming Tiger series cover Swing and related topics regularly.
• Michael Abernethy has penned several more advanced Swing-related articles, including "Ease Swing development with the TableModel Free framework" (developerWorks, October 2004), "Go state-of-the-art with IFrame" (developerWorks, March 2004), and "Advanced Synth" (developerWorks, February 2005).
• The developerWorks Open source zone has an entire section devoted to Eclipse development.
• You'll find articles about every aspect of Java programming, including all the concepts covered in this tutorial, in the developerWorks Java technology zone.
Get products and technologies
• eclipse.org is the official resource for the Eclipse development platform. Here you'll find downloads, articles, and discussion forums to help you use Eclipse like a pro.
Discuss
• Get involved in the developerWorks community by participating in developerWorks blogs.
About the author
Michael Abernethy

Michael Abernethy is currently the test team lead on the IBM WebSphere System Management team based in Austin, TX. Prior to this assignment, he was a Swing UI developer at multiple customer locations.
This program should accept an array of integers and an integer that indicates the number of elements in the array. The function should then determine the median of the array, using pointer notation. I did research to see if someone had a similar problem, but there was only one other post, from a person who was way more advanced and doing a lot more with her median function and other calculations; too confusing for my knowledge.
This is what i have created.
#include <iostream>
using namespace std;

// Function prototypes
void showArrayPtr(double*, int);
void showMedian(double, int);
void arrSelectSort(double*, int);

int main()
{
    double *NUMBERS;   // To dynamically allocate an array
    int numValues;     // To find out how many spaces to set aside

    cout << "How many numbers would you like to enter? ";
    cin >> numValues;

    // Dynamically allocate an array large enough to hold that many numbers
    NUMBERS = new double[numValues];
    if (NUMBERS == NULL)
        return 0;

    // Get the user's input
    cout << "Enter the numbers below.\n";
    for (int count = 0; count < numValues; count++)
    {
        cout << "Enter value #" << (count + 1) << ": ";
        cin >> NUMBERS[count];

        // Validate the input
        while (NUMBERS[count] <= 0)
        {
            cout << "Zero or negative numbers not allowed.\n";
            cout << "Enter value #" << (count + 1) << ": ";
            cin >> NUMBERS[count];
        }
    }

    // Sort the numbers in the array
    arrSelectSort(NUMBERS, numValues);

    // Display them in sorted order
    cout << "The numbers in ascending order are: \n";
    showArrayPtr(NUMBERS, numValues);

    return 0;
}

void arrSelectSort(double *array, int size)
{
    int startScan, minIndex;
    double minValue;

    for (startScan = 0; startScan < (size - 1); startScan++)
    {
        minIndex = startScan;
        minValue = array[startScan];
        for (int index = startScan + 1; index < size; index++)
        {
            if (array[index] < minValue)
            {
                minValue = array[index];
                minIndex = index;
            }
        }
        array[minIndex] = array[startScan];
        array[startScan] = minValue;
    }
}

void showArrayPtr(double *array, int size)
{
    for (int count = 0; count < size; count++)
        cout << array[count] << " ";
    cout << endl;
}

void showMedian(double *array, int size)
{
    arrSelectSort(double *NUMBERS, int numValues);
    int medIndex = 0;
    double median = 0;

    // If median is halfway between
    if ((array % 2) != 0)
    {
        medIndex = ((numValues - 1) / 2);
        median = array[medIndex];
    }
    else
    {
        medIndex = numValues / 2;
        median = ((array[medIndex] + array[medIndex + 1]) / 2);
    }

    // Display the median of the numbers
    cout << "The median of those numbers is: \n" << median << endl;
}
I think everything is ok, up until the void showMedian part or the last part that is where i get errors like double should be preseded by ), Function thoes not take 0 arguments, illegal left operator for %, syntax error ), and numValues undeclared identifier. Some honestly don't make sense to me because if i fix those syntax errors it will be for no reason, and the others, i thought i identified what every value is, apparently not. Could you guys assist as to pointing out what im doing wrong. Much appreciation. | https://www.daniweb.com/programming/software-development/threads/206257/find-median-using-pointers | CC-MAIN-2018-43 | refinedweb | 412 | 52.49 |
New firmware release, version 1.6.0.b1
Hello everyone,
A new firmware release is available. We spend quite some upgrading to the new IDF which has solved several bugs around WiFi and Bluetooth stability. We have focused mainly on bug fixes and stability improvements. These are the release notes:
- Add parameter to enable/disable WLAN power save mode. Bump version to 1.6.0.b1.
- Update all libraries to match the new IDF.
- Update to the new IDF. Move most SX1272 callbacks out of IRAM.
- Add descriptor to characteristics for notifications.
- Fix failing US915 tests related to channels mask MAC commands.
- Remove bind method for Sigfox sockets.
- Remove unused async parameter from the Sigfox cmd structure.
- Correct LoRa US915 remaining channel mask and use MP_ERROR consistently.
- Fix the channel counting algorithm for US-915 channels. Thanks to @cmsedore for his contribution!
If you don't have the latest updater tool, please make sure to get it from here: (Under Firmware Updates)
Cheers,
Daniel
- stacymarkel Banned last edited by
This post is deleted!
This post is deleted!
- crankshaft last edited by
@crankshaft we have seen your issues and we are working on fixing them, but it takes time, the ESP-IDF is in an unstable state which doesn't make things easy.
- crankshaft last edited by
@daniel - as you requested, I re-posted my issues / bugs on github duplicating the posts I already made on the this forum, and surprise, surprise, no replies on the github issues either.
I am very disappointed that despite posting my issues, nobody has taken a few minutes to verify / validate or even ask me any questions regarding these issues, I think that you really need to review your priorities.
I have spent the last week porting my code to ESP8266 and I face none of the issues that I posted on this forum regarding guru mediation crashes, intermittent loss of socket connections etc etc etc.
Remember that it takes a long time to develop a good reputation, and very little time to loose it !
:-(
@daniel said in New firmware release, version 1.6.0.b1:
@JF002 yes please do report it on Github with a piece of code to reproduce the issue easily (include your boot.py as well if possible). Thanks!
Here it is :)
@JF002 yes please do report it on Github with a piece of code to reproduce the issue easily (include your boot.py as well if possible). Thanks!
@Neo said in New firmware release, version 1.6.0.b1:
also still have this bug... I was hoping it would be resolved in this update (1.3.6), but it doesn't seem to be the case... Do you want us to fill a bug on github?
Thanks!
@Neo I agree it's super CRITICAL. It's on the top of our priorities at this moment. We'll do our best to resolve it ASAP.
Cheers,
Daniel
confirm the issue with bluetooth callback disabled in 1.6.0.b1 is solved in 1.6.3.b2.
thanks @daniel
@daniel said in New firmware release, version 1.6.0.b1:
could it be that your file system is corrupted from previous firmware versions? Could you try:
import os
os.mkfs('/flash')
to format the file system and start from scratch...
Cheers,
Daniel
formated and tested on 1.6.1.b1 and still file corruption :(
now i am testing 1.6.3
on the left you have oryginal file - on the right corrupted
UPDATE
flashed to 1.6.3
formated and error is same -
but!
It is not visible at start see steps:
- write e.g. test.py (copy normal onewire content twice to this file on computer)
- copy this file to wipy throught ftp
- download it and see if it is ok - yes it is ok
- disable power (totally)
- power on wipy
- connect to it by ftp and download test.py - now it is corrupted
UPDATE2
It looks like it happened not always
after few try storing file and restart i finally have file ok (strange)
I'm having big FTP stability problems after updating to this version. When I upload stuff it fails about half the time. Anyone else having problems with that? The problem is especially bad when uploading multiple files in one go.
@Neo On my LopY I have the same behavior. It is not necessary to iterate until end. Put gc.collect() after that sleep and .... that's it.
@Daniel I changed the LopY with new one + upgrade to 1.6.3b1 and I ran only your script. Now the output is the same as yours... .but it ALWAYS hanging after the mem displays 400. Here is the output:
Running
Soft resetting the LoPy
Memory 55280
Memory 54720
Memory 54160
Memory 53600
Memory 53040
Memory 52480
Memory 51920
Memory 51360
Memory 50800
Memory 50240
Memory 49680
Memory 49120
Memory 48560
Memory 48000
Memory 47440
Memory 46880
Memory 46320
Memory 45760
Memory 45200
Memory 44640
Memory 44080
Memory 43520
Memory 42960
Memory 42400
Memory 41840
Memory 41280
Memory 40720
Memory 40160
Memory 39600
Memory 39040
Memory 38480
Memory 37920
Memory 37360
Memory 36800
Memory 36240
Memory 35680
Memory 35120
Memory 34560
Memory 34000
Memory 33440
Memory 32880
Memory 32320
Memory 31760
Memory 31200
Memory 30640
Memory 30080
Memory 29520
Memory 28960
Memory 28400
Memory 27840
Memory 27280
Memory 26720
Memory 26160
Memory 25600
Memory 25040
Memory 24480
Memory 23920
Memory 23360
Memory 22800
Memory 22240
Memory 21680
Memory 21120
Memory 20560
Memory 20000
Memory 19440
Memory 18880
Memory 18320
Memory 17760
Memory 17200
Memory 16640
Memory 16080
Memory 15520
Memory 14960
Memory 14400
Memory 13840
Memory 13280
Memory 12720
Memory 12160
Memory 11600
Memory 11040
Memory 10480
Memory 9920
Memory 9360
Memory 8800
Memory 8240
Memory 7680
Memory 7120
Memory 6560
Memory 6000
Memory 5440
Memory 4880
Memory 4320
Memory 3760
Memory 3200
Memory 2640
Memory 2080
Memory 1520
Memory 960
Memory 400:
@daniel It is still hanging. There is diff between your output and my output.... and I know why. My script is embedded in the app importing some additional modules.... that why mem is starting from 20K and not 50K as yours. Some of these modules have some static data and some of them have thread locks and Wifi data. Let me check.... the circle radius it will be reduced if I will take it one by one.
@daniel It is strange that yesterday using this script the LoPy was hanging ... today I re powered and I didn't reproduce it anymore. It is true that yesterday I did a lot of operation before.
However the entire solution is still hanging and I need to understand why on 1.5.0 this is working and 1.6.x is not working.
@daniel said in New firmware release, version 1.6.0.b1:
bytes = self._socket.recv(512)
One hint if i can suggest something
what if you send something to this socket and this line recive some data?
bytes = self._socket.recv(512)
is this still the same result?
I can test it self only in the evening (and only on different socket - Wipy2) - but maybe result is different and you can find something interesting
If not then we can only wait for @Neo to provide some sample
@crankshaft good point
- crankshaft last edited by crankshaft
@daniel - can you make a new announcement when a new beta is released, the topic of this current thread is ... version 1.6.0.b1 however I think it is now version 1.6.3.b1 ?
You have to trawl and check every post to see if a new release has been made and it's not very productive.
Or at least update the header | https://forum.pycom.io/topic/664/new-firmware-release-version-1-6-0-b1/72 | CC-MAIN-2019-39 | refinedweb | 1,284 | 64.71 |
I posted this in response to a forum question the other day and thought I’d share the code here.
The following example shows you how you can clear a VideoDisplay control’s content using the
videoPlayer property in the
mx_internal namespace.
Full code after the jump.
Option 1: Call the
videoDisplay.mx_internal::videoPlayer.clear() method directly from our MXML file:
<?xml version="1.0" encoding="utf-8"?> <!-- --> <mx:Application xmlns: <mx:VideoDisplay <mx:Button <mx:Button </mx:Application>
View source is enabled in the following example.
Option 2: Extend the VideoDisplay class in ActionScript, and add a custom
clear() method which calls the
mx_internal::videoPlayer.clear() method:
package { import mx.controls.VideoDisplay; import mx.core.mx_internal; public class MyVideoDisplay extends VideoDisplay { public function MyVideoDisplay() { super(); } public function clear():void { pause(); mx_internal::videoPlayer.clear(); } } }
Then, in our MXML file, we simply add our MyVideoDisplay custom component and call our
clear() method directly:
<?xml version="1.0" encoding="utf-8"?> <!-- --> <mx:Application xmlns: <custom:MyVideoDisplay <mx:Button </mx:Application>
Option 3: Create a custom component in MXML that extends the VideoDisplay control. Again, we add our own custom
clear() method which calls the
mx_internal::videoPlayer.clear() method:
<?xml version="1.0" encoding="utf-8"?> <!-- --> <mx:VideoDisplay xmlns: <mx:Script> <![CDATA[ import mx.core.mx_internal; public function clear():void { pause(); mx_internal::videoPlayer.clear(); } ]]> </mx:Script> </mx:VideoDisplay>
20 thoughts on “Clearing the video on a Flex VideoDisplay control”
Why don’t you have to use
use namespace mx_internal;
or better yet, when do I have to use that statement.
thanks for this tutorial btw. i was trying to clear a videodisplay yesterday.
adam,
Excellent tip! I totally forgot about the
use namespacesolution. It really could apply to any of the solutions above, but since pption 1 is probably the most easily copy-paste-able:
Peter
very helped thank you
great video display control tips.
I may apply this tips on one of my site project.
thanks
This was a great find to be able to clear out the VideoDisplay object. However, I have found that when your video has been encoded with an alpha channel, this clear method does not work. Anyone have any ideas of why this method would not work with transparent video?
How to add volume control? Is it possible?
gustavo,
The VideoDisplay control has a
volumeproperty that you can set to a value between 0 and 1 which sets the volume.
I can probably post an example in a couple minutes: “Setting the volume on a VideoDisplay control in Flex”.
Peter
peterd,
Thanks, thanks so much!
But, there is also a bug in Flash Player:
Sometimes Video.clear not works as we expecting.
Hi Peter,
Great article. One question if you could help, please?
I am trying to play a flv file which is pushed from blazeDS server I am using Cairngorm/modelLocator to get the array of UploadVideo objects:
UploadVideo.as
public var videoName:String;
public var video:ByteArray;
in my mxml I declare a mx:List and assign the arrayCollection objects to my list’s dataProvider
videosOfStudents = ArrayCollection() // of UploadVideo objects
and this is my VideoDisplay component: (videoName is the name of the flv file)
When I click cntlDisp.play(); I get a following error:
(TypeError)#0
errorID = 1009
message = “Error #1009: Cannot access a property or method of a null object reference.”
but when I add a flv file locally it works fine.
Any idea?
Thanks.
Vardan
Hi Peter,
Sorry, some of my code did not appear properly because of the tags:
UploadVideo.as
public var videoName:String;
public var video:ByteArray;
videosOfStudents = ArrayCollection() // of UploadVideo objects
mx:List id=”cntlMovie” dataProvider=”{modelLocator.videosOfStudents}” width=”300″
mx:VideoDisplay id=”cntlDisp” source=”{cntlMovie.selectedItem.videoName}” width=”100%” height=”100%”
The output I get is:
(TypeError)#0
errorID = 1009
message = “Error #1009: Cannot access a property or method of a null object reference.”
Thanks again.
Vardan
This tip is not working perfectly, I recommend instead, in order to clear the video, to use the visible property (with a Fade effect is terrible!).
Moreover, use mx_internal stuff is not recommended since it can be remove for future versions of the framework.
——————————————————–
eBuildy, the web2.0 specialists!
Anyone know how to clear a videoDisplay that has been showing a live stream? .pause() does not seem to have any effect
@jerome,
If
clear()doesn’t work, maybe try setting a null source.
Peter
setting a null source does not seem to have any effect either. I’m using a Video attached to a VideoDisplay (something like VideoDisplay.addChild(Video)
ok nevermind I got clear() working. I had to use .clear() on the Video object that was attached to the VideoDisplay
Hi Jerome.
Thanx for the idea, but in my code don´t work and I don´t now why.
My code is this
private function catchCam():void{
this._vid=new Video();
this._vid.width=320;
this._vid.height=240;
this.vd_Cam.addChild(this._vid);
this.cam=Camera.getCamera(this.cbCam.selectedIndex.toString());
if (!this.cam) {
this._vid.attachCamera(null);
this._vid.clear();
}else{
this._vid.attachCamera(this.cam);
}
}
note: vd_Cam it´s a videoDisplay Object.
The problem is that when execute
this._vid.attachCamera(null);
this._vid.clear();
The camera never deatach of the videoDisplay Object and never clear the videoDisplay Object. By the way debugging the code i´m sure that this sentences are executed.
Why is this happening to me?, any idea?
Thanx
Why don’t extend a UIComponent class to create a custom Video Component .. According to my personal opinion VideoDisplay component of Flex 3.5 sucks .
is it supported on Flex3?
because I copied the 1st example and paste but I got an error, it is not working!!
sir,
help me out please..
i am developing video chatting..using asp.net .
i am using fluorine fx streaming server.
iam going to integrate flex with asp.net….
can you guide me for my project…
i want to knoe how to set up the server…using iis..
thank you…. | http://blog.flexexamples.com/2008/01/15/clearing-the-video-on-a-flex-videodisplay-control/ | CC-MAIN-2018-39 | refinedweb | 1,001 | 58.99 |
This article is for Scala programmers who are curious about the next features in Scala 3. In this, we are discussing particularly Match Type.
Pattern Matching is one of the most powerful construct tools in Scala. One can say it is a powerful form of switch statements of Java or C++. We will get to know what are the advancement done in Scala 3 from Scala 2.
The Problem Statement in Scala 3:
Let’s assume you are working on a library for standard data types(e. g. Int, String, Lists), and you want to write a piece of code that extracts the last constituent part of a bigger value:
- assuming Big-Int is made of digits, the last part is the last digit
- the last part of a String is a Char
- the last part of a list is the element on its last position
def lastDigitOf(number: BigInt): Int = (number % 10).toInt def lastCharOf(string: String): Char = if string.isEmpty then throw new NoSuchElementException else string.charAt(string.length - 1) def lastElemOf[T](list: List[T]): T = if list.isEmpty then throw new NoSuchElementException else list.last
Now one thing that is you notice that all these functions are doing a similar task to extracting the last element. So why not reduce this API into one large single API which does this for us. Besides that you would want to think about the future, perhaps extending this same logic to completely unrelated types as well.
Can you Unify these methods in Scala 2?
No, Scala 2 cannot take these methods which will have a single signature but will execute different pieces of code. But the good news is, it is possible in Scala 3
Scala 3:
In Scala 3, we can define a type member which can take different forms; i.e. reduce to different concrete types; depending on the type argument we’re passing:
type ConstituentPartOf[T] = T match case BigInt => Int case String => Char case List[t] => t
This is called a match type. Think of it like a pattern match done on types, by the compiler. The following expressions would all be valid:
val aNumber: ConstituentPartOf[BigInt] = 2 val aCharacter: ConstituentPartOf[String] = 'a' val anElement: ConstituentPartOf[List[String]] = "Scala"
Now let’s see how match types can help solve our first-world problem. Because all the previous methods have the meaning of “extract the last part of a bigger thing”, we can use the match type we’ve just created to write the following all-powerful API:
def lastComponentOf[T](thing: T): ConstituentPartOf[T] = thing match case b: BigInt => (b % 10).toInt case s: String => if (s.isEmpty) throw new NoSuchElementException else s.charAt(s.length - 1) case l: List[_] => if (l.isEmpty) throw new NoSuchElementException else l.last
This method, in theory, can work with any type for which the relationship between T and ConstituentPartOf[T] can be successfully established by the compiler. So if we could implement this method, we could simply use it on all types we care about in the same way:
val lastDigit = lastComponentOf(BigInt(53728573)) // 3 val lastChar = lastComponentOf("Scala") // 'a' val lastElement = lastComponentOf((1 to 10).toList) // 10
Why use Match Type?
One of the question that comes to our mind is why don’t we go with regular inheritance based object oriented programming. Because its easy to code and will do the same functionality.
Because if you write code against an interface, e.g.
you lose the type safety of your API, because the real instance is returned at runtime. At the same time, the returned types must all be related, since they must all derive from a mother-trait.
Also the lastComponentOf method allows the compiler to be flexible in terms of the returned type, depending on the type definition
Conclusion:
We learned about match types, which are able to solve a very flexible API unification problem. I’m sure some of you will probably dispute the seriousness of the problem to begin with, but it’s a powerful tool to have in your type-level arsenal, when you’re defining your own APIs or libraries.
References: | https://blog.knoldus.com/scala-3-introduction-to-match-types/ | CC-MAIN-2021-43 | refinedweb | 692 | 69.82 |
C++ Tutorial
C++ Flow Control
Math functions in C++
Math functions are some established library functions related to mathematical calculations. All the C++ math functions are defined under math.h header file. C++ provides a strong library function and math functions are some of them.
We need to include math.h header file to work with math functions. Now let’s move towards the math function and why they are use inside our program.
Trigonometric math functions
sin(x) — sin(x) determine the sine of angle x.
cos(x) — cos(x) can calculate the cosine of x.
tan(x) — tan(x) is used to determine tangent of x.
asin(x) — asin(x) determine the inverse sine of x.
acos(x) — acos(x) can calculate the inverse cosine of x.
atan(x) — atan(x) can calculate the inverse tangent of x.
Power related math functions
sqrt(x) — sqrt(x) is used to calculate square root of x.
pow(x, y) — pow(x, y) is used to calculate xy.
cbrt(x) — cbrt(x) is used to calculate the cubic root of x.
hypot(x, y) — hypot(x, y) determines the hypotenuse of a right angled triangle
Max, min and difference related math functions
fmax(x, y) — fmax(x, y) finds the smallest number between x and y.
fmin(x, y) — fmin(x, y) finds the minimum number between x and y.
fdim(x, y) — fdim(x, y) determine the positive difference of x and y.
Hyperbolic math functions
sinh(x) — sinh(x) finds the hyperbolic sine of x.
cosh(x) — cosh(x) determines the hyperbolic cosine of x.
tanh(x) — tanh(x) can determine the hyperbolic tangent of x.
asinh(x) — asinh(x) can determine the arc hyperbolic sine of x.
acosh(x) — acosh(x) can find the arc hyperbolic cosine of x.
atanh(x) — atanh(x) calculates the arc hyperbolic tangent of x.
Integer related math functions
floor(x) — floor(x) returns the largest rounds value less than or equal x.
ceil(x) — ceil(x) returns the smallest rounds value greater than or equal x.
round(x) — round(x) can round the value of x in natural way.
fmod(x, y) — fmod(x, y) determines the reminder of x / y.
reminder(x, y) — reminder(x, y) calculate the reminder of x / y.
remquo(x, y) — remquo(x, y) calculates both the reminder and quotient of x /y.
fabs(x) — fabs(x) is used to find absolute value of x.
abs(x) — abs(x) is also used to find the absolute value of x.
Exponential math functions
exp(x) — exp(x) is used to find ex.
log(x) — log(x) can determine the logarithm of x.
log10(x) — log10(x) is also determine the logarithm of x.
logb(x) — logb(x) is also find the normal logarithm of x.
modf() — modf() function is used to break a number into an integer and a fractional part.
exp2(x) — exp2(x) can determine the exponential of x of base 2.
log2(x) — log2(x) is used to fine the logarithm of x of base 2.
expm1(x) — expm1(x) can determine ex-1.
log1p(x) — log1p(x) is used to find logarithm of (x + 1).
Macro math functions
isinf(x) — isinf(x) is used to check whether x is infinite or not.
isfinite(x) — isfinite(x) is used to check that x is finite or not.
isnan(x) — isnan(x) is used to check x is nan or not.
isnormal(x) — isnormal(x) checks that x is normal or not.
signbit(x) — signbit(x) checks the sign of x is negative or not.
isgreater(x, y) — isgreater(x, y) is used to check whether x is greater than y or not.
isless(x, y) — isless(x, y) is used to check whether x is less than y or not.
isgreaterequal(x, y) — isgreaterequal(x, y) checks whether x is greater than or equal to y or not.
islessequal(x, y) — islessequal(x, y) checks whether x is less than or equal y or not.
tgamma(x) — tgamma(x) is used to find the gamma functions value of x.
lgamma(x) — lgamma(x) is used to calculate the logarithm of gamma function of value x.
C++ program using math functions
// c++ program using math functions #include <iostream> #include <math.h> using namespace std; int main(){ int x = 90; int y = 20; cout << sin(x) << endl; // sine of x cout << cos(x) << endl; // cosine of x cout << tan(x) << endl; // tangent of x cout << pow(2, 3) << endl; // power 3 of 2 cout << sqrt(25) << endl; // square root of 25 = 5 cout << fmax(x, y) << endl; // which is maximum between x and y cout << fmin(x, y) << endl; // which is minimum between x and y cout << floor(3.8233) << endl; cout << ceil(3.8233) << endl; cout << round(3.8233) << endl; return 0; }
Output of this program
| https://worldtechjournal.com/cpp-tutorial/all-math-functions-in-cpp/ | CC-MAIN-2022-40 | refinedweb | 810 | 76.11 |
Overview components
All Delphi VCL and C# .NET components which can be found on this page are open source and written by myself. The older .NET components and classes were developed with Visual Studio 2005 and 2008 and they work with .NET 2.0, 3.0 and 3.5. The old Delphi components work in Delphi 4 to 2009.
The ExcelExport component, which is also updated and tested in the latest Delphi XE and 10.2 Tokyo versions, can also be registered to support its further development.
The .NET NuGet packages with LINQ to Excel, Outlook and OneNote are also up-to-date and available for the latest .NET version (4.61, VS2017).
LINQ to Outlook, OneNote and Excel
2.0 (C#, .NET 4.6)
2.0 (C#, .NET 4.6)
The ScipBe.Common.Office.
Version 2.0 (June 2017) - open source - C# & NET 4.6
Entity Framework extensions
1.0 (C#, .NET 3.5)
1.0 (C#, .NET 3.5)
The ScipBe.Common.Geocoding namespace contains the a set of extension methods which can be used to extend the standard ADO.NET Entity Framework classes (ObjectQuery, EntityObject, ObjectStateManager, ...).
Version 1.0 (January 2009) - open source - C# & NET 3.5
Google Geocoder
1.1 (C#, .NET 3.5)
1.1 (C#, .NET 3.5)
The ScipBe.Common.Geocoding namespace contains the GoogleGeoCoder class. This class uses the Google Maps Geocoding HTTP REST service to retrieve geographical data like the latitude and longitude of a given address.
Version 1.1 (December 2008) - open source - C# & NET 3.5
FontComboBox and FontManager
1.1 (C#, .NET 2.0)
1.1 (C#, .NET 2.0)
This library contains 3 .NET 2.0 components for managing and displaying fonts. The FontComboBox displays a list of all installed Windows fonts and optionally the most recently used fonts and the predefined fonts of a theme. The FontManager can be used to manage all these fonts.
Version 1.1 (April 2007) - open source - C# & NET 2.0
FileDrop
1.1 (C#, .NET 2.0)
1.1 (C#, .NET 2.0)
This component makes it easy to implement dragging and dropping files from the Windows Explorer to your .NET application. Just link a control or a form to this FileDrop component and set the list of allowed file extensions.
Version 1.1 (December 2006) - open source - C# & NET 2.0
TscExcelExport
4.3 (Delphi VCL)
4.3 (Delphi VCL)
This TscExcelExport component is an advanced, powerful but easy to use component which enables you to export all records of a dataset from Borland/Codegear/Embarcadero Delphi to Microsoft Excel. Many features are provided to change the layout, use conditional formatting, to add totals, to create groups, to set a filter, ... The component works in Delphi 5, 6, 7, 2006, 2007, 2009, 2010, XE, XE2, XE3, XE4, XE5, XE6, XE7, XE8, 10 Seattle, 10.1 Berlin and 10.2 Tokyo (32 and 64 bit) and it supports all Excel versions from 97 to 2016.
Version 4.29 (September 2017) - freeware for non-commercial use - Delphi VCL
TscFontCombobox
1.1 (Delphi VCL)
1.1 .
Version 1.1 (December 2004) - open source - Delphi VCL
TscDataList
1.1 (Delphi VCL)
1.1 (Delphi VCL)
Delphi collection component to store data (in design time) like SQL statements or values (Names and Values property of TStrings). The Find methods can be used the find items very quickly.
Version 1.1 (November 2002) - open source - Delphi VCL
TscFileDrop
1.1 (Delphi VCL)
1.1 (Delphi VCL)
By adding this small Delphi component to your form, you can accept the files which are dropped from Windows Explorer to your form. You can also specify which file extensions are allowed.
Version 1.1 (March 2008) - open source - Delphi VCL
TscSystemInfo
1.6 (Delphi VCL)
1.6 (Delphi VCL)
With this component you can retrieve the most important system information like the Windows version, the username, the name of the PC, the speed of the CPU, the available memory, Delphi and Office version, ...
Version 1.6 (December 2007) - open source - Delphi VCL
TscScrollingText
1.2 (Delphi VCL)
1.2 (Delphi VCL)
This component shows scrolling text from bottom to top or from top to bottom. While the text is scrolling new lines can be added. This component is derived from a TGraphicControl.
Version 1.2 (January 2002) - open source - Delphi VCL
TscLinkLabel
1.1 (Delphi VCL)
1.1 (Delphi VCL)
A label which highlights when you move the mouse. You can connect a file, email address or webpage to this component.
Version 1.1 (December 1999) - open source - Delphi VCL | http://www.scip.be/index.php?Page=Components&Lang=EN | CC-MAIN-2018-34 | refinedweb | 762 | 68.67 |
05 October 2012 05:29 [Source: ICIS news]
SINGAPORE (ICIS)--A shortage of truck drivers has increased land transport rates from United Arab Emirates (UAE) to ?xml:namespace>
The cost of trucks lifting from UAE to Jordan for polyethylene (PE) and polypropylene (PP) resins are typically priced at $60.00-70.00/tonne (€46.20-53.90/tonne), but the shortage of drivers resulted in land transport costs to surge to as high as $145/tonne, they said.
Most truck drivers for this UAE-Jordan route are Syrians, market sources said.
“We do not know why there is a sudden disappearance of these drivers, but this is really pushing up the offers for resins,” a Jordanian trader said.
“The political crisis in
Late on 4 October, offers for high density PE (HDPE) film were at $1,490-1,550/tonne
“The jump in land freights is just part of the many bullish factors driving resins prices up,” a separate GCC-based polyolefins maker said.
Most players said they are not clear how long this shortage of drivers will last.
“We are really unsure whether our parcels will reach on time, so we just have to keep buying to maintain our supply chain,” a Jordanian converter | http://www.icis.com/Articles/2012/10/05/9601266/shortage-of-truck-drivers-causes-land-freights-to-jordan-to.html | CC-MAIN-2014-10 | refinedweb | 205 | 64.64 |
Nearly all the area of a high-dimensional sphere is near the equator. And by symmetry, it doesn’t matter which equator you take. Draw any great circle and nearly all of the area will be near that circle. This is the canonical example of “concentration of measure.”
What exactly do we mean by “nearly all the area” and “near the equator”? You get to decide..
This result is hard to imagine. Maybe a simulation will help make it more believable.
In the simulation below, we take as our “north pole” the point (1, 0, 0, 0, …, 0). We could pick any unit vector, but this choice is convenient. Our equator is the set of points orthogonal to the pole, i.e. that have first coordinate equal to zero. We draw points randomly from the sphere, compute their latitude (i.e. angle from the equator), and make a histogram of the results.
The area of our planet isn’t particularly concentrated near the equator.
But as we increase the dimension, we see more and more of the simulation points are near the equator.
Here’s the code that produced the graphs.
from scipy.stats import norm from math import sqrt, pi, acos, degrees import matplotlib.pyplot as plt def pt_on_sphere(n): # Return random point on unit sphere in R^n. # Generate n standard normals and normalize length. x = norm.rvs(0, 1, n) length = sqrt(sum(x**2)) return x/length def latitude(x): # Latitude relative to plane with first coordinate zero. angle_to_pole = acos(x[0]) # in radians latitude_from_equator = 0.5*pi - angle_to_pole return degrees( latitude_from_equator ) N = 1000 # number of samples for n in [3, 30, 300, 3000]: # dimension of R^n latitudes = [latitude(pt_on_sphere(n)) for _ in range(N)] plt.hist(latitudes, bins=int(sqrt(N))) plt.xlabel("Latitude in degrees from equator") plt.title("Sphere in dimension {}".format(n)) plt.xlim((-90, 90)) plt.show()
Not only is most of the area near the equator, the amount of area outside a band around the equator decreases very rapidly as you move away from the band. You can see that from the histograms above. They look like a normal (Gaussian) distribution, and in fact we can make that more precise.
If A is a band around the equator containing at least half the area, then the proportion of the area a distance r or greater from A is bound by exp( -(n-1)r² ). And in fact, this holds for any set A containing at least half the area; it doesn’t have to be a band around the equator, just any set of large measure.
Related post: Willie Sutton and the multivariate normal distribution
7 thoughts on “Nearly all the area in a high-dimensional sphere is near the equator”
It wasn’t obvious to me why normalizing a vector of N+1 normal samples ends up being uniformly distributed over S^n. Might make an interesting post?
By “area”, we are talking about the measure of codimension 1, right? e.g. length in case of the 1-sphere and volume in the case of the 3-sphere.
This would be in contrast to, say, embedding a 2-sphere in higher dimensional space and asking about area.
@Jonathan: It’s not obvious if you think about taking N+1 samples from univariate normals, but it makes more sense if you think about taking one sample from a multivariate normal of dimension N+1. They turn out to be the same, thanks to the magic of Gaussian distributions. But the latter is obviously spherically symmetric since the density only depends on ||x||.
@roice: You could state everything in terms of intrinsic geometry — Riemann metrics and all that — not considering how the sphere is embedded in an ambient space. It doesn’t change the results, but the details are more complicated.
My question was more about whether we were talking about an (n-1)-dimensional “area” or 2-dimensional area. Rereading and looking at your code more closely, I think the answer is (n-1)-dimensional area.
>.
Here’s an easy proof of this using Markov’s inequality. First note that it’s enough to bound the orthogonal distance from the hyperplane describing the great circle, since the mapping from that distance to angle doesn’t depend on dimension.
Let u_1 be a unit normal vector for the hyperplane describing the great circle, and extend it to an orthonormal basis u_1,…,u_n. Let X be the random vector on the sphere.
Because X is on the sphere, we have (u_1^T X)^2 + … + (u_n^T X)^2 = 1. Taking expectations and using symmetry, E[(u_1^T X)^2] = 1/n.
Using Markov’s inequality, P[|u_1^T X| >= a] = P[(u_1^T X)^2 >= a^2] <= (1/n)/(a^2) = 1 / (n a^2). So for fixed a, we can bring this probability arbitrarily small be increasing n.
@John – Lovely argument from spherical symmetry. Thanks.
a similar observation, about those N-variate normal distributions, is that almost all of them are close to the sphere of size √N …. which might be the central limit theorem applied to Gamma distributions. Funny corollary: most Huge Data Sets will have lots of spurious correlations. | https://www.johndcook.com/blog/2017/07/13/concentration_of_measure/ | CC-MAIN-2017-43 | refinedweb | 872 | 65.62 |
Invoke package init methods at compile time
WWW:
To install the port: cd /usr/ports/devel/p5-self-init/ && make install cleanTo add the package: pkg install p5-self-init
cd /usr/ports/devel/p5-self-init/ && make install clean
pkg install p5-self-init
PKGNAME: p5-self-init
distinfo:
SHA256 (self-init-0.01.tar.gz) = a58e6d73e4d68d55e8a56daddd725694bb3065371d183b99179af0a1180cdeff
SIZE (self-init-0.01.tar.gz) = 24006 TEST_DEPENDS
- Replace ../../authors in MASTER_SITE_SUBDIR with CPAN:CPANID macro.
See for details.
- Change maintainership from ports@ to perl@ for ports in this changeset.
- Remove MD5 checksum
- Cleaning MD5 in perl@'s ports.
Approved by: erwin@ (portmgr)
Fix WWW in pkg-descr to<MODULE> for unification.
No functional changes.
Sponsored by: p5 namespace
- only 13% of the p5- ports embed @comment $FreeBSD$:
so standarize and remove it
With Hat: perl@
- /mach/ -> /%%PERL_ARCH%%/
With Hat: perl@
Reassign ports from andrey@kostenko.name to perl@ due to lack of time.
Hat: portmgr
Invoke package init methods at compile time
WWW:
PR: 136707
Submitted by: Andrey Kostenko <gugu@veda.park.rambler.ru>
Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD
19 vulnerabilities affecting 107 ports have been reported in the past 14 days
* - modified, not new
All vulnerabilities | http://www.freshports.org/devel/p5-self-init/ | CC-MAIN-2017-43 | refinedweb | 202 | 50.02 |
Table of Contents
- What Are the Best API Clients?
- What is an API client?
- The Best API Clients
- 1. RapidAPI Design by Paw
- 2. Postman
- 3. Advanced REST Client
- 4. Nightingale
- 5. ReadyAPI (SoapUI Pro)
- 6. Runscope
- 7. Rest-Assured
- 8. Insomnia
- Summary: Paw vs Postman vs Advanced REST Client vs Nightingale vs ReadyAPI vs Runscope
- FAQs
- What is API?
- What are the general features of API testing tools?
- What is the importance of API testing in agile development?
- What are HTTP clients?
- Footnotes
What Are the Best API Clients?
With so many options, it’s easy to find yourself with a paradox of choice when choosing software. Indeed, this is the case with API clients.
Furthermore, best is a relative term. What you are looking for—knowing my search experience—is the most value for the time and money you’re willing to invest. And we derive value differently. Therefore, I hope to list the best API clients and discuss their features and benefits. And other relevant information.
By the end of the article, you’ll have a filtered view of API client options that either solves your search problem or opens the door to more questions that you need to answer before finding the right solution.
What is an API client?
API stands for Application Programming Interface. So, for example, let’s imagine a database running on a server (physical or virtual machine), and the database doesn’t have an API. Consequently, nothing outside of that server communicates with the database.
We decide to build an API for that database to access the data, so other servers and applications can send requests and reach the data.
The other servers and applications that use our API to access the database are referred to as clients. So you could say that our database is servicing their requests, making them the server’s clients.
What’s an API Client Library?
APIs use protocols to communicate with API clients, which makes network requests standardized and less complicated. The most well-known protocol, HTTP, shuttles data between the server and client, allowing them to transfer data effectively over the network using the API.
Employing another example, we can represent an HTTP API request for Javascript and Python with the code below:
GET Parameters: id (path): str. User's ID
The representation is the same because the network request uses HTTP regardless of the programming language.
Conversely, to create an HTTP request in Javascript and Python, the code could be:
Python:
import requests user_id = 123 url = f'{user_id}" response = requests.request("GET", url) print(response.text)
Javascript:
fetch("", { "method": "GET", }) .then(response => { console.log(response); }) .catch(err => { console.error(err); });
Notice that the code is different. This is because distinct programming languages, like Javascript and Python use varying syntax and libraries. Subsequently, API client libraries cater to specific programming languages.
Difference Between API Client and Client?
API clients have come to mean a software tool that sends HTTP requests. APIs use HTTP to communicate with different types of clients on the network. Therefore, they are agnostic to programming languages because—as we detailed above—programming languages use different syntax but generate the same kind of HTTP request.
In review:
- APIs allow client applications to retrieve data from servers.
- Client applications and servers communicate using HTTP requests.
- Client applications use language-specific code and syntax to generate HTTP requests.
- The different libraries that programming languages use to generate HTTP requests for a given API are API client libraries.
- API clients are software tools that generate and send HTTP requests to an API without biasing a specific programming language.
After understanding the API landscape and the statements above, you could start to intuit the benefits of using API client software.
Benefits of Using an API Client
Without the need to develop a full range of tests for each programming language, developers can focus on testing their API with HTTP requests. API clients are advantageous because they help us, among other things, generate HTTP requests for our API. A list of benefits may include:
Testing
API clients are also known as API testing tools because they are primarily used for testing the response data of an API route. These testing tools help teams perform functional API testing, load testing and help developers with exploratory testing.
API clients have become extremely useful with automated testing, monitoring automated tests, and implementing API test suites into CI/CD pipelines.
Documentation
API clients can automatically generate documentation based on specifications or saved API routes. Automatically updating documentation is a significant benefit to developers because the documentation updates when the code does.
Client Libraries
Some client tools can generate API client libraries for different programming languages and provide sample code. Again, this is a benefit to developers who always seem to be in a hurry.
Mock Server
Users can import API specifications directly into clients. If enough information is provided, some clients can create and host mock servers to test APIs before they are built. This can help speed up development time.
The key to most of the benefits of API client tools is automation. They can automatically generate tests, requests, documentation, and libraries, bringing substantial time savings to teams working on maintaining or building APIs.
Now that we have an understanding of what an API client is and the benefits let’s take a look at the best API clients available.
The Best API Clients
1. RapidAPI Design by Paw
RapidAPI Design by Paw is one of the latest products added to RapidAPI’s suite. Starting with their API marketplace, RapidAPI now has team collaboration, an enterprise hub, API testing, and an API client after they acquired Paw.
The acquisition allowed the cloud platform to extend its API management services to include API design features. Furthermore, Paw’s previous desktop application only supported macOS. Now, the desktop application is available on macOS, Windows, Linux, and the cloud.
Additionally, the desktop application and the cloud service sync so APIs, environments, and groups are shared. The cloud sync feature extends to collaboration with the ability for individuals and teams to support API versioning and branching. Updates are driven by push and pull requests in real-time.
API Client Features
RapidAPI Design by Paw has the features that most API clients have, and users expect. To start, you can import API specifications in RAML, API Blueprint, Postman Collection, etc.
You can build API requests inside an HTTP testing sandbox, where you can:
- Use custom HTTP methods
- Send requests in URL-encoded, multi-part, JSON, XML, or file/binary format
- Save dynamic values
- Write JS snippets or install Javascript-based extensions.
Authentication
The client has the ability to handle requests using a wide range of authentication or authorization methods:
- OAuth1 and OAuth2
- Basic auth
- Digest Auth
- Hawk
- AWS Sig v4
- JWT
- Cookie Auth
- Custom Protocols
If your API is concerned about authentication and data security, the software use end-to-end encryption for data handling.
2. Postman
Postman is another popular API client that’s widely used. However, most users interact with Postman through the desktop application that syncs environments, APIs, documentation, and more through Postman Cloud.
Users can import REST, GraphQL, or SOAP API schemas in RAML, WADL, OpenAPI, or GraphQL format.
With Postman, the user can generate (based on the API schema):
- Collections
- Mock servers
- API documentation
The Postman client supports common request modifications, including URL-encoded, multipart/form-data, raw body editing, and binary data. Furthermore, the user may utilize:
- dynamic, scoped, and session variables
- custom JS snippets
- custom headers
- custom route assertions
Similar to RapidAPI Design by Paw, Postman has a range of options for API client security:
- Bearer token
- Basic Auth
- Digest Auth
- OAuth1 and OAuth2
- Hawk
- AWS Signature
- NTLM
- Akamai Edgegrid
3. Advanced REST Client
“…clean user interface helps you focus on your API and not tooling”
Install, Advanced Rest Client Documentation
Advanced REST Client (ARC) was the first API client I personally ever used. I have since moved on, but it was a simple API client that helped get me started as a beginner.
The quote above is a justifiable representation of what you get when using ARC. It’s a straightforward interface with basic features:
- Documentation for RAML, OAS, and other common API specification languages
- Dynamic variables
- Environments
- Code snippet generation
However, its strength is also its weakness. It’s simple and free, but you may be limited (and especially your team) on the features. For example, there are no syncing, branching, or collaboration features that are helpful in other clients.
Finally, as a consequence, the tool is compatible with Mulesoft’s Anypoint Platform.
4. Nightingale
Nightingale is a Windows 10 focused API tool. You can send and inspect request and response data while using dynamic variables, environments, and workspaces.
Similar to ARC, the tool is free and available through the Windows App store. You can view the software on Github, but it is not an open-sourced app. In contrast to ARC, the tool allows users to generate mock servers.
Finally, you can batch multiple requests together to send API calls in collections.
5. ReadyAPI (SoapUI Pro)
ReadyAPI, formerly SoapUI Pro, is an API client that supports SOAP, REST, and GraphQL APIs. The ReadyAPI cloud service is a paid version of SoapUI, a popular open-source tool for developers maintained by Smartbear. The free version, SoapUI, supports basic API testing, mocking, and extending features like Java code generation from WADL files.
However, the paid version of ReadyAPI has many other features for testing, performance, and automation.
One of the noteworthy categories with increased features is testing. With a pro version, you can use assertions, inspect test logs, create reports, integrate with Git, run security scans, run automatic functional and security tests, plus more.
The tool can generate reports for analytics, monitoring, and of course, integrates with other Smartbear software products.
These features, among many others, are part of the Pro plan, but the pro plan is broken into three modules, each with its own pricing.
There is an API Test, Performance, and Virtualization Module. Currently, the pricing is broken down as:
- Test Module: $685/yr.
- Performance Module: $5,374/yr.
- Virtualization Module: $1,060/yr
Additionally, the modules may be bundled.
6. Runscope
Runscope is not very similar to other API client tools that we’ve discussed so far. Although the tool focuses on monitoring, it shares many of the same features as other tools.
With Runscope, you can set up API monitoring and make assertions for JSON and XML data. Also, you can test with dynamic variables validating HTTP headers, status codes, and response bodies.
The platform can write custom Javascript snippets and integrations that include Jenkins, Slack, HipChat, and PagerDuty.
Runscope has a series of pricing plans starting at $79/mo. This plan covers 250k API requests and allows for up to 5 users.
7. Rest-Assured
Rest-Assured is API validation and testing software for REST services in Java. The software is different in many ways from other client applications we have discussed so far. It’s written, in code, instead of being set up through a user interface or configured through a cloud platform.
The software is free, and the source code (along with testing examples) is available on Github.
On Github, the project is described as:
REST Assured is a Java DSL for simplifying testing of REST based services built on top of HTTP Builder. It supports POST, GET, PUT, DELETE, OPTIONS, PATCH and HEAD requests and can be used to validate and verify the response of these requests.
Usage, Rest-Assured, Github
You can use the documentation to discover how to:
- Extract response values from JSON, XML, Headers, cookies, and status codes.
- Specify request data, parameters, cookies, headers, etc.
- Configure authentication for Basic, Digest, Form, and OAuth
- Set up logging, proxies
Furthermore, the documentation details common integrations with Spring, Scala, and Kotlin.
8. Insomnia
Capping off the list of API clients is Insomnia. The client supports requests for REST, SOAP, GraphQL, and gRPC.
You can import API data in the supported formats: Insomnia, Postman v2, HAR, OpenAPI, Swagger, WSDL, and Curl.
Similar to RapidAPI Design by Paw and Postman, Insomnia supports version control sync and team collaboration.
With Insomnia, users can set up environment variables, chain requests, and dynamic variables. This allows users to set up test suites, assert JSON payloads, or assert other response data.
Inso (CLI)
Inso is a CLI tool to accompany Insomnia. You can use Inso to:
- lint the API spec
- Run tests
- Generate configs
- Export API spec
Authentication
Insomnia supports many different types of authentication methods for sending requests to an API.
- Basic Auth
- Digest Auth
- OAuth1 and OAuth2
- Bearer Token
- Microsoft NTLM
- AWS IAM v4
.netrcFile
- Hawk
Insomnia is open-source software that is available to download on Mac, Windows, and Linux. Therefore, installation and usage of the software are free.
Summary: Paw vs Postman vs Advanced REST Client vs Nightingale vs ReadyAPI vs Runscope
Below is a summary table with data from the API clients listed above1.
N/F = Not found
FAQs
What is API?
An API, Application Programming Interface, allows client applications to communicate with a server. Similar to how a user interacts with an application through a user interface, an API is for computer programs that don't need to see data exchange.
What are the general features of API testing tools?
API testing tools have features for testing, analytics, monitoring, documentation, and versioning.
What is the importance of API testing in agile development?
API testing is important because it helps developers refactor code and understand when requirements are met. Furthermore, API testing can help teams perform regression and functional testing for their application. So, as requirements change, they can use API testing to track their API's performance or functional impacts.
What are HTTP clients?
HTTP clients use the HyperText Transfer Protocol to exchange data with other applications or servers. Therefore, the terms HTTP clients and API clients are often used interchangeably.
Footnotes
1 All data compiled from software website or personal use | https://rapidapi.com/blog/best-api-clients/ | CC-MAIN-2021-31 | refinedweb | 2,343 | 55.64 |
Red Hat Bugzilla – Bug 163083
Statically linked C++ program using pthreads will segfault
Last modified: 2007-11-30 17:11:09 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.8) Gecko/20050511
Description of problem:
A C++ program that uses pthreads and the STL (or string) will segfault. If no STL header is included, the program does not crash.
This is apparently due to certain pthreads functions not being included in the output executable. This bug may duplicate #115157, and I apologize if so, but hopefully the included test case will be useful.
Version-Release number of selected component (if applicable):
gcc-4.0.0-8
How reproducible:
Always
Steps to Reproduce:
foo.cpp:
#include <pthread.h>
#include <string> // or <list> or <vector> etc
int main()
{
pthread_mutex_t lock;
pthread_mutex_init(&lock, NULL);
return 0;
}
1. Compile as follows: g++ -g -static foo.cpp -o foo -lpthread
2. Run it in gdb.
3. Note the address of pthread_mutex_init is 0.
Actual Results: GNU gdb Red Hat Linux (6.3.0.0-1.21rh)
This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db library "/lib/libthread_db.so.1".
(gdb) run
Starting program: /home/jason/t/foo
Reading symbols from shared object read from target memory...done.
Loaded system supplied DSO at 0xad7000
Program received signal SIGSEGV, Segmentation fault.
0x00000000 in ?? ()
(gdb) where
#0 0x00000000 in ?? ()
#1 0x08048232 in main () at foo.cpp:7
Expected Results: No segfault
Additional info:
The suggestion in #115157 to forcibly link in all of libpthread.a is a valid workaround.
Linking dynamically also avoids the problem, however this is not possible in certain situations (e.g. creating a profiled executable requires static linking to be useful).
Yes, this is a dup of #115157.
*** This bug has been marked as a duplicate of 115157 *** | https://bugzilla.redhat.com/show_bug.cgi?id=163083 | CC-MAIN-2017-09 | refinedweb | 310 | 61.73 |
I’ve been hearing more and more about GraphQL in recent years. As a developer primarily focused on the Microsoft stack, my response typically starts and ends with “oh, that sounds cool.” I mean, what options do we have in ASP.NET Core? This article will discuss what GraphQL is, its differences from REST, and how to create a GraphQL endpoint with ASP.NET Core.
Challenges with Traditional APIs
Let’s start with a common scenario—you’re in charge of building out a brand new web application. It won’t see a ton of usage, but you still want to impress your boss and develop a solid foundation for the future. To create a fluid UI, so you decide on a popular SPA framework for the frontend. You aren’t too interested in the details, something like Angular or React will do. For data access, you settle on building out a RESTful API. You’re happy with your decision, thinking REST APIs are flexible and will suit your needs as the application grows.
You build out the application, which consists primarily of CRUD views, and your boss LOVES it. In fact, your boss wants you to add some more functionality to the application. As you get the new requirements, you notice a few additional use-cases. There are now scenarios where you only need a subset of the data from the existing REST endpoints. In other cases, you need more fields than is present in the current API. This circumstance brings you to a decision point.
- Should you modify your API to return a union of the data that is required by all consumers?
- Should you create new APIs that contain the exact information you need in each scenario?
- Should you make a set of parameters to determine which of the fields to return?
The first option will result in bloated APIs. The other two do not feel very RESTful. You continually find yourself asking, “Should I modify my existing APIs or create new ones for each use-case?” Each time, you make your decision and move on, but ultimately you begin to wonder if relying on a RESTful API was the best choice in this case.
I have seen this situation occur many times in many different application architectures. In rich SPA frameworks, most of the application logic resides on the clientside. Does REST do the best job of bridging the gap between the clientside and the serverside in these client-heavy architectures?
What is GraphQL?
Rather than creating explicit APIs with defined request and responses,
GraphQL creates a schema that defines the data with which clients can query and interact. GraphQL is an API specification that Facebook publicly released in 2015 as an alternative to traditional RESTful APIs. Rather than encapsulating query logic behind the API, GraphQL allows clients to request and filter needed data. Applications with a rich client-side framework or APIs that need to support a wide range of clients are just a few of the great use-cases for GraphQL.
How Does GraphQL Work?
As previously stated, a primary component of a GraphQL API is the schema. The GraphQL type system defines an API’s Schema as a collection of objects and types. An invoice object, for example, might look something like this.
type Invoice { id: ID! date: Date total: Float! items: [InvoiceLine] }
Just like any object-oriented programming language, types consist of a set of fields. Fields themselves are assigned a type that can either be a predefined scalar type or another type in the schema. In the example above, we also see the ! and [] used to annotate the type. In GraphQL ! means the field is required, and [] means the field is an array.
In addition to custom defined types, two default types exist in GraphQL schemas.
- Query
- Mutation
These types act as an entry point into the API and define how a client can interact with the types. Below is an example of a standard Query type, which includes our previously described Invoice object.
type Query { invoices: Invoice }
For further reading, check out the documentation at grapql.org.
Getting Started with GraphQL in ASP.NET Core
Now that I’ve got you excited about GraphQL, I have some good news and bad news. The bad news is there is no support for GraphQL built directly into ASP.NET Core as of today. I suspect as more momentum builds behind GraphQL, this may change somewhere down the road. The good news is we do not have to wait to start using GraphQL in our ASP.NET Core applications. Thanks to the open-source community and the contributors to graphql-dotnet, we can get started today!
With a fresh new ASP.NET Core web application, we can begin installing a couple of NuGet packages into your project. The following commands will install GraphQL and all required dependencies.
dotnet add package GraphQL.Server.Transports.AspNetCore
Creating The Schema
Next, we can start creating our GraphQL schema. The best place to start is by making our graph types. Keeping with the accounting theme, an Invoice graph type that reflects our previous example would look like the following class.
public class InvoiceType : ObjectGraphType<Invoice> { public InvoiceType() { Field(t => t.Id); Field(t => t.Date, nullable: true); Field(t => t.Total); Field(t => t.Items, nullable: true); } }
After we create all our graph types, we need to make query types that define how clients can interact with our API. Below is an example of a query type that provides access to the invoice graph type.
public class AccountingQueryType : ObjectGraphType { public AccountingQueryType(IInvoiceRepository invoiceRepository) { Field<ListGraphType<InvoiceType>>( "invoices", resolve: x => invoiceRepository.Get()); } }
The last part of defining our GraphQL schema is creating an implementation of the
Schema class. This class will expose our previously described query type to our GraphQL API, as shown below.
public class AccountingSchema : Schema { public AccountingSchema(IDependencyResolver resolver) : base(resolver) { Query = resolver.Resolve<AccountingQueryType>(); } }
Configuring ASP.NET Core
With our schema defined, we need to configure the ASP.NET Core services and middleware to incorporate our GraphQL components. Starting with the
ConfigureServices method, we can configure GraphQL with basic settings using the following code.
public void ConfigureServices(IServiceCollection services) { // removed for brevity services .AddScoped<AccountingSchema>() .AddScoped<IDependencyResolver>(x => new FuncDependencyResolver(x.GetRequiredService)) .AddGraphQL(x => { x.ExposeExceptions = Environment.IsDevelopment(); x.EnableMetrics = Environment.IsDevelopment(); }) .AddGraphTypes(ServiceLifetime.Scoped); }
I typically use these as my default configuration options; however, the GraphQLOptions class exposes additional properties for more fine-grained control. Next, we have to add GraphQL to our request pipeline. As shown below, we can accomplish this by adding the UseGraphQL extension method with our GraphQL schema.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { // removed for brevity app.UseGraphQL<AccountingSchema>("/graphql"); }
Experimenting with GraphQL
With a GraphQL API built, how can we test it to validate it behaves as expected? There are a couple of options. First, let’s take a look at the UI Playground package that is provided by qraphql-dotnet. The UI Playground can get incorporated directly into a GraphQL project by including the following nuget package.
dotnet add package GraphQL.Server.Ui.Playground --version 4.0.1
Once installed, we can add the UI Playground to our middleware pipeline. In most cases, we will not want to include it in production deployments, so we can optionally add the middleware, as shown below. For additional configuration options, see the available properties in the GraphQLPlaygroundOptions class.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { // removed for brevity app.UseGraphQL<AccountingSchema>("/graphql"); if (Environment.IsDevelopment()) { app.UseGraphQLPlayground(new GraphQLPlaygroundOptions()); } }
With the latest changes, when we debug our application and browse to /ui/playground, we are presented with a helpful web page that allows us to interact with our GraphQL API.
If you don’t want to include the UI Playground in your application, there is another option. The very popular API collaboration tool Postman now supports GraphQL! Not only can you interact with your GraphQL API, but Postman also supports importing your GraphQL schema to enable autocomplete and other advanced features.
Summary
GraphQL is a powerful query language for APIs. It provides a lot of flexibility and puts more power in the hands of the client. For applications that have a lot of business logic client-side, GraphQL can be a great fit. While ASP.NET Core does not have native support, you can still get started building GraphQL APIs with the graphql-dotnet library.
If you’ve enjoyed this article, please share on social media! | https://espressocoder.com/2020/10/05/building-flexible-apis-with-graphql-and-asp-net-core/ | CC-MAIN-2021-31 | refinedweb | 1,418 | 58.08 |
STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS
Writing a separate Action class for every operation quickly inflates the number of Action classes in your project. The latest version of Struts provides classes that let you aggregate several related operations into a single action. In this article we will see how to achieve this. Struts provides four dispatch-style Action classes for this purpose.
different kinds of actions in Struts
What are the different kinds of actions in Struts?
no action mapped for action Hi, I am new to struts. I followed...: There is no Action mapped for action name HelloWorld
Struts 2 Actions
request.
About Struts Action Interface
In Struts 2 all actions may implement...
Struts 2 Actions
In this section we will learn about Struts 2 Actions, which is a fundamental
concept in most of the web
Struts Action Chaining
Struts Action Chaining
Struts Dispatch Action Example
Struts DispatchAction groups several related functions into a single Action class, with a request parameter selecting which function to run. Here in this example you will learn more about Struts Dispatch Action.
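The core trick behind DispatchAction can be sketched in plain Java. This is a simplified stand-in, not the real org.apache.struts.actions.DispatchAction (which also receives the mapping, form and servlet objects) — it only shows how a parameter value is mapped to a method by reflection:

```java
import java.lang.reflect.Method;

// Simplified stand-in for DispatchAction: one "action" class whose
// public methods are selected at runtime by a request parameter value.
public class DispatchSketch {

    // Each handler plays the role of an execute-style method.
    public String add()    { return "added"; }
    public String update() { return "updated"; }
    public String delete() { return "deleted"; }

    // Look up the method named by the "method" request parameter
    // and invoke it -- essentially what DispatchAction does internally.
    public String dispatch(String methodParam) {
        try {
            Method m = getClass().getMethod(methodParam);
            return (String) m.invoke(this);
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("no such handler: " + methodParam, e);
        }
    }

    public static void main(String[] args) {
        DispatchSketch action = new DispatchSketch();
        System.out.println(action.dispatch("add"));    // added
        System.out.println(action.dispatch("delete")); // deleted
    }
}
```

The real class adds error handling and a configurable parameter name, but the reflection-based routing is the same idea.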
struts
Hi, I would like to have a ready example of Struts using an action class, DAO, and services for understanding, so please guide me on the same. Thanks.
Struts Tutorials
STRUTS 1) Difference between Action form and DynaActionForm?
2) How the Client request was mapped to the Action file? Write the code and explain
Struts Built-In Actions
Struts Built-In Actions
This section discusses the built-in utility actions shipped with the Struts APIs. These built-in utility actions provide different kinds of ready-made functionality, such as the ability to combine many similar actions into a single action class.
1.BaseActions...Struts why in Struts ActionServlet made as a singleton what... only i.e ForwardAction,IncludeAction.But all these action classes extends Action
Implementing Actions in Struts 2
Implementing Actions in Struts 2
Package com.opensymphony.xwork2 contains the many Action classes and the Action interface. If you want to make an action class of your own, you declare it in struts.xml inside a package that extends "struts-default" and map it with an <action> element.
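Because Struts 2 actions are plain POJOs, a minimal action can be sketched without any framework base class at all. The class name, message, and result string below are invented for this sketch; in a real project the class would also be declared in struts.xml:

```java
// A Struts 2 action can be a plain POJO: no required base class or interface.
// Self-contained sketch (no Struts jars needed to compile); the class name
// and message are hypothetical.
public class HelloAction {

    private String message;

    // The framework calls execute() and uses the returned String
    // ("success" here) to pick a result/view from struts.xml.
    public String execute() {
        message = "Hello from a Struts 2 action!";
        return "success";
    }

    // Getters are how the view layer (e.g. a JSP) reads action state.
    public String getMessage() {
        return message;
    }

    public static void main(String[] args) {
        HelloAction action = new HelloAction();
        System.out.println(action.execute()); // success
        System.out.println(action.getMessage());
    }
}
```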
, Architecture of Struts, download and install struts,
struts actions, Struts Logic Tags... :
Struts provides the POJO based actions.
Thread safe.
Struts has support... the
information to them.
Struts Controller Component : In Controller, Action class
Struts Projects
ASAP.
These Struts Project will help you jump the hurdle of learning complex
Struts Technology.
Struts Project highlights:
Struts Project to make learning easy
Using Spring framework in your application
Project in STRUTS
struts
struts
How to make one JSP page with two actions? I.e., I need to provide two buttons in one JSP page with two different actions, without redirecting to any other page.
Struts Forward Action Example
The ForwardAction is one of the built-in actions that is shipped with the Struts framework; it forwards a request to another resource (such as a JSP) without any processing of its own. Here in this example you will learn more about Struts ForwardAction.
Action Configuration I need a code for struts action configuration in XML
|
Struts
Built-In Actions |
Struts
Dispatch Action |
Struts
Forward... |
AGGREGATING ACTIONS IN STRUTS |
Aggregating Actions In Struts Revisited...
configuration file |
Struts
2 Actions |
Struts 2 Redirect Action
struts - Struts
Struts dispatchaction vs lookupdispatchaction What is struts...; Hi,Please check easy to follow example at
struts - Struts
struts when the action servlet is invoked in struts? Hi Friend,
Please visit the following link:
Thanks
struts
struts <p>hi here is my code can you please help me to solve...;
<p><html>
<body></p>
<form action="login.do">...*;
import org.apache.struts.action.*;
public class LoginAction extends Action
Struts-It
to create all Struts artifacts
like Form-bean, Action, Exception, etc.
Wizards for creating
Struts Project
Struts module...
Action class
other Struts-related classes like configuration
Struts 1 Tutorial and example programs
.
- STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS...;
Aggregating Actions In Struts Revisited -
In my previous article Aggregating Actions in Struts , I have given a brief idea of how...;gt;
<html:form
<pre>... RegisterAction extends Action
{
public RegisterAction()
{ Action Class
Struts Action Class What happens if we do not write execute() in Action class
Servlet action is currently unavailable - Struts
Servlet action is currently unavailable
Hi,
i am getting the below error when i run the project so please anyone can help me..
HTTP Status 503 - Servlet action is currently unavailable Action Classes
Struts Action Classes 1) Is necessary to create an ActionForm to LookupDispatchAction.
If not the program will not executed.
2) What is the beauty of Mapping Dispatch Action
Struts project in RAD
Struts project in RAD How to create a struts project in RAD -
How to build a Struts Project - Struts
How to build a Struts Project Please Help me. i will be building a small Struts Projects, please give some Suggestion & tips
Struts2 Actions
is usually generated by a Struts
Tag.
Struts 2 Redirect Action
In this section, you will get familiar with struts 2 Redirect action...
Struts2 Actions
Struts2 Actions
Struts 2.2.1 - Struts 2.2.1 Tutorial
and testing the example
Advance Struts Action
Struts Action...
Implementing Actions in Struts
2
Chaining Actions in Struts
Configuring Actions in Struts application
Login Form Application
Action classes in struts
Action classes in struts how many type action classes are there in struts
Hi Friend,
There are 8 types of Action classes:
1.ForwardAction class
2.DispatchAction class
3.IncludeAction class
4.LookUpDispatchAction2 Actions
generated by a Struts
Tag. The action tag (within the struts root node of ... Action interface
All actions may implement....
However with struts 2 actions you can get different return types other than want to develop a struts application,iam using eclipse... you. hi,
to add jar files -
1. right click on your project.
2... functionality u want to use in your project. There is no standard list of jar how to solve actionservlet is not found error in dispatch action
Struts - Struts
Struts is it correct to pass the form object as arg from action to service
Struts Tutorials
into a Struts enabled project.
5. Struts Action Class Wizard - Generates Java... application development using Struts. I will address issues with designing Action... issues with Struts Action classes. Ok, let?s get started.
StrutsTestCase
code - Struts
code How to write the code for multiple actions for many submit buttons. use dispatch action - Framework
project and is open source. Struts Framework is suited for the application... using the View component. ActionServlet, Action, ActionForm and struts-config.xml... struts application ?
Before that what kind of things
Test Actions
Test Actions
An example of Testing a struts Action is given below using...;
<!DOCTYPE struts PUBLIC
"-//Apache Software Foundation//DTD Struts...;default" namespace="/" extends="struts-default">
<default
Struts - Struts
*;
public class UserRegisterForm extends ActionForm{
private String action="add...();
return errors;
}
public String getAction() {
return action;
}
public void setAction(String action) {
this.action = action;
}
public
Error - Struts
Error Hi,
I downloaded the roseindia first struts example and configured in eclips.
It is working fine. But when I add the new action and I create the url for that action then
"Struts Problem Report
Struts has detected...
Developing the Action Mapping in the struts-config.xml
Here, we
Struts - Struts
UserRegisterForm extends ActionForm{
private String action="add";
private...() {
return action;
}
public void setAction(String action) {
this.action = action;
}
public String getAddress 2 Tutorials - Struts version 2.3.15.1
Actions
Value Stack / OGNL
Results
View
Struts 2 Configurations
Learn about the different configuration options of the Struts 2 based
project... for making
developer work easy.
Removing default Struts 2 action suffix - How
action Servlet - Struts
action Servlet What is the difference between ActionServlet ans normal servlet?
And why actionServlet is required
STRUTS
STRUTS Request context in struts?
SendRedirect () and forward how to configure in struts-config.xml
struts 2 project samples
struts 2 project samples please forward struts 2 sample projects like hotel management system.
i've done with general login application and all.
Ur answers are appreciated.
Thanks in advance
Raneesh | http://roseindia.net/tutorialhelp/comment/18786 | CC-MAIN-2015-40 | refinedweb | 1,312 | 57.57 |
The QXmlSimpleReader class provides an implementation of a simple XML parser. More...
#include <QXmlSimpleReader>
Inherits QXmlReader.
Warning: This class is not reentrant.
The QXmlSimpleReader class provides an implementation of a simple XML parser.
This XML reader is suitable for a wide range of applications. It is able to parse well-formed XML and can report the namespaces of elements to a content handler; however, it does not parse any external entities. For historical reasons, Attribute Value Normalization and End-of-Line Handling as described in the XML 1.0 specification is not performed.
The easiest pattern of use for this class is to create a reader instance, define an input source, specify the handlers to be used by the reader, and parse the data.
For example, we could use a QFile to supply the input. Here, we create a reader, and define an input source to be used by the reader:
QXmlSimpleReader xmlReader;
QXmlInputSource *source = new QXmlInputSource(file);
A handler lets us perform actions when the reader encounters certain types of content, or if errors in the input are found. The reader must be told which handler to use for each type of event. For many common applications, we can create a custom handler by subclassing QXmlDefaultHandler, and use this to handle both error and content events:
Handler *handler = new Handler;
xmlReader.setContentHandler(handler);
xmlReader.setErrorHandler(handler);
If you don't set at least the content and error handlers, the parser will fall back on its default behavior---and will do nothing.
The most convenient way to handle the input is to read it in a single pass using the parse() function with an argument that specifies the input source:
bool ok = xmlReader.parse(source);
if (!ok)
    std::cout << "Parsing failed." << std::endl;
If you can't parse the entire input in one go (for example, it is huge, or is being delivered over a network connection), data can be fed to the parser in pieces. This is achieved by telling parse() to work incrementally, and making subsequent calls to the parseContinue() function, until all the data has been processed.
A common way to perform incremental parsing is to connect the readyRead() signal of a QTcpSocket (or another network source) to a slot, and feed each chunk of data to the reader from there.
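Putting this together, an incremental-parsing loop might look like the following sketch. The chunk-supplying helpers (`moreDataAvailable()`, `nextChunk()`) are assumptions for illustration, not part of the documented API:

```cpp
// Sketch: feed data to QXmlSimpleReader in pieces as it arrives.
// Assumes a QXmlDefaultHandler subclass called Handler exists.
QXmlSimpleReader xmlReader;
QXmlInputSource *source = new QXmlInputSource;

Handler *handler = new Handler;
xmlReader.setContentHandler(handler);
xmlReader.setErrorHandler(handler);

bool first = true;
while (moreDataAvailable()) {             // hypothetical availability check
    source->setData(nextChunk());         // hypothetical chunk supplier
    bool ok = first ? xmlReader.parse(source, true)   // start incrementally
                    : xmlReader.parseContinue();      // resume with new data
    first = false;
    if (!ok)
        break;                            // a well-formedness error occurred
}

// Calling parseContinue() with no data left tells the reader
// that the end of the XML document has been reached.
xmlReader.parseContinue();
```

This mirrors the behaviour described above: the first call uses parse() with incremental set to true, later chunks are handled by parseContinue(), and a final parseContinue() with no pending data ends the document.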
Reads an XML document from input and parses it. Returns true if the parsing is completed successfully; otherwise returns false, indicating that an error occurred.
If incremental is false, this function will return false if the XML file is not read completely. The parsing cannot be continued in this case.
If incremental is true, the parser does not return false if it reaches the end of the input before reaching the end of the XML file. Instead, it stores the state of the parser so that parsing can be continued later when more data is available. In such a case, you can use the function parseContinue() to continue with parsing. This class stores a pointer to the input source input and the parseContinue() function tries to read from that input source. Therefore, you should not delete the input source input until you no longer need to call parseContinue().
If this function is called with incremental set to true while an incremental parse is in progress, a new parsing session will be started, and the previous session will be lost.
See also parseContinue() and QTcpSocket.
Continues incremental parsing, taking input from the QXmlInputSource that was specified with the most recent call to parse(). To use this function, you must have called parse() with the incremental argument set to true.
Returns false if a parsing error occurs; otherwise returns true, even if the end of the XML file has not been reached. You can continue parsing at a later stage by calling this function again when there is more data available to parse.
Calling this function when there is no data available in the input source indicates to the reader that the end of the XML file has been reached. If the input supplied up to this point was not well-formed then a parsing error occurs, and false is returned. If the input supplied was well-formed, true is returned. It is important to end the input in this way because it allows you to reuse the reader to parse other XML files.
Calling this function after the end of file has been reached, but without available data will cause false to be returned whether the previous input was well-formed or not.
See also parse(), QXmlInputSource::data(), and QXmlInputSource::next().
In React 16, Facebook are deprecating the React.createClass() syntax for creating new components, preferring the new ES6 class method. As of React 15.5.0, they have already started reminding you about this.
They favour replacing
var Greeting = React.createClass({ ... });
with
class Greeting extends React.Component {}
You should absolutely upgrade to the new ES6 way, but that involves refactoring other parts of the app. So if you just want the warnings to go away for now, you need to install the create-react-class npm module and then swap out the parts of your code where it says React.createClass for createReactClass. As they say, it really is a drop-in replacement.
Step 1.
yarn add create-react-class
or
$ npm install create-react-class --save
Step 2.
Old code
var MyComponent = React.createClass(….
New Code (remember to import the module)
import createReactClass from 'create-react-class';
var MyComponent = createReactClass({
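A fuller before-and-after sketch. The component names and render body are invented for illustration; the ES6 class form is what the React team recommends long term:

```javascript
// Before: deprecated from React 16
var Greeting = React.createClass({
  render: function () {
    return React.createElement('h1', null, 'Hello, ' + this.props.name);
  }
});

// After: drop-in replacement via the create-react-class package
import createReactClass from 'create-react-class';

var Greeting2 = createReactClass({
  render: function () {
    return React.createElement('h1', null, 'Hello, ' + this.props.name);
  }
});

// Or the preferred ES6 class form
class Greeting3 extends React.Component {
  render() {
    return React.createElement('h1', null, 'Hello, ' + this.props.name);
  }
}
```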
Very simple but not clearly documented anywhere I could find. | https://andyfoster.net/using-create-react-class-in-your-react-apps/ | CC-MAIN-2020-50 | refinedweb | 160 | 59.3 |
31st August 2014 10:00 pm
Django Blog Tutorial - the Next Generation - Part 8
Hello again! In our final instalment, we’ll wrap up our blog by:
- Implementing a sitemap
- Optimising and tidying up the site
- Creating a Fabric task for easier deployment
I’ll also cover development tools and practices that can make using Django easier. But first there’s a few housekeeping tasks that need doing…
Don’t forget to activate your virtualenv - you should know how to do this off by heart by now!
Upgrading Django
At the time of writing, Django 1.7 is due any day now, but it’s not out yet so I won’t cover it. The biggest change is the addition of a built-in migration system, but switching from South to this is well-documented. When Django 1.7 comes out, it shouldn’t be difficult to upgrade to it - because we have good test coverage, we shouldn’t have much trouble catching errors.
However, Django 1.6.6 was recently released, and we need to upgrade to it. Just enter the following command to upgrade:
$ pip install Django --upgrade
Then add it to your requirements.txt:
$ pip freeze > requirements.txt
Then commit your changes:
Implementing a sitemap
Creating a sitemap for your blog is a good idea - it can be submitted to search engines, so that they can easily find your content. With Django, it’s pretty straightforward too.
First, let’s create a test for our sitemap. Add the following code at the end of
tests.py:
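The test code itself isn't reproduced here; a minimal version along the lines the tutorial describes (the URL and assertions are assumptions based on the surrounding text) might be:

```python
from django.test import TestCase


class SitemapTest(TestCase):
    def test_sitemap(self):
        # Fetch the sitemap and check we get a valid response
        response = self.client.get('/sitemap.xml')
        self.assertEquals(response.status_code, 200)

        # Check the response body actually looks like XML
        self.assertTrue('<?xml' in response.content)
```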
Run it, and you should see the test fail:
Now, let’s implement our sitemap. The sitemap application comes with Django, and needs to be activated in your settings file, under
INSTALLED_APPS:
'django.contrib.sitemaps',
Next, let’s think about what content we want to include in the sitemap. We want to index our flat pages and our blog posts, so our sitemap should reflect that. Create a new file at
blogengine/sitemap.py and enter the following text:
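The file contents aren't shown above; based on the description (one sitemap for posts, one for flat pages), it would look roughly like this. The model import path and field names are assumptions:

```python
from django.contrib.sitemaps import Sitemap
from django.contrib.flatpages.models import FlatPage
from blogengine.models import Post  # assumed model location


class PostSitemap(Sitemap):
    changefreq = 'always'
    priority = 0.5

    def items(self):
        # Every published post appears in the sitemap
        return Post.objects.all()

    def lastmod(self, obj):
        return obj.pub_date  # assumed field name


class FlatPageSitemap(Sitemap):
    changefreq = 'always'
    priority = 0.5

    def items(self):
        # All registered flat pages appear in the sitemap
        return FlatPage.objects.all()
```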
We define two sitemaps, one for all the posts, and the other for all the flat pages. Note that this works in a very similar way to the syndication framework.
Next, we amend our URLs. Add the following text after the existing imports in your URL file:
Then add the following after the existing routes:
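Both additions, sketched together. The sitemap class names are assumptions, and the route uses the Django 1.6 `patterns` style seen elsewhere in this series:

```python
# After the existing imports in urls.py:
from blogengine.sitemap import PostSitemap, FlatPageSitemap  # assumed names

sitemaps = {
    'posts': PostSitemap,
    'pages': FlatPageSitemap,
}

# After the existing routes:
urlpatterns += patterns('django.contrib.sitemaps.views',
    url(r'^sitemap\.xml$', 'sitemap', {'sitemaps': sitemaps}),
)
```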
Here we define what sitemaps we’re going to use, and we define a URL for them. It’s pretty straightforward to use.
Let’s run our tests:
And done! Let’s commit our changes:
Fixing test coverage
Our blog is now feature-complete, but there are a few gaps in test coverage, so we’ll fix them. If, like me, you’re using Coveralls.io, you can easily see via their web interface where there are gaps in the coverage.
Now, our gaps are all in our view file - if you take a look at my build, you can easily identify the gaps as they’re marked in red.
The first gap is where a tag does not exist. Interestingly, if we look at the code in the view, we can see that some of it is redundant:
Under the items function, we check to see if the tag exists. However, under get_object we can see that if the object didn't exist, it would already have returned a 404 error. We can therefore safely amend items to not check, since that try statement will never fail:
The other two gaps are in our search view - we never get an empty result for the search in the following section:
So replace it with this:
We don’t need to check whether
query is defined because if
q is left blank, the value of
query will be an empty string, so we may as well pull out the redundant code.
Finally, the other gap is for when a user tries to get an empty search page (eg, page two of something with five or fewer results). So let's add another test to our SearchViewTest class:
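A test along those lines might look like this sketch. The query string, page parameter and expected status are assumptions about the blog's search view:

```python
def test_failing_search(self):
    # A query that matches nothing should still render page one
    response = self.client.get('/search?q=wibble')
    self.assertEquals(response.status_code, 200)

    # Requesting a page beyond the results should not blow up either
    response = self.client.get('/search?q=wibble&page=2')
    self.assertEquals(response.status_code, 200)
```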
Run our tests and check the coverage:
If you open htmlcov/index.html in your browser, you should see that the test coverage is back up to 100%. With that done, it's time to commit again:
Remember, it’s not always possible to achieve 100% test coverage, and you shouldn’t worry too much about it if it’s not possible - it’s possible to ignore code if necessary. However, it’s a good idea to aim for 100%.
Using Fabric for deployment
Next we’ll cover using Fabric, a handy tool for deploying your changes (any pretty much any other task you want to automate). First, you need to install it:
$ pip install Fabric
If you have any problems installing it, you should be able to resolve them via Google - most of them are likely to be absent libraries that Fabric depends upon. Once it's installed, add it to your requirements.txt:
$ pip freeze > requirements.txt
Next, create a file called fabfile.py and enter the following text:
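The fabfile isn't reproduced above; given the description that follows (push to the Git remote and to Heroku, then run migrations), it might contain something like this. The exact remote names are assumptions:

```python
from fabric.api import local


def deploy():
    # Push the latest commits to the Git remote and to Heroku
    local('git push origin master')
    local('git push heroku master')
    # Run any outstanding South migrations on Heroku
    local('heroku run python manage.py migrate')

# Run with: fab deploy
```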
Now, all this file does is push our changes to Github (or wherever else your repository is hosted) and to Heroku, and runs your migrations. It’s not a terribly big task anyway, but it’s handy to have it in place. Let’s commit our changes:
Then, let’s try it out:
$ fab deploy
There, wasn’t that more convenient? Fabric is much more powerful than this simple demonstration indicates, and can run tasks on remote servers via SSH easily. I recommend you take a look at the documentation to see what else you can do with it. If you’re hosting your site on a VPS, you will probably find Fabric indispensable, as you will need to restart the application every time you push up a new revision.
Tidying up
We want our blog application to play nicely with other Django apps. For instance, say you’re working on a new site that includes a blogging engine. Wouldn’t it make sense to just be able to drop in this blogging engine and have it work immediately? At the moment, some of our URL’s are hard-coded, so we may have problems in doing so. Let’s fix that.
First we’ll amend our tests. Add this at the top of the tests file:
from django.core.urlresolvers import reverse
Next, replace every instance of this:
response = self.client.get('/')
with this:
response = self.client.get(reverse('blogengine:index'))
Then, rewrite the calls to the search route. For instance, this:
response = self.client.get('/search?q=first')
should become this:
response = self.client.get(reverse('blogengine:search') + '?q=first')
I’ll leave changing these as an exercise for the reader, but check the repository if you get stuck.
Next, we need to assign a namespace to our app’s routes:
We then assign names to our routes in the app's urls.py:
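Both URL changes might look roughly like this. The view names are assumptions; the route names 'index' and 'search' match the reverse() calls shown above:

```python
# In the project urls.py, give the app's routes a namespace:
url(r'', include('blogengine.urls', namespace='blogengine')),

# In blogengine/urls.py, name the individual routes:
urlpatterns = patterns('blogengine.views',
    url(r'^$', PostListView.as_view(), name='index'),     # assumed view
    url(r'^search$', 'getSearchResults', name='search'),  # assumed view
)
```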
You also need to amend two of your templates:
Let’s run our tests:
And commit our changes:
Debugging Django
There are a number of handy ways to debug Django applications. One of the simplest is to use the Python debugger. To use it, just enter the following lines at the point you want to break at:
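The lines in question are the standard library debugger hook, shown here inside a hypothetical view function so the sketch is self-contained:

```python
import pdb


def my_view(request):  # hypothetical function, for illustration only
    # Execution pauses here and drops you into an interactive debugger prompt
    pdb.set_trace()
    # ... the rest of the view runs once you type "c" (continue) at the prompt
```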
Now, whenever that line of code is run, you'll be dropped into an interactive shell that lets you play around to find out what's going wrong. However, it doesn't offer autocompletion, so we'll install ipdb, which is an improved version:
$ pip install ipdb
Now you can use ipdb in much the same way as you would use pdb:
Now, ipdb is very useful, but it isn't much help for profiling your application. For that you need the Django Debug Toolbar. Run the following commands:
$ pip install django-debug-toolbar
$ pip freeze > requirements.txt
Then add the following line to INSTALLED_APPS in your settings file:
'debug_toolbar',
Then, try running the development server, and you'll see a toolbar on the right-hand side of the screen that allows you to view some useful data about your page. For instance, you'll notice a field called SQL - this contains details of the queries carried out when building the page. To actually see the queries carried out, you'll want to disable caching in your settings file by commenting out all the constants that start with CACHE.
We won’t go into using the toolbar to optimise queries, but using this, you can easily see what queries are being executed on a specific page, how long they take, and the values they return. Sometimes, you may need to optimise a slow query - in this case, Django allows you to drop down to writing raw SQL if necessary.
Note that if you’re running Django in production, you should set
DEBUG to
False as otherwise it gives rather too much information to potential attackers, and with Django Debug Toolbar installed, that’s even more important.
Please also note that when you disable debug mode, Django no longer handles static files automatically, so you’ll need to run
python manage.py collectstatic and commit the
staticfiles directory.
Once you’ve disabled debug mode, collected the static files, and re-enables caching, you can commit your changes:
Optimising static files
We want our blog to get the best SEO results it can, so making it fast is essential. One of the simplest things you can do is to concatenate and minify static assets such as CSS and JavaScript. There are numerous ways to do this, but I generally use Grunt. Let’s set up a Grunt config to concatenate and minify our CSS and JavaScript.
You’ll need to have Node.js installed on your development machine for this. Then, you need to install the Grunt command-line interface:
$ sudo npm install -g grunt-cli
With that done, we need to create a package.json file. You can create one using the command npm init. Here's mine:
Feel free to amend it as you see fit.
Next we install Grunt and the required plugins:
$ npm install grunt grunt-contrib-cssmin grunt-contrib-concat grunt-contrib-uglify --save-dev
We now need to create a Gruntfile for our tasks:
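The Gruntfile isn't reproduced above; a minimal configuration for the three plugins installed earlier might look like this. The file paths and task layout are assumptions:

```javascript
module.exports = function (grunt) {
    grunt.initConfig({
        concat: {
            css: {
                src: ['blogengine/static/css/*.css'],
                dest: 'blogengine/static/css/all.css'
            },
            js: {
                src: ['blogengine/static/js/*.js'],
                dest: 'blogengine/static/js/all.js'
            }
        },
        cssmin: {
            minify: {
                src: 'blogengine/static/css/all.css',
                dest: 'blogengine/static/css/all.min.css'
            }
        },
        uglify: {
            minify: {
                src: 'blogengine/static/js/all.js',
                dest: 'blogengine/static/js/all.min.js'
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.loadNpmTasks('grunt-contrib-cssmin');
    grunt.loadNpmTasks('grunt-contrib-uglify');

    // Running plain "grunt" concatenates, then minifies, both asset types
    grunt.registerTask('default', ['concat', 'cssmin', 'uglify']);
};
```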
You’ll also need to change the paths in your base HTML file to point to the minified versions:
Now, run the Grunt task:
$ grunt
And collect the static files:
$ python manage.py collectstatic
You’ll also want to add your
node_modules folder to your
gitignore:
Then commit your changes:
Now, our package.json will cause a problem - it will mean that this app is mistakenly identified as a Node.js app. To prevent this, create the .slugignore file:
package.json
Then commit your changes and push them up:
If you check, your site should now be loading the minified versions of the static files.
That’s our site done! As usual I’ve tagged the final commit with
lesson-8.
Sadly, that’s our final instalment over with! I hope you’ve enjoyed these tutorials, and I look forward to seeing what you create with them. | https://matthewdaly.co.uk/posts/25/ | CC-MAIN-2020-24 | refinedweb | 1,856 | 70.33 |
#include <hallo.h>
* Olivier Guyotot [Thu, Dec 05 2002, 08:18:40PM]:
> hi,
> I would like to cool my CPU in the same way CPUIdle does it under
> windows: using the Hlt instruction when the CPU is idle. As far as I
> know, it's possible to achieve this under linux with apm or acpi, but I
> have never succeeded.
>
> I am running a Debian unstable, kernel 2.4.19-k7 (binary package) with
> an Athlon XP 1700+ on a MSI K7T266 Pro2.
> I first tried with apm, loading the module with the following options:
> apm power_off=1 idle_threshold=90
> This seems to work fine for the poweroff function, but that's all. I was
> expecting to see a process "kapm_idle" running, but it never happened.
> And looking at the sources of apm, I came across this:

Won't help you much with Athlons. Get lvcool from.

Gruss/Regards, Eduard.
--
Do bl Sp ce is a v ry saf me hod of driv compr s ion
/*
* This sample shows how to list all the names from the EMP table
*
* It uses the JDBC THIN driver. See the same program in the
* oci8 samples directory to see how to use the other drivers.
*/
// You need to import the java.sql package to use JDBC
import java.sql.*;
class Employee
{
public static void main (String args [])
throws SQLException
{
// Load the Oracle JDBC driver
DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
// Connect to the database
// You must put a database name after the @ sign in the connection URL.
// You can use either the fully specified SQL*net syntax or a short cut
// syntax as <host>:<port>:<sid>. The example uses the short cut syntax.
Connection conn =
DriverManager.getConnection ("jdbc:oracle:thin:@dlsun511:1721:dbms733",
"scott", "tiger");
// Create a Statement
Statement stmt = conn.createStatement ();
// Select the ENAME column from the EMP table
ResultSet rset = stmt.executeQuery ("select ENAME from EMP");
// Iterate through the result and print the employee names
while (rset.next ())
System.out.println (rset.getString (1));
}
} | http://www.dbasupport.com/forums/showthread.php?10002-How-to-test-JDBC-thin-client-connections&p=39680&mode=linear | CC-MAIN-2015-18 | refinedweb | 124 | 51.85 |
I have to handle the print dialog, which appears on pressing [CTRL+P] inside the browser.
I tried it using the below code:
Alert print = driver.switchTo().alert();
print.dismiss();
But this code was not working for me. How to handle these objects?
Native dialogs like the print dialog cannot be handled from Selenium WebDriver itself.
Also, they vary across browsers, operating systems and language bindings, which is why there is no single definite answer to this question.
It is better to avoid the print dialog altogether: take a screenshot of the page and print it using Java tools instead.
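One workaround along those lines: capture a screenshot through WebDriver and hand the image to the Java printing APIs. The class and file handling here are illustrative, not a definitive recipe:

```java
import java.io.File;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class PrintWorkaround {
    // Saves a screenshot of the current page instead of using the print dialog
    public static File capturePage(WebDriver driver, String destination) throws Exception {
        File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        File target = new File(destination);
        FileUtils.copyFile(screenshot, target);
        // The saved image can then be printed with javax.print / java.awt.print
        return target;
    }
}
```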
#include <hallo.h>
Santiago Vila wrote on Mon Sep 03, 2001 um 02:21:04AM:
> I think we can consider the volume or the number of subscribers of the
> debian-user mailing list, and compare it with the volume or the number
> of subscribers of all other language-specific debian-user-foo lists
> combined. Is there anybody who can do this count?
>
> Can you think of another way to estimate the proportion of users using
> locales?

I really wish to have the locales installed by default, either always, or with some user interaction (yes/no, which locale etc.). IMO this should be included as one of the first questions in baseconfig.

Gruss/Regards, Eduard.

| Date: Sat, 16 Jun 2001 15:43:06 -0400
| From: Ben Collins <bcollins@debian.org>
| Subject: Re: Bug#101130: libc6 should depend on locales
| To: Eduard Bloch <blade@debian.org>, 101130-done@bugs.debian.org
|
| On Sat, Jun 16, 2001 at 07:29:05PM +0200, Eduard Bloch wrote:
| > Package: libc6
| > Version: 2.2.3-6
| > Severity: normal
| > Tags: woody sid
| >
|
| No. Simply put, libc6 does not require locales, so it is not going to
| depend on it. The libc6 package works perfectly fine without locales.
| Adding a depends for locales breaks policy. Also, it makes it larger for
| base installs (some 10megs), which is ridiculous for a new installation.
|
| I'm closing this bug. If you want suggests to be taken into account, use
| a higher level interface such as dselect or deity.
|
| Ben
--
/"We are M$ of Borg. We will add your technology to our own crap. /
/Your company will be bought. Open Source is futile!" /
So I've been lurking in the forums, and thought I could write something up many of you might benefit from reading. Admittedly most of the inspiration for this came from tutorials and discussions surrounding application authentication. But make no mistake, this is not an authentication or Login tutorial. Though you might learn something from it to help there.
This is a tutorial to explain a simple way of decoupling implementation from its usage. For the benefit of those newer to programming, I'll briefly explain a bit.
When I say decoupling, I mean separating, or abstracting. Consider a car. I'm no mechanic, and have very limited knowledge of how they work. The technical bit however is very much abstracted. I don't need to know how it works. And because it's abstracted in a way that is standard.. you know Key to start, Gas to move, Brake to stop, steering wheel to turn; I can jump into any car, regardless of the vendor or model, and start driving.
With that in mind; why would I want a form, designed to take credentials from a user and give them feedback, to have anything at all to do with the login process itself? If I decided to change how I log in to the application, I might very well have to teach it how to test credentials again, from scratch.
And I will stop here for a second. Because it is absolutely true, that if I decided to make a change like that, no matter what, I would have to program that from scratch. but, by exposing that new functionality in the same way, I don't need to change any part of my already working application, to be able to use this new login component.
I think it's time to move on; as we've pretty much hit the extent of my knowledge on cars. lol
In this example, I'm going to show you how to build an interface, that your application understands, and a login component pretty much using exactly the same logic I've seen in other tutorials and examples (which are very insecure), but I think it's a great place to start, because it proves my point. And it also offers up a great opportunity to further explain this concept, and how it can very minimally impact your application, while making amazing improvements to the application over all.

Interfaces
Interfaces might better be called "usage contracts". They allow usage logic to make assumptions about how to make use of something, without knowing how they are implemented. A family sedan, and a pickup truck, if implemented in C#, might both implement the ISmallAutomaticVehicle interface. We as drivers, have usage logic that know how to use a steering wheel, gas, break and automatic shifter; clearly defined by this interface.
So, imagine this.
private ISmallAutomaticVehicle myVehicle;
I go to the dealer, and I buy a PickupTruck.
myVehicle = new PickupTruck();
but it wouldn't matter if I instead bought a Corolla.
myVehicle = new Corolla();
Because they both implement ISmallAutomaticVehicle, and my DriveToWork method requires only that.
private void DriveToWork(ISmallAutomaticVehicle vehicle){ }
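To make the car analogy concrete, here is a runnable sketch. The control methods (Start, PressAccelerator, PressBrake) and speed values are invented for illustration; the post never defines the interface body:

```csharp
using System;

// The contract: any small automatic vehicle exposes the same controls.
public interface ISmallAutomaticVehicle
{
    void Start();
    void PressAccelerator();
    void PressBrake();
    int CurrentSpeed { get; }
}

// One possible implementation; the caller never depends on this type directly.
public class PickupTruck : ISmallAutomaticVehicle
{
    private bool _running;
    public int CurrentSpeed { get; private set; }
    public void Start() { _running = true; }
    public void PressAccelerator() { if (_running) CurrentSpeed += 10; }
    public void PressBrake() { CurrentSpeed = Math.Max(0, CurrentSpeed - 10); }
}

public class Corolla : ISmallAutomaticVehicle
{
    private bool _running;
    public int CurrentSpeed { get; private set; }
    public void Start() { _running = true; }
    public void PressAccelerator() { if (_running) CurrentSpeed += 8; }
    public void PressBrake() { CurrentSpeed = Math.Max(0, CurrentSpeed - 8); }
}

public static class Commuter
{
    // Usage logic depends only on the contract, never on the concrete type.
    public static int DriveToWork(ISmallAutomaticVehicle vehicle)
    {
        vehicle.Start();
        vehicle.PressAccelerator();
        vehicle.PressAccelerator();
        vehicle.PressBrake();
        return vehicle.CurrentSpeed;
    }
}
```

Swapping the PickupTruck for the Corolla changes nothing in DriveToWork; that is the whole point of the abstraction.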
Have I explained this enough? I'd like to move on and start showing how we could decouple a simple login provider, again using basically the same thought as seen in a few other places on the C# forums. If this isn't clear, leave some comments with questions, and I'll do my best to answer.

Our Login Provider

The first thing we're going to do is create a new solution. I'm actually attaching mine to the thread, so you're welcome to download that and follow along. Or you're also welcome to just read, interpret and do it that way.
Anyways, call the solution whatever you like, perhaps LoginExample. And in that solution, add a class library called LoginSupport.
In order to properly decouple the Login functionality from the application entirely, we need two things. We need an interface describing what a login provider can do, which is done using method signatures. And of course we need to define any custom types that both the login provider implementation, and your application will need to understand. For the purposes of this tutorial, we will use an enumeration (in a real system, you'd likely have several return types, and interfaces, but I just don't want to overcomplicate this).

LoginResponse
First add a class to your LoginSupport Project. Call it LoginResponse. Change its declaration to enum, and remove the constructor. You're also going to add the enumeration values to it, and it should look like this.
using System;
namespace LoginSupport
{
public enum LoginResponse
{
Success,
Failure,
Locked
}
}
Explaining enumerations at this point is beyond the scope of this tutorial, but if you have questions, please feel free to ask.ILoginProvider
Next we need to create our interface. This is done in pretty much the same manner. Add a class to your project, called ILoginProvider. Change its declaration to interface, and add our method signatures. It should look like this.
using System;
namespace LoginSupport
{
public interface ILoginProvider
{
bool IsAuthenticated();
LoginResponse Login(string username, string password);
void Logout();
}
}
Creating a login provider is as simple as creating a new class, and implementing the interface. Compilation will fail until you implement everything in this interface.HardCodedLoginProvider
For the purposes of this tutorial, as stated, I'm going to make a dummy login provider. It's not secure, it doesn't follow any best practices, but it's enough to build an application using this ILoginProvider interface. Maybe in another tutorial, I'll build a DBLoginProvider.
Anyways, add a new class to the LoginSupport project, and call it HardCodedLoginProvider. No changes needed to its declaration. We're going to implement the interface, and fill it in with some supporting logic.
using System;
namespace LoginSupport
{
public class HardCodedLoginProvider : ILoginProvider
{
private const string _username = "testaccount";
private const string _password = "notsecure";
private bool _authenticated = false;
private bool _locked = false;
private int _loginAttempts = 0;
private int _maxLoginAttempts = 3;
public bool IsAuthenticated() {
return _authenticated;
}
public LoginResponse Login(string username, string password)
{
_loginAttempts++;
if (_loginAttempts > _maxLoginAttempts) {
_locked = true;
}
if (_locked) {
return LoginResponse.Locked;
}
if (username == _username && password == _password) {
_loginAttempts = 0;
_authenticated = true;
return LoginResponse.Success;
} else {
return LoginResponse.Failure;
}
}
public void Logout() {
_authenticated = false;
}
}
}
Roll your eyes, laugh it up. This is actually a great starting point if you only needed this type of really basic password protection. You could start building your application from the ground up, and then worry about the specifics of where you want to store credentials after. This might even be preferred during early development.
In fact, if your following patterns like this, there are some amazing libraries out there that implement something called dependency injection. They allow you to build additional components that implement common interfaces, that are configurable at deploy time.
Imagine your app in production, built to authenticate against SQL server. A need comes up to use Active Directory. So you build an Active Directory implementation, drop it on the production server, and change a one liner in the configuration file. That's it!
Really, dependency injection is beyond the scope of this tutorial, but if you were curious, go google it!Our Application
Ok, so we have our login component. So let's create a Windows Forms Application. Call it whatever you like, perhaps LoginTester. Once you have this completed, you need to add a reference to our LoginSupport project.Program.cs
The Program class is a great place to start here. For this example, we're going to use it to create a singleton instance of our login provider. To start, you should add a using statement to the top, indicating the namespace where our login code is located.
using System;
using System.Windows.Forms;
using LoginSupport;
After doing this, we will want to add a static ILogin variable to the Program Class. This can be anywhere in the class, but probably right on top of the Main method.
internal sealed class Program
{
public static ILoginProvider LoginProvider;
[STAThread]
private static void Main(string[] args)
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new MainForm());
}
}
The last thing we need to do to the Program class, is actually create an instance of our LoginProvider. Note that we set the type, to the interface ILoginProvider. We are going to create an instance of our HardCodedLoginProvider; and this is fine because it implements this interface.
I should also note, that if HardCodedLoginProvider also had other methods, not specified in ILogin, you wouldn't be able to execute them from this reference. You could direct cast back to HardCodedLoginProvider, but we typically wouldn't do that. That would defeat the whole point of decoupling the login logic, from the UI.
using System;
using System.Windows.Forms;
using LoginSupport;
namespace LoginTester
{
internal sealed class Program
{
public static ILoginProvider LoginProvider;
/// <summary>
/// Program entry point.
/// </summary>
[STAThread]
private static void Main(string[] args)
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
LoginProvider = new HardCodedLoginProvider();
Application.Run(new MainForm());
}
}
}
If and when you ever made a proper login provider, this is the only line in your application that will ever change. Instead of new HardCodedLoginProvider(), we might instead say new DBLoginProvider() or... new ActiveDirectoryLoginProvider(). Because we would build a provider to work with the interfaces we're already designed, and the application is built to work with them, the transition would be seamless.MainForm
The last part of code for this tutorial, is of course, the Form. In my project, I have MainForm, you probably have Form1. It doesn't matter what it's called, I just didn't use Visual Studio to set this tutorial up.
The only really important components you need on you form are, a username textbox, a password textbox and a submission button. In order for the code to be copy and paste able from this site, you need to ensure they are named the same. Though I trust you are all more than capable of adjusting control references in my code.
Control Description
Control Name
The username text box
usernameTextBox
The password text box
passwordTextbox
Login or Submit button
loginButton
Typically you would set a mask character for your password box. Don't worry about it for this test, if you don't want to.
The code supporting our Login component test might look like this.
void TestLoginMechanism() {
//First validate they entered information, so we dont REALLY annoy the user
if (!ValidateInputs()) {
MessageBox.Show("You must enter both a username and a password",
"User Input Validation Failure",
MessageBoxButtons.OK,
MessageBoxIcon.Exclamation,
MessageBoxDefaultButton.Button1);
return;
}
//We try the entered credentials against our login provider
//note we can reference that from the Program class, since we made it static.
var result = Program.LoginProvider.Login(usernameTextBox.Text, passwordTextBox.Text);
//Since we are simply explaining how this works, we test the response, and notify the user accordingly
if (result == LoginResponse.Success) {
MessageBox.Show("Successful Login Attempt!",
"Successful Login Attempt",
MessageBoxButtons.OK,
MessageBoxIcon.Information,
MessageBoxDefaultButton.Button1);
return;
}
if (result == LoginResponse.Failure) {
MessageBox.Show("Username and/or Password provided are incorrect. Please try again. But be careful, you will be locked out after 3 login attempts.",
"Failed Login Attempt",
MessageBoxButtons.OK,
MessageBoxIcon.Exclamation,
MessageBoxDefaultButton.Button1);
return;
}
if (result == LoginResponse.Locked) {
MessageBox.Show("This account has been locked out, please contact your system administrator.",
"Account Locked",
MessageBoxButtons.OK,
MessageBoxIcon.Stop,
MessageBoxDefaultButton.Button1);
return;
}
}
bool ValidateInputs() {
//basically if either the username or password are false, we want to NOT try to authenticate,
//this will allow us to correct the user
if (string.IsNullOrEmpty(usernameTextBox.Text)
|| string.IsNullOrEmpty(passwordTextBox.Text)) {
return false;
}
return true;
}
After you paste that in, again if your control names match, you just have to wire it up. So double click on your Login Button, so you can get a Click event handling stub. And you will want to call the TextLoginMechanism() from it.
void LoginButtonClick(object sender, System.EventArgs e)
{
//Since this is only explaining how to use a mechanism like this, we will use TestLoginMechanism(),
//rather than actually logging into the application.
TestLoginMechanism();
}
You could even throw in a Cancel button for good measure if you like.
void CancelButtonClick(object sender, System.EventArgs e)
{
//Just close the form
}
So what's really cool about all this, is we should now have a fully functioning application. And our login form is completely decoupled from the login components. In
fact, the login form has no say in how the security policies work (such as 3 failed login attempts), the type of encryption (if any)...
If we were to build that DB Provider, and implement the ILogin interface, the only line of code we would change in the application is the Program.cs, where we actually create the instance of our Login Provider.Summary
We covered a very little amount of ground here, in a lot of words. The whole point of your read here is to show firsthand, how decoupling units of functionality can make your code overall more maintainable, and much more scalable.
You would typically use similar techniques to abstract specific domains of code. Another example might be data access. You might even hear people use the term "layer", like "My data access layer".
Units of functionality don't always have to (nor should they always) use interfaces. Interfaces are put in place when you feel as though you might have components that from your application's perspective, do the same thing; you may want to swap them out for use in different situations, but underlying have very few similarities.
Another example where this might be useful, would be for logging output. Maybe you have an ILogWriter interface, which is implemented by a text file writer, a database writer, and XML writer which supports making calls to SOAP services and a windows event log writer. From your applications point of view, they log, they do the job; but from your customer's point of view, having the logs show up in a useful way is value add! (and you can change it with a single line of code)
Anyways, I've talked my face off.
Thanks for reading! Questions, comments, corrections, criticisms.. they're all welcome!
DecoupleTutorial.zip 51.13KB
317 downloads
Also, I wrote this while working in Sharp Develop. I converted everything over best I could to Visual Studio 2008 best I could, so should make an easy upgrade to whatever you're working with. Let me know if that didn't go so well, I'm not getting errors with VS 2008 any more. | http://forum.codecall.net/topic/78054-decoupling-application-logic-from-your-uis/ | CC-MAIN-2020-45 | refinedweb | 2,426 | 55.84 |
The getaddrinfo call causes an internal segmentation fault when called from
threads and the binary is linked with "-static". The documentation says the
function is thread safe. This should be also the case when linked with "-static"
since there is no exception mentioned.
The crash only occurs if the binary is executed on a multi core system, on a
single core system it does not crash. This seems to be a synchronization problem
inside the library, but somehow only in the static version.
To reproduce just use this small test program:
#include <stdio.h>
#include <netdb.h>
#include <pthread.h>
#include <unistd.h>
void *test(void *)
{
struct addrinfo *res = NULL;
fprintf(stderr, "x=");
int ret = getaddrinfo("localhost", NULL, NULL, &res);
fprintf(stderr, "%d ", ret);
return NULL;
}
int main()
{
for (int i = 0; i < 512; i++)
{
pthread_t thr;
pthread_create(&thr, NULL, test, NULL);
}
sleep(5);
return 0;
}
Compile with "g++ -o dnstest -static dnstest.cpp -lpthread" and then start.
Usually when linked with "-static" it crashes immediately, without it works fine.
This was verified with different glibc versions from Fedore 7, 11, CentOS 5.3,
Ubuntu 8.x and 9.x, SuSE 11.1 32bit and 64bit.
The glibc versions tested are from 2.6 to 2.10.
I see no reason why this only works if dynamically linked. The documentation
also does not mention any restrictions if linked statically.)
The segmentation fault happens on different addresses below the
_nss_files_gethostbyname4_r. This function shows a call to __libc_lock_lock in
the source, but this probably does not work!?
The assembler code shows calls to the phread_lock() function:
Dump of assembler code for function _nss_files_gethostbyname4_r:
0x00007ffff55d8d70 <_nss_files_gethostbyname4_r+0>: push %r15
0x00007ffff55d8d72 <_nss_files_gethostbyname4_r+2>: push %r14
0x00007ffff55d8d74 <_nss_files_gethostbyname4_r+4>: push %r13
0x00007ffff55d8d76 <_nss_files_gethostbyname4_r+6>: mov %rsi,%r13
0x00007ffff55d8d79 <_nss_files_gethostbyname4_r+9>: push %r12
0x00007ffff55d8d7b <_nss_files_gethostbyname4_r+11>: mov %rdi,%r12
0x00007ffff55d8d7e <_nss_files_gethostbyname4_r+14>: push %rbp
0x00007ffff55d8d7f <_nss_files_gethostbyname4_r+15>: mov %rdx,%rbp
0x00007ffff55d8d82 <_nss_files_gethostbyname4_r+18>: push %rbx
0x00007ffff55d8d83 <_nss_files_gethostbyname4_r+19>: mov %rcx,%rbx
0x00007ffff55d8d86 <_nss_files_gethostbyname4_r+22>: sub $0x88,%rsp
0x00007ffff55d8d8d <_nss_files_gethostbyname4_r+29>: cmpq
$0x0,0x20823b(%rip) # 0x7ffff57e0fd0 <fgetpos+2137728>
0x00007ffff55d8d95 <_nss_files_gethostbyname4_r+37>: mov %r8,0x30(%rsp)
0x00007ffff55d8d9a <_nss_files_gethostbyname4_r+42>: mov %r9,0x38(%rsp)
0x00007ffff55d8d9f <_nss_files_gethostbyname4_r+47>: je 0x7ffff55d8dad
<_nss_files_gethostbyname4_r+61>
0x00007ffff55d8da1 <_nss_files_gethostbyname4_r+49>: lea
0x208498(%rip),%rdi # 0x7ffff57e1240 <lock>
0x00007ffff55d8da8 <_nss_files_gethostbyname4_r+56>: callq 0x7ffff55d7020
<__pthread_mutex_lock@plt>
0x00007ffff55d8dad <_nss_files_gethostbyname4_r+61>: mov
0x2084d1(%rip),%edi # 0x7ffff57e1284 <keep_stream>
0x00007ffff55d8db3 <_nss_files_gethostbyname4_r+67>: callq 0x7ffff55d8700
<internal_setent>
0x00007ffff55d8db8 <_nss_files_gethostbyname4_r+72>: cmp $0x1,%eax
0x00007ffff55d8dbb <_nss_files_gethostbyname4_r+75>: mov %eax,0x5c(%rsp)
0x00007ffff55d8dbf <_nss_files_gethostbyname4_r+79>: je 0x7ffff55d8ded
<_nss_files_gethostbyname4_r+125>
0x00007ffff55d8dc1 <_nss_files_gethostbyname4_r+81>: cmpq
$0x0,0x20820f(%rip) # 0x7ffff57e0fd8 <fgetpos+2137736>
0x00007ffff55d8dc9 <_nss_files_gethostbyname4_r+89>: je 0x7ffff55d8dd7
<_nss_files_gethostbyname4_r+103>
0x00007ffff55d8dcb <_nss_files_gethostbyname4_r+91>: lea
0x20846e(%rip),%rdi # 0x7ffff57e1240 <lock>
0x00007ffff55d8dd2 <_nss_files_gethostbyname4_r+98>: callq 0x7ffff55d7040
<__pthread_mutex_unlock@plt>
0x00007ffff55d8dd7 <_nss_files_gethostbyname4_r+103>: mov 0x5c(%rsp),%eax
0x00007ffff55d8ddb <_nss_files_gethostbyname4_r+107>: add $0x88,%rsp
When debugging the _nss_files_gethostbyname4_r function with dynamic linking the
pthread_mutex_lock function is executed and can be stepped into. But statically
linked the step does not reveal that function is called at all even when the
disassemble looks like it should!?
You shouldn't link statically, there are many reasons why it is a bad idea.
If you for whatever strange reason still need it, you need to make sure you link
all of libpthread.a into your application (e.g. using -Wl,--whole-archive around
-lpthread), otherwise many things won't work as expected.
Ok, I will try that. But why is there no warning or information when statically
linking pthread library. The linker warns about he would need the library for
lookups but no warning at all about the pthread library.
The reason we used to link statically is that the binary should run on different
linux version including versions which use older libraries.
Is there another way to e.g. link dynamically with glibc-2.10 and run on systems
with only glibc-2.6?
Please read, by linking
statically you make the portability far worse. Unless you are creating a system
recovery tool that needs to work when shared libraries are hosed up, you should
link at least glibc libraries dynamically.
Ok, thank you for that information!
My problem with dynamic linking on a new linux system e.g. using glibc-2.11 the
binary won't start on older linux, it says: /lib64/libc.so.6: version
`GLIBC_2.7' not found. The application does not need any functions of that new
library, it would work fine with e.g. glibc-2.6. Is there a way to change the
minimum dependency of the library? It works when I compile on an old linux
system, it will run on new systems.
When compiling the application on windows I can define the minimum needed
version in a define and then I can only uses functions available at that version
and not newer functions. Can this be done with glibc, that the binary still
works with libraries definine e.g. GLIBC_2.6?
Thank you very much for helper so far!
Marius
This is no place to ask question.
On the other hand you haven't responded to the question whether linking in the
entire libpthread helps. I assume it does.
Hello!
I included the whole pbthread:
Using "-static -lpthread" or "-static /usr/lib64/libpthread.a" creates the same
binary. The lib pthread is also used by our code so should be included in the
binary.
But the binary create like above still has the problems!
Our solution is to link libc, libm and libpthread dynamic on a Ubuntu LTS 8.0.4
system. This binary works also on most other systems (with reasonable new glibc).
On strange thing is: if I compile dynamic the same on a Fedora 7 system and run
it on e.g. SLES 10 the binary breaks already in the loader with Floating
Exception. The binary compiled on Unbuntu with the same setup works fine. That
is strange (probable SLES has no standard glibc)
I have not bothered to actually trace this but I have a likely suspect.
As I understand it, resolution is handled by libnss_*.so, which are still
dynamically linked even if the executable is statically linked. They
presumably feature weak extern references to various pthread functions.
If pthreoads is dynamically linked, these references succeed. If
pthreads is statically linked then the pthread symbols are not reexported
to things loaded with dlopen() like the libnss libraries.
I don't know a good solution but perhaps -rdynamic has some role to play?
Or perhaps a less bloated libc than glibc could be used, one which has a
number of simple resolvers built in? The libnss resolvers on my Linux
system take up 275kB which is enough space for many other unixes to
implement an entire libc....
I have the same problem.
Sometimes call of getaddrinfo function in one of pthreads causes an segfault.
Application linked with -static flag. It should be linked statically because I
use it on systems without installed pthread libraries and don't have ability to
install it.
I didn't find any helpful suggestions in the thread. So, what should I do to fix
this problem?
The same crash happens if the host program is not compiled with "-pthread" and dynamically loads a module which is linked to libpthread.so and calls getaddrinfo() from multiple threads.
I will attache two example C files that show case this problem.
Created attachment 5325 [details]
Example module which calls getaddrinfo() from many threads.
Compile this example module with:
gcc -o crash_getaddrinfo.so -Wall -fPIC -shared -pthread crash_getaddrinfo.c
Created attachment 5326 [details]
Simple host program to dynamically load a module with dlopen().
Compile without -pthread:
gcc -ldl -Wall -o crash_main_no_pthread crash_main.c
Compile with -pthread:
gcc -ldl -Wall -o crash_main_pthread crash_main.c -pthread
By default the program will try to load a module named: /tmp/crash_getaddrinfo.so
I first ran into this problem when using a Lua C module (ZeroMQ bindings for Lua) that uses IO threads in the background. The only work-around is to either compile the Lua VM with -pthread (This shouldn't be required, since not all Lua scripts need pthread support) or to use "LD_PRELOAD=/lib/libpthread.so host_program".
I would prefer an option where the host program (Lua VM) didn't have to either be compiled with -pthread or wrapped in a script to preload libpthread.so.
Also the example program will even crash on a single-cpu(single-core) computer running Debian 6.0, glibc 2.11.2.
Created attachment 5327 [details]
Valgrind output shows some invalid reads into freed memory before the program crashes on a NULL pointer.
This problem seems to be caused by a race condition between the threads calling getaddrinfo(). With a small number of threads it doesn't always happen. Atleast the backtrace has always been the same.
(In reply to comment #12)
> The same crash happens if the host program is not compiled with "-pthread" and
> dynamically loads a module which is linked to libpthread.so and calls
> getaddrinfo() from multiple threads.
>
> I will attache two example C files that show case this problem.
A comment on this case is at:
my advise for now is to link your application against libpthread until somebody really digs into this and figures out what is supposed to work and how.
This bug is still reproducible in 2.18.90.
In a test case where the application doesn't link against libpthread, but a dlopen'd library does, parallel calls to getaddrinfo cause corruption in the IO layers and eventually a crash.
Even though libpthread.so.1 has been loaded the weak-ref-and-check idiom in the NSS code isn't working. The GOT entry stays zero and therefore the nss code skips doing any locking and we get serious corruption via get_contents->__GI_fgets_unlocked (doing unlocked file IO with multiple threads causes data races and corruption).
The skipped locks are in _nss_files_gethostbyname4_r (libnss_files.so). When the application is compiled with -lpthread the GOT entry has a non-zero value of 0x00007ffff77bc460 which is "0x7ffff77bc460 <__GI___pthread_mutex_lock>: sub $0x8,%rsp" and therefore correct. That entry is the GOT entry #40 with relocation: 000000000020bfd8 0000001a00000006 R_X86_64_GLOB_DAT 0000000000000000 __pthread_mutex_lock + 0.
If libpthread is loaded *after* libnss_files.so is loaded I don't see that there is anything you can do to make the NSS code use locks since the GOT relocation has already been processed. However in this case libpthread is loaded *before* libnss_files.so, but it appears as if the resolution scope prevents the symbols from libpthread being made available to libnss_files.so?
e.g.
20987: object=/home/carlos/build/glibc/nss/libnss_files.so.2 /build/glibc/nss/libnss_files.so.2 /home/carlos/build/glibc/libc.so.6 /home/carlos/build/glibc/elf/ld.so
Notice libnss_files.so.2 is in it's own scope without libpthread. As opposed to crash_getaddrinfo.so's scope with libpthread in it
e.g.
20987: object=/home/carlos/support/2013-11-22/crash_getaddrinfo.so /support/2013-11-22/crash_getaddrinfo.so /home/carlos/build/glibc/nptl/libpthread.so.0 /home/carlos/build/glibc/libc.so.6 /home/carlos/build/glibc/elf/ld.so
I don't know what's the right answer here. There are really only two resolution scopes, global and local, the scopes listed above are internal details of glibc's dyanmic loader. Why libpthread's symbols wouldn't be used for the relocation in libnss_files.so is what baffles me, one would have to track down the exact relocation and determine why the libpthread symbol isn't used.
I'm not working on this so I'm flipping this to NEW, but I thought I'd post what I saw during my analysis of a similar internal Red Hat bug.
Why is getaddrinfo trying to "optimize" out the locking for single-threaded programs anyway? Certainly the time spent in getaddrinfo is dominated by actual lookups, not by locking overhead.
(In reply to Rich Felker from comment #20)
> Why is getaddrinfo trying to "optimize" out the locking for single-threaded
> programs anyway? Certainly the time spent in getaddrinfo is dominated by
> actual lookups, not by locking overhead.
I can only assume it does this to avoid requiring libpthread. The actual lookups might also be very fast if they are resolved by /etc/hosts or some other local file-based NSS backend. Requiring the thread library would have a non-zero impact on performance for single-threaded applications. What other reason could there be for using the weak-ref-and-check idiom (which I know you don't like)?.
(In reply to Rich Felker from comment #22)
>.
No, you make a good point, and internally glibc already uses just plain futexes for __libc_lock_lock, but for non-libc modules like libnss_files.so.2 (loaded as part of the NSS plugin mechanism) the __libc_lock_lock defines redirect to __pthread_mutex_lock. I see no reason at the moment why they couldn't just use futexes for serializing threaded access. There was certainly no futex support when these NSS modules were written so it might be a legacy issue. Switching them over to futex locking would solve this problem and the uncontended lock case is an atomic operation that should always succeeds.
*** Bug 260998 has been marked as a duplicate of this bug. ***
Seen from the domain
Page where seen:
Marked for reference. Resolved as fixed @bugzilla.
This bug is still reproducible in 2.23(-r4, Gentoo) and 2.24(-9ubuntu2.2, Ubuntu 17.04).
What I really don't get is that it even crashes if I protect calls to gethostbyname() with a pthread mutex. However, if I call gethostbyname() alternatingly in two different threads (2s delay between each call, one thread 1s behind the other), I could not observe crashes.
Either way, the link-time
warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
should be severely upgraded to something like the following until this bug is fixed:
warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking, and may fail in spectacular ways in combination with pthreads when not yet fully understood additional conditions hold
Disregard that. On Debian 9 with glibc 2.24 I don't need multithreading at all to crash:
hsc@kos:~/tmp$ echo -e \
"#include <netdb.h>\nint main(void) { gethostbyname(\"foo\"); }" > foo.c && \
gcc -g -static foo.c && ./a.out
/tmp/ccxQLvv6.o: In function `main':
/fs/staff/hsc/tmp/foo.c:2: warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
Segmentation fault (core dumped)
hsc@kos:~/tmp$ gdb -c core
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
[...]
[New LWP 4447]
Core was generated by `./a.out'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000000000000000 in ?? ()
(gdb) bt
#0 0x0000000000000000 in ?? ()
#1 0x00007f29662df040 in ?? ()
#2 0x0000000000000001 in ?? ()
#3 0x0000000000000010 in ?? ()
#4 0x0000000000800000 in ?? ()
#5 0xffffffffffffffff in ?? ()
#6 0x00007f29662dec60 in ?? ()
#7 0x0000000180000000 in ?? ()
#8 0x0000000000000000 in ?? ()
I guess that warrants a separate bug report?
Bug 21975.
Can confirm on sys-libs/glibc-2.26-r3 on my Gentoo.
Everything's fine when linking dynamically, when linking statically, I receive the following trace:
#0 0x00007ffff5d70648 in internal_getent (stream=0x7fffd8000b10, result=result@entry=0x7ffff6778410, buffer=buffer@entry=0x7fffe0000e70 "", buflen=buflen@entry=1024,
errnop=errnop@entry=0x7ffff67796b0, herrnop=herrnop@entry=0x7ffff67783fc, af=2, flags=0) at nss_files/files-XXX.c:216
#1 0x00007ffff5d717c0 in _nss_files_gethostbyname3_r (name=0x7fffe0000b10 "api.telegram.org", af=2, result=0x7ffff6778410, buffer=<optimized out>, buflen=1024, errnop=0x7ffff67796b0,
herrnop=0x7ffff67783fc, ttlp=0x0, canonp=0x0) at nss_files/files-hosts.c:352
#2 0x00007ffff5d71941 in _nss_files_gethostbyname2_r (name=<optimized out>, af=<optimized out>, result=<optimized out>, buffer=<optimized out>, buflen=<optimized out>,
errnop=<optimized out>, herrnop=0x7ffff67783fc) at nss_files/files-hosts.c:389
#3 0x0000000000704ac3 in __gethostbyname2_r (name=0x7fffe0000b10 "api.telegram.org", af=af@entry=2, resbuf=resbuf@entry=0x7ffff6778410, buffer=<optimized out>,
buffer@entry=0x7fffe0000e70 "", buflen=buflen@entry=1024, result=result@entry=0x7ffff6778408, h_errnop=0x7ffff67783fc) at ../nss/getXXbyYY_r.c:316
#4 0x00000000004513e6 in networking::resolve_host (answer=0x7ffff6778410, hostname=...) at src/net.cpp:58
#5 networking::tcp_connect (host=...) at src/net.cpp:87
#6 0x00000000004524da in networking::tls_connect (host=...) at src/net.cpp:169 | https://sourceware.org/bugzilla/show_bug.cgi?id=10652 | CC-MAIN-2017-47 | refinedweb | 2,701 | 57.77 |
09 March 2012 08:51 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The change was due to a technical glitch at the facility, and the plant will be restarted on 11 May, as previously scheduled, the source said.
During the turnaround, Samsung Total will continue supplying to its contract buyers, but it will stop supplying spot cargo.
Traders in
South Korean No 100 and No 150 solvent oils were trading on 8 March at $1,170-1,200/tonne (€877-900/tonne) and $1,100-1,150/tonne FOB (free on board) Korea respectively, according to C1 Energy, an ICIS service in Ch | http://www.icis.com/Articles/2012/03/09/9539949/s-koreas-samsung-total-brings-forward-solvent-oil-unit.html | CC-MAIN-2015-06 | refinedweb | 103 | 57.1 |
I want to dereference a vector iterator that points to a vector of class objects whose members are pointers. I'm using a compound class (class 1) whose members are pointers to another class (class 2). Class 2 has an int member variable, which I want to retrieve. If I retrieve the variable using array indexing, everything works fine. But when I try to dereference the iterator, I get a compiler error (gcc version 4.0.1 (Apple Inc. build 5465).
Here is the code (example.cpp):
Error message (I've tagged this as code to keep out the smileys caused by punctuation):Error message (I've tagged this as code to keep out the smileys caused by punctuation):Code://example.cpp #include <iostream> #include <vector> using namespace std; class Class1{ public: Class1() : num(10) { /***/ } int num; int get_num() { return num; } }; class Class2{ public: Class2(Class1* C1obj) : C1ptr(C1obj) { /***/ } Class1* C1ptr; Class1* get_C1ptr() { return C1ptr; } }; int main() { int i; Class1* C1p; //pointer to objects of class Class1 Class2* C2p; //pointer to objects of class Class2 vector<Class1> C1v(5); //vector of 5 Class1 objects (calls constructor) vector<Class2*> C2v; //empty vector of pointers to Class2 objects vector<Class2*>::iterator p; //fill empty vector of type Class2 with pointers to objects whose members are of type Class1 for(i = 0; i < 5; i++) { C1p = &C1v[i]; C2p = new Class2(C1p); C2v.push_back(C2p); } //THIS WORKS PROPERLY for(i = 0; i < 5; i++) { cout << data[i]->get_C1ptr()->get_num() << endl; } //THIS CAUSES ERROR for(p = data.begin(); p != data.end(); ++p) { cout << *p->get_C1ptr()->get_num() << endl; } return 0; }
I could use the array indexing, but I want to understand why the iterator doesn't work. What's the proper way to dereference the iterator here?I could use the array indexing, but I want to understand why the iterator doesn't work. What's the proper way to dereference the iterator here?Code:example.cpp: In function ‘int main()’: example.cpp:37: error: request for member ‘get_C1ptr’ in ‘* p. __gnu_cxx::__normal_iterator<_Iterator, _Container>::operator-> [with _Iterator = Class2**, _Container = std::vector<Class2*, std::allocator<Class2*> >]()’, which is of non-class type ‘Class2*’
Mike | https://cboard.cprogramming.com/cplusplus-programming/106600-dereferencing-vector-iterators.html | CC-MAIN-2017-13 | refinedweb | 359 | 53 |
This section shows you how to use functions in an application. Some of the following code examples use ActionScript that resides in the FLA file, and other code examples place functions in a class file for comparison. For more information and examples on using functions in a class file, see Classes. For detailed information and instruction on how to write functions for a class file, see Example: Writing custom classes.
To reduce the amount of work you have to do, as well as the size of your SWF file, try to reuse blocks of code whenever possible. One way you can reuse code is by calling a function multiple times instead of creating different code each time. Functions can be generic pieces of code; you can use the same blocks of code for slightly different purposes in a SWF file. Reusing code lets you create efficient applications and minimizes the ActionScript code that you must write, which reduces development time.
You can create functions in a FLA file or a class file or write ActionScript code that resides in a code-based component. The following examples show you how to create functions on a timeline and in a class file.
The following example shows you how to create and call a function in a FLA file.
function helloWorld() {
    // statements here
    trace("Hello world!");
}
This ActionScript defines the (user-defined, named) function called
helloWorld(). If you test your SWF file at this time, nothing happens. For example, you don't see the
trace statement in the Output panel. To see the
trace statement, you have to call the
helloWorld() function.
helloWorld();
This code calls the
helloWorld() function.
The following text is displayed in the Output panel:
Hello world!
For information on passing values (parameters) to a function, see Passing parameters to a function.
There are several different ways that you can write functions on the main timeline. Most notably, you can use named functions and anonymous functions. For example, you can use the following syntax when you create functions:
function myCircle(radius:Number):Number {
    return (Math.PI * radius * radius);
}
trace(myCircle(5));
Anonymous functions are often more difficult to read. Compare the following code to the preceding code.
var myCircle:Function = function(radius:Number):Number {
    // function block here
    return (Math.PI * radius * radius);
};
trace(myCircle(5));
You can also place functions in class files when you use ActionScript 2.0, as the following example shows:
class Circle {
    public function area(radius:Number):Number {
        return (Math.PI * Math.pow(radius, 2));
    }
    public function perimeter(radius:Number):Number {
        return (2 * Math.PI * radius);
    }
    public function diameter(radius:Number):Number {
        return (radius * 2);
    }
}
For more information on writing functions in a class file, see Classes.
As you can see in the previous code sample, you don't need to place functions on a timeline. The following example also puts functions in a class file. This is a good practice to adopt when you create large applications by using ActionScript 2.0, because it lets you reuse your code easily in several applications. When you want to reuse the functions in other applications, you can import the existing class rather than rewrite the code from scratch or duplicate the functions in the new application.
class Utils {
    public static function randomRange(min:Number, max:Number):Number {
        if (min > max) {
            var temp:Number = min;
            min = max;
            max = temp;
        }
        return (Math.floor(Math.random() * (max - min + 1)) + min);
    }
    public static function arrayMin(num_array:Array):Number {
        if (num_array.length == 0) {
            return Number.NaN;
        }
        num_array.sort(Array.NUMERIC | Array.DESCENDING);
        var min:Number = Number(num_array.pop());
        return min;
    }
    public static function arrayMax(num_array:Array):Number {
        if (num_array.length == 0) {
            return undefined;
        }
        num_array.sort(Array.NUMERIC);
        var max:Number = Number(num_array.pop());
        return max;
    }
}

var num_array:Array = new Array(10, -3, 4, 34, 5);
var month:Number = Utils.randomRange(0, 11);
trace("month: " + month);
var min:Number = Utils.arrayMin(num_array);
trace("min: " + min); // -3
var max:Number = Utils.arrayMax(num_array);
trace("max: " + max); // 34

The Output panel displays something like the following:

month: 7
min: -3
max: 34
Flash CS3 | http://www.adobe.com/livedocs/flash/9.0/main/00000757.html
We use the serial monitor to debug Arduino code, but it can do much more than that. This article covers the main methods for printing (sending) character strings to the serial port: print and println to send plain text, printf to convert, format and combine several variables into a single string, and sprintf and snprintf to store the formatted result in a variable.
How to open the serial port in the Arduino code?
Before being able to send messages, the serial port must be opened in the setup() function by calling Serial.begin() with the communication speed in baud:

void setup() {
  Serial.begin(115200);
}
Print text on the serial port with print or println
The print() function is the base function. It sends (prints) a character or a character string without any particular formatting.
Characters are sent to the serial port one after the other.
To return the cursor to the line (as on a word processor), there are several solutions:
- Use the println() method
- Add \n to the end of the published string
- Combine the two
How to combine multiple variables with print or println
As in most other languages (JavaScript, PHP...), C++ allows combining strings using the + operator. The data must first be converted into a string using the String() function.
Here is an example which combines in the same character string an integer variable, a float variable (decimal number) and a boolean.
int INT_VAR = 32;
float FLOAT_VAR = 32.23;
bool BOOL_VAR = true;
Serial.println("A string that combines an integer " + String(INT_VAR) + ", a decimal " + String(FLOAT_VAR) + " and a boolean " + String(BOOL_VAR));
Combine and format multiple variables into a string with printf
The printf method is the standard C output-formatting function. Espressif has implemented a printf method in the Serial class of the Arduino cores for the ESP32 and ESP8266. This is not the case with the standard Arduino (AVR) framework.
How to use printf on Arduino with PrintEx
If you try to compile code developed for ESP32 or ESP8266 on an Arduino, you will get a compilation error if you write Serial.printf(). This is simply because the method was not developed for the Arduino framework.
There are examples of code all over the internet to do this, but the easiest way is to use Christopher Andrews’ PrintEx library. Its operation is almost as complete as the implementation made by Espressif for the ESP32 and ESP8266.
On PlatformIO, add the following option to your platformio.ini file. On the Arduino IDE, add the library from the manager as usual
lib_deps = chris--a/PrintEx @ ^1.2.0
Then, declare PrintEx at the start of your program
#include <PrintEx.h>
Finally, just extend (add the functions) to the Print class of the Arduino framework
void setup() {
  Serial.begin(115200);
  PrintEx serial = Serial;
  serial.printf("Hello");
}
And now we can use the function like this
serial.printf("atmospheric pressure is %u hPa", pressure);
How to use printf() from ESP32, ESP8266 or PrintEx framework
The printf() method both formats one or more values and combines them into a single string before sending it to the serial port. To do this, you mark the position of each value in the string using the % character.
Serial.printf("atmospheric pressure is %u hPa", pressure);
The % specifier argument will then be replaced with the corresponding formatted value.
To format the output string as desired, it is possible to add additional options. The options are detailed in the following paragraphs.
%[flags][width][.precision][length]specifier
Here is an example that formats a decimal number with a single digit after the decimal point. The rounding is automatic.
%.1f
You can combine as many variables as you want. You just have to pass the variables in the same order as in the output string.
Available specifiers

To combine variables of different types in the output string, you must tell printf the type of each value with a conversion specifier. The most common ones are:

- %d or %i - signed integer
- %u - unsigned integer
- %f - decimal number (float or double)
- %s - character string
- %c - single character
- %x / %X - hexadecimal (lower / upper case)

Note that some specifiers of the standard implementation are not supported by PrintEx.
Flag option
The flag allows you to add characters before the value:
- Force the addition of a sign (+ or -)
- Insert space
- Fill with zeros
- Fill with 0x for hexadecimal numbers
Option width
Specifies the minimum number of characters to print; if the value is shorter, it is padded with spaces (or with zeros when the 0 flag is used).
Precision option
Allows you to define the precision of decimal numbers (the number of digits printed after the decimal point) or the maximum number of characters to print if the value is a string.
Macro F
The F() macro stores a string literal in flash memory instead of RAM, which prevents the microcontroller's RAM from filling up when, for example, many messages are sent to the serial port:

Serial.println(F("Hello"));
Store the result of a string formatted with sprintf or snprintf in a variable (only for ESP32 or ESP8266)
The sprintf() method allows you to store the result of the conversion in a variable of type char[] .
The sprintf() command requires initializing the output variable with a size at least equal to the length of the formatted string plus one byte for the terminating null character.
char output_sprintf[8];
sprintf(output_sprintf, "%.1f°C", temp);
Serial.printf("Formatted value saved in the variable output %s \n", output_sprintf);

(The buffer is 8 bytes because "18.7°C" takes 7 bytes in UTF-8 — the ° character alone takes two — plus one byte for the null terminator.)
To combine several variables in the same string, you will need to use the snprintf() function which is written in a similar way
char output_snprintf[60];
snprintf(output_snprintf, sizeof(output_snprintf), "Temperature is %.1f°C, humidity is %.1f%% \n", temp, hum);
Serial.printf("output_snprintf = %s \n", output_snprintf);
Print the content of a byte buffer with write()
You may need to use an array of bytes. This array could, for example, contain the numeric code of each character of a String (character string).

To print an array of bytes to the serial port, the print() and println() functions are not suitable: given a byte, they print its numeric value rather than the character it encodes. The write() function sends the raw byte itself.
To print the contents of a buffer with the write() function, all you have to do is iterate through it with a for() loop.
String stringtocopy = "Arduino";
// Measure string length
int buffer_size = stringtocopy.length();
Serial.printf("Buffer size: %u\n", buffer_size);
// Create a buffer one byte larger than the string so that getBytes()
// has room for the terminating NULL character
byte buffer[buffer_size + 1];
// Copy the contents of the string using the getBytes function
stringtocopy.getBytes(buffer, buffer_size + 1);
Serial.println("Print buffer with write() function");
// Each cell of the buffer is printed one by one using a for loop
for (int i = 0; i < buffer_size; i++) {
  Serial.write(buffer[i]);
}
Upload the Arduino code of the examples
Create a new sketch on the Arduino or PlatformIO IDE and upload the code to test all the examples presented previously.
#include "Arduino.h"
#include <PrintEx.h>

#define SERIAL_SPEED 115200

void setup() {
  Serial.begin(SERIAL_SPEED);
  PrintEx serial = Serial; // Wrap the Serial object in a PrintEx interface.

  Serial.println("\n=== print and println ====");
  Serial.print("A");
  Serial.print("B");
  Serial.print("C");
  Serial.println("\n=======");
  Serial.println("A demo with ");
  Serial.print("A line break ");
  Serial.println("\n---- Or -----");
  Serial.print("A demo with \n");
  Serial.print("a line break");

  Serial.println("\n=== Concatenate with println ====");
  int INT_VAR = 32;
  float FLOAT_VAR = 32.23;
  bool BOOL_VAR = true;
  Serial.println("A string that combines integer " + String(INT_VAR) + ", decimal " + String(FLOAT_VAR) + " and boolean " + String(BOOL_VAR));

  Serial.println("\n=== printf ====");
  unsigned int x = 0x999b989;
  byte b = 120;
  word w = 63450;
  signed long l = 2147483646; // signed long: -2,147,483,648 to 2,147,483,647
  char c = 65; // A
  char s[] = "a character string";
  float f = 99.57;
  double fdbl = 99.5769;
#ifdef __AVR__
  serial.printf("Hexa %x %X \n", x, x);
  serial.printf("Byte %u \n", b);
  serial.printf("Word %u \n", w);
  serial.printf("Long %ld \n", l);
  serial.printf("Char %c \n", c);
  serial.printf("%s \n", s);
  serial.printf("Float variable %f | %.2f \n", f, f);
  serial.printf("Double variable %f | %.2f \n", fdbl, fdbl);
#else
  printf("Hexa %x %X \n", x, x);
  printf("Byte %u \n", b);
  printf("Word %u \n", w);
  printf("Long %ld \n", l);
  printf("Char %c \n", c);
  printf("%s \n", s);
  printf("Float variable %f | %.2f \n", f, f);
  printf("Double variable %f | %.2f \n", fdbl, fdbl);
#endif

  Serial.println("\n=== FAKE BME280 ====");
  double temp = 18.68;
  double hum = 67.98;
#ifdef __AVR__
  serial.printf("Temperature is %.2f°C, humidity is %.1f%% \n", temp, hum);
#else
  printf("Temperature is %.1f°C, humidity is %.1f%% \n", temp, hum);
#endif

#ifdef __AVR__
  serial.printf("Temperature %.1f°C, humidity is %.1f%% \n", temp, hum);
#else
  char output_sprintf[8];
  sprintf(output_sprintf, "%.1f°C", temp);
  printf("output_sprintf = %s \n", output_sprintf);
#endif

#ifndef __AVR__
  char output_snprintf[60];
  snprintf(output_snprintf, sizeof(output_snprintf), "Temperature is %.1f°C, humidity is %.1f%% \n", temp, hum);
  printf("output_snprintf = %s \n", output_snprintf);
#endif

  Serial.println("\n=== write() buffer to Serial ====");
  String stringtocopy = "Arduino";
  int buffer_size = stringtocopy.length();
#ifdef __AVR__
  serial.printf("Buffer size: %u\n", buffer_size);
#else
  printf("Buffer size: %u\n", buffer_size);
#endif
  byte buffer[buffer_size + 1];
  stringtocopy.getBytes(buffer, buffer_size + 1);
  Serial.println("Print buffer with write() function");
  for (int i = 0; i < buffer_size; i++) {
    Serial.write(buffer[i]);
  }
}

void loop() {
}
[env:uno]
platform = atmelavr
board = uno
framework = arduino
monitor_speed = 115200
lib_deps = chris--a/PrintEx @ ^1.2.0

[env:lolin_d32]
platform = espressif32
board = lolin_d32
framework = arduino
monitor_speed = 115200
Open the serial monitor to view the operation of the functions. Execution log retrieved from an ESP32.
=== print and println ====
ABC
=======
A demo with 
a line break
---- Or -----
A demo with 
a line break
=== Concatenate with println ====
A string that combines integer 32, decimal 32.23 and boolean 1
=== printf ====
Hexa 999b989 999B989
Byte 120
Word 63450
Long 2147483646
Char A
a character string
Float variable 99.570000 | 99.57
Double variable 99.576900 | 99.58
=== FAKE BME280 ====
Temperature is 18.7°C, humidity is 68.0%
=== write() buffer to Serial ====
Buffer size: 7
Print buffer with write() function
Arduino
Updates
2020/12/11 Add PrintEx library for Arduino (AVR Platform)
2020/10/22

https://diyprojects.io/getting-started-arduino-functions-for-combining-formatting-variables-to-the-serial-port-esp32-esp8266-compatible/
7 May 12:04 2004
Re: GString and null
<jastrachan@...>
2004-05-07 10:04:37 GMT
On 7 May 2004, at 09:31, Igor I. Nuzhnov wrote:

> Hello, all
>
> For two years we use Velocity in our projects.
> But there are so many bugs in Velocity!!!
> So, we decide to write library with Velocity interface that translates
> velocity templates into groovy source and than compiles that source
> and run it.

Coolbeans! Hey do you fancy wrapping up your velocity-style template engine as a pluggable groovy template engine? we've a default implementation which uses ASP / JSP style notation <% %> etc. Having a #if () velocity style would rock!

> Now almost velocity syntax work, but we have some problems:
>
> In velocity there is an operator $!{expression}
> If expression evaluated as null than no text inserted into result stream
> But in groovy result of ${expression} is "null" if expression is null.
>
> How can I achieve this velocity functionality?

When you parse the template you could translate $!{expr} to be ${nonNullExpressionsAsText(expr)} and then have a method

nonNullExpressionsAsText(value) {
    return (value != null) ? value.toString() : ""
}

Incidentally, several people have brought up the issue when using expressions inside strings inside Groovy itself, that handling nulls could be better (e.g. often people want to output nothing rather than 'null' for null expressions). So it might well be worth adding $!{expr} notation to groovy itself as it seems a neat idea.

James

-------
http://permalink.gmane.org/gmane.comp.lang.groovy.user/1308
Starting with Eclipse Ditto milestone 0.1.0-M3 the Ditto team provides a sandbox which may be used by everyone wanting to try out Ditto without starting it locally.
Instructions
As Ditto makes use of OAuth2.0 in order to authenticate users, the sandbox contains a "sign in with Google" functionality. Ditto accepts the id_token which is issued by Google as Bearer token on authentication.
HTTP API documentation
The online HTTP API documentation of the sandbox implements the OAuth2.0 "authorization code" flow.

Simply click the green Authorize button, check the checkbox openid and click the Authorize button. Your browser will ask you if the Ditto sandbox may access your Google identity, which you should acknowledge.

Afterwards you should be authenticated with your Google user (and therefore your Google ID).
You can try out the API now. For example, expand the PUT /things/{thingId} item in order to create a new Thing, a digital twin so to say.

Scroll down to the parameters and enter the required ones (in this case the thingId), for example:

org.eclipse.ditto.tjaeckle:my-first-thing

The ID must contain a namespace (in Java package notation) followed by a : and an arbitrary string afterwards.

The body must be a JSON object, at least an empty one: {}.

Or it can be filled with arbitrary attributes and/or features, e.g.:
{
  "attributes": {
    "someAttr": 32,
    "manufacturer": "ACME corp"
  },
  "features": {
    "heating-no1": {
      "properties": {
        "connected": true,
        "complexProperty": {
          "street": "my street",
          "house no": 42
        }
      }
    },
    "switchable": {
      "properties": {
        "on": true,
        "lastToggled": "2017-11-15T18:21Z"
      }
    }
  }
}
Programmatically access the HTTP API

If you want to programmatically access the sandbox (e.g. from a script running on a Raspberry Pi), we currently have to disappoint you. As there is no possibility to obtain a Google JWT with plain username/password, and we currently have no other authentication provider configured in the sandbox, there is no way to authenticate a script.

https://www.eclipse.org/ditto/sandbox.html
Hello,
I have a quick question:
Is there any way I can detect whether a Word document I have opened using Words.Document() is a dotx versus a docx file without knowing the file extension (i.e. opening via Stream)?
On the save side, setting the property IsTemplate to true allows me to save as dotx, but this property is not populated when the file is loaded as documented.
If there is no way to determine whether a loaded Stream is a template file, any plans on adding this in a future release?
Thanks in advance,
Scott
Hi
Thanks for your request. Currently there is no way to detect whether the loaded document is a template. We will consider adding such a feature. Your request has been linked to the appropriate issue. You will be notified as soon as it is resolved.
Best regards.
The issues you have found earlier (filed as 14547) have been fixed in this update.
This message was posted using Notification2Forum from Downloads module by aspose.notifier.
https://forum.aspose.com/t/detecting-whether-a-word-document-is-a-template-file/76315
First, we will modify the remote controlled car so that it can be interfaced with the Phidget controller. The easiest way to accomplish this is to modify the hand-held controlling unit to receive input from the Phidget controller instead of a human driver.
Start by unscrewing all screws on the back of the unit and then open up the controller.
If you have chosen the right type of controller, that is, one with digital inputs, you will see that the joysticks simply push down on contacts on the board, closing the circuit for forward/backward/left/right. We will be wiring these contacts to our Phidget board so when the relays/IO ports open and close, they will be closing and opening the circuits.
To do this, cut 8 equal lengths of wire of about 6 inches. Strip off a bit of insulation from each end.
Next, solder one end of each wire to each contact of the controller. You will need to connect one wire to both the ground and the active points of the contact as shown in the picture.
When all wires are soldered, insert the opposite ends into the screw terminals of the Phidget interface board.
Phidget 0/0/4 Interface Kit
You’ll notice on the Phidget board that there are 3 terminals per relay, labeled NO, XC, and NC, where “X” is a number from 0 to 3. These stand for “Normally Open”, “Relay X Common”, and “Normally Closed”. Since our circuits are normally open and closed when you press the joystick, you will want to connect one wire from each contact to the NO and the XC terminals of each relay. Wire the relays as follows:
0 – Forward
1 – Backward
2 – Left
3 - Right
Phidget 0/16/16 Interface Kit
This board is divided into two sections and is labeled as such: Inputs and Outputs. On the Output side, there are 16 outputs, numbered 0 through 15. There are also 8 common ground terminals labeled "G". As with the above board, you will want to connect one wire from each contact to a numbered output, and one to a common ground contact. Wire them using the same numbering scheme as above.
If you decide to add the network camera to the project continue reading. If you choose to skip the camera, move on to the next section.
The camera will need to be battery powered. The camera I used in this project is the Airlink101 AIC-250W. This camera requires 5V to operate. Getting 5V out of a series of standard batteries isn’t easy without a voltage regulator, so I cheated and decided to try pumping 6V (4 AA batteries at 1.5V each) to the device. I have run the cam for over 2 hours on one set of batteries with no ill effects.
To easily connect the batteries to the camera, I used a 4 AA battery holder and a power connector that matched the size of the connector on the included AC power supply. As stated above, the size required is a 5.5mm outer diameter with a 2.5mm inner diameter.
On the back of the camera, above the power connector, there is a symbol showing the positive and negative connections for the power connector.
The symbol shows that the negative terminal is the outside ring and the positive terminal is the inside. With that in mind, we will solder the black wire of the holder to the outside connector, and the red to the ring connector. The wires from the battery holder can be easily soldered onto the power connector as shown.
When all is said and done, you will have a battery holder with a DC power end that can be plugged directly into the camera. Pop 4 AA batteries into the holder and you should see the Power light on the camera turn on. I chose to mount the battery pack to the top of the camera using some double-sided tape as shown.
Finally, mount the camera to the vehicle with some double-sided tape or velcro.
Before we get started, it is essential that you have some familiarity with Microsoft Robotics Studio. Please read through the included help file as well as the base tutorials. I have also included several links at the bottom of this article with even more information on the internals of MSRS. The documentation and tutorials will be your best guidance for understanding what we are about to build.
At its core, the MSRS runtime aids in developing applications where concurrency and performance are key issues. In a robot with multiple sensors whose input must be handled concurrently, MSRS excels. While our remote controlled car may not fit into that mold exactly, it is a great way to delve into Robotics Studio and learn about what it has to offer.

To begin, open the MSRS command prompt and use the dssnewservice tool to create a new service named "RCCar":
C#:
dssnewservice /service:RCCar
VB
dssnewservice /language:VB /service:RCCar
This will generate a folder with several items, including project and solution files. Note that it will lower-case RCCar to Rccar in various places. This is normal.
Open the generated RCCar.sln file in Microsoft C# 2005 Express Edition or Microsoft Visual Basic 2005 Express Edition. You will see that several files were generated.
RCCar.cs/.vb – Contains actual implementation of the RCCar service
RCCarTypes.cs/.vb – Contains list of messages and default state object
RCCar.manifest.xml – A description of the services started when the application is run
Take some time to review the base generated code. This will help as we move forward.
Open the RCCar.sln file to start working. The first thing we will need to do is access the service which controls our Phidget Interface Kit. Robotics Studio includes a built-in service for this device. To access it, first, set a reference to the PhidgetBoards.Y2006.M08.proxy namespace. This can be done by right-clicking on the project name in the Solution Explorer and selecting Add Reference from the context menu. When the Add Reference dialog appears, select PhidgetBoards.Y2006.M08.proxy and click the OK button.
To use the contents of this library in our code, we need to import the library. This can be done by adding the following line to the Rccar class.
C#
using phidgetinterfacekit = Phidgets.Robotics.Services.PhidgetInterfaceKitBoards.Proxy;
using phidgetcommon = Phidgets.Robotics.Services.Proxy;
VB
Imports phidgetinterfacekit = Phidgets.Robotics.Services.PhidgetInterfaceKitBoards.Proxy
Imports phidgetcommon = Phidgets.Robotics.Services.Proxy
Next, we need to set up a partner relationship with the PhidgetInterfaceKit service and declare a port on which to communicate with it. At the top of the RccarService class implementation, declare a partner for the PhidgetInterfaceKit service together with an operations port named _ikPort; the code below uses this port to subscribe to the service and to send it messages.

With the partnership in place, we must subscribe to the PhidgetInterfaceKit service. We will be subscribing to additional services later on, so create a SubscribeToServices method in the RccarService class and then call this method from the bottom of the Start method:
private void SubscribeToServices()
{
    _ikPort.SelectiveSubscribe(null, new phidgetinterfacekit.PhidgetInterfaceKitBoardsOperations());
}
Private Sub SubscribeToServices()
    _ikPort.SelectiveSubscribe(Nothing, New phidgetinterfacekit.PhidgetInterfaceKitBoardsOperations())
End Sub
This will subscribe to the PhidgetInterfaceKit service without requesting any notifications on events, as they will not be needed for this project.
Next, create an enumeration above the definition for the RccarService class for the possible directions:
public enum Direction
{
    Forward = 0,
    Backward,
    Left,
    Right,
    None
}

VB
Public Enum Direction
    Forward = 0
    Backward
    Left
    Right
    None
End Enum
The value of each item in the enumeration corresponds to the relay used for that direction, as described above. Next, write a method named SetOutput on the RccarService class that takes a Direction and a boolean; it posts a message through _ikPort telling the Phidget Interface Kit whether the corresponding relay should be opened or closed.
Next, we will need a user interface to interact with to control the robot. Right-click on the RCCar project and select Add -> Windows Form from the context menu. Name this form RemoteControlForm.cs/.vb.
<NOTE FOR VB USERS>
Due to an oddity in the dssproxy application which runs during the compilation process, one addition must be made to the form created in the Visual Basic project. To make this change, first click on the Show all files button on the Solution Explorer toolbar as shown:
Once that is done, you should see a "+" next to the RemoteControlForm.vb file. First, right-click on the RemoteControlForm.vb file and select View Code. Add the following namespace definition around the the class definition as shown:
Namespace Robotics.Rccarvb
    Public Class RemoteControlForm

    End Class
End Namespace
Next, open the RemoteControlForm.Designer.vb file using the same method above and add the same namespace definition around the class implementation. Once this is done, verify that the project compile and continue on.
</NOTE FOR VB USERS>
For the moment, we will setup the form with four buttons (up, down, left and right) and control the vehicle with those. Drag four buttons onto the form and label them appropriately. You can easily get arrow glyphs on the buttons by using the Marlett font and setting the Text property of each button to one of the following:
3 = Left
4 = Right
5 = Up
6 = Down
Name the buttons btnForward, btnBackward, btnLeft, and btnRight. In an effort to make things easy for determining which button is pressed, assign the values 0, 1, 2, and 3 to the Tag property of each button to match the enumeration created above. So, the Tag property of the forward button would be 0, backward would be 1, and so on. When you are done, you will have a form similar to the following:
This form will need to call the SetOutput method we created in our service above. Therefore, it will need an instance of the RccarService available to it. To easily accomplish this, the constructor for the form will be written to accept the current instance of the service and store it in a member variable for later use. Add the following code to RemoteControlForm.cs/.vb to facilitate this:
C#
RccarService _service = null;

public RemoteControlForm(RccarService service)
{
    InitializeComponent();
    _service = service;
}
Private _service As RccarService

Public Sub New(ByVal service As RccarService)
    ' This call is required by the Windows Form Designer.
    InitializeComponent()
    ' Add any initialization after the InitializeComponent() call.
    _service = service
End Sub
To properly drive the car, we need to listen for both the button being clicked (close the circuit) and the button being released (open the circuit). This can be done by hooking the MouseDown and MouseUp events on each button. Open RemoteControlForm.cs/.vb in the designer. Then, choose the events list in the properties window by clicking on the “lightning bolt” icon. This will toggle the event list into view.
Highlight all of the buttons on the form. Then, enter btn_MouseDown into the MouseDown event and btn_MouseUp in the MouseUp event. The IDE will automatically generate method stubs for each of those events.
Next, add the code for the MouseDown and MouseUp events. If you are using VB, you will need to import the System.Windows.Forms namespace into the RemoteControlForm code. The Tag property created above will be used from each button to determine which was clicked:
private void btn_MouseDown(object sender, MouseEventArgs e)
{
    _service.SetOutput((Direction)int.Parse((sender as Button).Tag.ToString()), true);
}

private void btn_MouseUp(object sender, MouseEventArgs e)
{
    _service.SetOutput((Direction)int.Parse((sender as Button).Tag.ToString()), false);
}
Imports System.Windows.Forms

...

Private Sub btn_MouseDown(ByVal sender As System.Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles btnRight.MouseDown, btnLeft.MouseDown, btnForward.MouseDown, btnBackward.MouseDown
    _service.SetOutput(CType(Integer.Parse(CType(sender, Button).Tag.ToString()), Direction), True)
End Sub

Private Sub btn_MouseUp(ByVal sender As System.Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles btnRight.MouseUp, btnLeft.MouseUp, btnForward.MouseUp, btnBackward.MouseUp
    _service.SetOutput(CType(Integer.Parse(CType(sender, Button).Tag.ToString()), Direction), False)
End Sub
The events simply look at the button being sent in as the sender parameter, convert the Tag to an entry in the Direction enumeration, and set the associated relay using the SetOutput method.
Now that our base user interface is setup, we need to launch the form from our service. To do this, we need to add reference to the Ccr.Adapters.WinForms namespace. This can be done in the same way as we added the reference to the Phidget library. Once that is done, back in Rccar.cs/.vb, bring the WinForms library into the class by adding the following using statement:
using Microsoft.Ccr.Adapters.WinForms;
Imports Microsoft.Ccr.Adapters.WinForms
To actually launch the form, add the following line to the bottom of the Start method, below the previous call to SubscribeToServices:
WinFormsServicePort.Post(new RunForm(StartForm));
WinFormsServicePort.Post(New RunForm(AddressOf StartForm))
This will launch the method StartForm which actually creates the form. Add a member variable to the class to store the instance of the created form and add the StartMethod form as follows:
RemoteControlForm _remoteForm = null;
public System.Windows.Forms.Form StartForm()
{
    _remoteForm = new RemoteControlForm(this);
    return _remoteForm;
}
Private _remoteForm As RemoteControlForm

Private Function StartForm() As System.Windows.Forms.Form
    _remoteForm = New RemoteControlForm(Me)
    Return _remoteForm
End Function
This will create a new instance of the form, assign it to the local member variable, and return the instance back to the calling method.
At this point, everything should be in place to actually run the project. Make sure the Phidget board is connected to the PC via the USB cable and press the F5 key to build and execute the project. If all went to plan, you should see the UI form start up and you should be able to click the arrow buttons and hear the relays click on the 0/0/4 board, or the lights flicker on the 0/16/16 board. If the car is turned on and the remote is turned on, the car will drive around based on the buttons you click.
We can easily add support to drive the car with the keyboard's arrow keys. To use the arrows, we cannot rely on the form's native KeyDown event because the arrow keys are normally used to cycle through the active controls of a form. Therefore, we need to trap the key down events earlier in the message chain. To do this, we must override the ProcessCmdKey method of the Form class. Additionally, the form's KeyPreview property must be set to true so the form sees key events before the focused control. Add the following code to the RemoteControlForm.cs/.vb file:
const int WM_KEYDOWN = 0x100;

protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
{
    if(msg.Msg == WM_KEYDOWN)
    {
        switch(keyData)
        {
            case Keys.Up:
                _service.SetOutput(Direction.Forward, true);
                break;
            case Keys.Down:
                _service.SetOutput(Direction.Backward, true);
                break;
            case Keys.Left:
                _service.SetOutput(Direction.Left, true);
                break;
            case Keys.Right:
                _service.SetOutput(Direction.Right, true);
                break;
        }
        return true;
    }
    return false;
}
Dim WM_KEYDOWN As Integer = 256

Protected Overrides Function ProcessCmdKey(ByRef msg As Message, ByVal keyData As Keys) As Boolean
    If (msg.Msg = WM_KEYDOWN) Then
        Select Case (keyData)
            Case Keys.Up
                _service.SetOutput(Direction.Forward, True)
            Case Keys.Down
                _service.SetOutput(Direction.Backward, True)
            Case Keys.Left
                _service.SetOutput(Direction.Left, True)
            Case Keys.Right
                _service.SetOutput(Direction.Right, True)
        End Select
        Return True
    End If
    Return False
End Function
This looks much like the button processors from above.
Just like the buttons, we also need a KeyUp event. Go back to the designer for the form and double-click on the KeyUp event in the properties pane, just like we did with the mouse button events. This will generate an event stub for the KeyUp event. Fill in the method with the following:
private void RemoteControlForm_KeyUp(object sender, KeyEventArgs e)
{
    switch(e.KeyCode)
    {
        case Keys.Up:
            _service.SetOutput(Direction.Forward, false);
            break;
        case Keys.Down:
            _service.SetOutput(Direction.Backward, false);
            break;
        case Keys.Left:
            _service.SetOutput(Direction.Left, false);
            break;
        case Keys.Right:
            _service.SetOutput(Direction.Right, false);
            break;
    }
}
Private Sub RemoteControlForm_KeyUp(ByVal sender As System.Object, ByVal e As System.Windows.Forms.KeyEventArgs) _
        Handles MyBase.KeyUp
    Select Case (e.KeyCode)
        Case Keys.Up
            _service.SetOutput(Direction.Forward, False)
        Case Keys.Down
            _service.SetOutput(Direction.Backward, False)
        Case Keys.Left
            _service.SetOutput(Direction.Left, False)
        Case Keys.Right
            _service.SetOutput(Direction.Right, False)
    End Select
End Sub
Now, if you execute the application again, you will be able to drive the car around just by pressing the arrow keys on the keyboard.
If you own an Xbox 360 Controller for Windows, or a similar gamepad device, we can use it to drive our vehicle around. Ensure that the Xbox 360 Controller for Windows drivers are installed and the controller is working normally.
To use the gamepad, we first need to partner with the gamepad service, much like we did with the PhidgetInterfaceKit service.
First, set a reference to the XInputGamepad.Y2006.M09.proxy namespace. Bring the namespace into the RCCar.cs/.vb file by adding the following using statement, and then partner with the XInputGamepad service like we did with the Phidget service above:
[Partner("XInputGamepad", Contract = gamepad.Contract.Identifier,
    CreationPolicy = PartnerCreationPolicy.UseExistingOrCreate)]
private gamepad.XInputGamepadOperations _gamepadPort = new gamepad.XInputGamepadOperations();
<Partner("XInputGamepad", Contract:=gamepad.Contract.Identifier, _
    CreationPolicy:=PartnerCreationPolicy.UseExistingOrCreate)> _
Private _gamepadPort As New gamepad.XInputGamepadOperations()
With that in place, we now need to subscribe to the XInputGamepad service to receive notifications when the state of the controller changes. This code will be added to our previous SubscribeToServices method:
private void SubscribeToServices()
{
    _ikPort.SelectiveSubscribe(null, new phidgetinterfacekit.PhidgetInterfaceKitBoardsOperations());

    gamepad.XInputGamepadOperations _gamepadNotify = new gamepad.XInputGamepadOperations();
    Activate(Arbiter.Receive<gamepad.DPadChanged>(true, _gamepadNotify, GamepadDpadHandler));
    _gamepadPort.Subscribe(_gamepadNotify);
}
Private Sub SubscribeToServices()
    _ikPort.SelectiveSubscribe(Nothing, New phidgetinterfacekit.PhidgetInterfaceKitBoardsOperations())

    Dim _gamepadNotify As gamepad.XInputGamepadOperations = New gamepad.XInputGamepadOperations
    Activate(Arbiter.Receive(Of gamepad.DPadChanged)(True, _gamepadNotify, AddressOf GamepadDpadHandler))
    _gamepadPort.Subscribe(_gamepadNotify)
End Sub
This method creates a port named _gamepadNotify which will receive the notifications from the gamepad service. The Activate method sets up relationships between communication ports and arbiters. An arbiter is simply an object which automatically manages concurrency. So, the second line sets up a relationship between the _gamepadNotify port and a method we will implement, GamepadDpadHandler. This relationship is set up on the DPadChanged message.
The last line in the method sets up a subscription. A subscription is required to receive the messages on the port specified. You may wish to wrap the call to Subscribe in a call to Arbiter.Choice. This will allow you to determine whether the subscription was successful and display a message either way. The source package for this article contains the code showing how this is done; for simplicity, it has been removed from the listing above.
Next, we must actually implement the GamepadDpadHandler method. It looks like the following snippet:
public void GamepadDpadHandler(gamepad.DPadChanged msg)
{
    gamepad.DPad dpad = msg.Body;

    SetOutput(Direction.Forward, dpad.Up);
    SetOutput(Direction.Backward, dpad.Down);
    SetOutput(Direction.Left, dpad.Left);
    SetOutput(Direction.Right, dpad.Right);
}
Private Sub GamepadDpadHandler(ByVal msg As gamepad.DPadChanged)
    Dim dpad As gamepad.DPad = msg.Body

    SetOutput(Direction.Forward, dpad.Up)
    SetOutput(Direction.Backward, dpad.Down)
    SetOutput(Direction.Left, dpad.Left)
    SetOutput(Direction.Right, dpad.Right)
End Sub
This code simply maps the current state of the d-pad to the outputs on the Phidget interface kit. We can also use the thumbsticks of the Xbox 360 controller to drive the vehicle. You will find the code for this in the source package linked above.
With this code in place, we can once again press F5 to build and execute the project. Ensure the Phidget board and the controller are connected. When the application runs, you will be able to drive the car with the controller's d-pad.
Note: This XInputGamepad service specifically works with devices that are defined as gamepads. If you have a true analog joystick, like a flight stick, you will need to replace the code above with a partnership with the GameController service instead of the XInputGamepad service and propagate accordingly.
Now that the car is up and running, we can add the code required to drive the wireless camera.
Unfortunately, I have been unsuccessful in getting the following code to work in Visual Basic due to its lack of custom iterators (i.e. the lack of the yield keyword). I am currently pursuing a workaround for this, but until one is found, the following section will consist of C# code only. I will update this portion of the article with VB code if/when a solution is found.
I decided to create a brand new service to operate the wireless camera so it could be later used in other projects very easily. In this section, we will learn how to create a brand new service and use it to post messages to other services that are listening.
As with the RCCar service, you will use the dssnewservice.exe command to generate the template for the new service. Open the Robotics Studio Command Prompt as before and run the following:
dssnewservice /service:NetCam
This will, as before, generate a directory and code for a new service named NetCam.
Back in Visual C# 2005 Express Edition, right-click on the solution in the Solution Explorer, select Add -> Existing Project, and navigate to the generated NetCam.csproj file. This will bring the project into our existing solution for easy editing and debugging.
The first thing we need to do is add a reference to the System.Drawing assembly. This can be done as described above.
Next, switch to the NetCamTypes.cs file. We will be using the NetCamState to store the most recently captured image from the camera. To do this, we will need to bring the System.Drawing reference into the class and modify the NetCamState object to look like the following:
using System.Drawing;
...
[DataContract()]
public class NetCamState
{
    private Bitmap _image;
    private int _size;
    private DateTime _timeStamp;

    public Bitmap Image
    {
        get { return _image; }
        set { _image = value; }
    }

    [DataMember]
    public Size Size
    {
        get
        {
            if (_image != null)
            {
                return _image.Size;
            }
            return Size.Empty;
        }
        set { return; }
    }

    [DataMember]
    public DateTime TimeStamp
    {
        get { return _timeStamp; }
        set { _timeStamp = value; }
    }
}
The DataContract() attribute states that the NetCamState object is serializable via XML. The DataMember attribute must be added to every property and field that you would like to be serialized. A Bitmap object cannot be directly serialized, so we will be working around that shortly.
The network camera I chose for this project streams its live video feed via MotionJPEG, or MJPEG. This is a quirky little non-standard that basically sends a never-ending stream of JPG images down the pipe. When displayed fast enough, it looks like smooth animation. Think of an electronic flipbook and you’ve got the right idea.
The net cam expects a web request to its internal web server and then responds with a multipart response which is a stream of JPEG images, each prefaced with some boundary tags and information about the image ahead. A standard GET request to the camera’s IP address will result in something that looks like the following:
HTTP/1.0 200 OK
Server: Camera Web Server/1.0
Auther: Steven Wu
MIME-version: 1.0
Cache-Control: no-cache
Content-Type: multipart/x-mixed-replace;boundary=--video boundary--

--video boundary--
Content-length: 9052
Date: 2006-10-01 22:21:47
IO_00000000_PT_000_114
Content-type: image/jpeg

<binary data of length 9052 bytes>

--video boundary--
Content-length: 9375
Date: 2006-10-01 22:21:48
IO_00000000_PT_000_114
Content-type: image/jpeg

<binary data of length 9375 bytes>
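Before wiring this into the service, it can help to see the header parsing in isolation. The following is a small illustrative sketch (not taken from the article's code; it works on an in-memory string rather than the live camera stream) that pulls the Content-length value out of one boundary block:

```csharp
using System;

class MjpegHeaderDemo
{
    // Extracts the integer following "Content-length:" from one boundary block.
    // The real service reads the same field directly off the response stream.
    public static int ParseContentLength(string boundaryBlock)
    {
        const string field = "Content-length:";
        int start = boundaryBlock.IndexOf(field) + field.Length;
        int end = boundaryBlock.IndexOfAny(new[] { '\r', '\n' }, start);
        return int.Parse(boundaryBlock.Substring(start, end - start).Trim());
    }

    static void Main()
    {
        string block = "--video boundary--\r\n" +
                       "Content-length: 9052\r\n" +
                       "Date: 2006-10-01 22:21:47\r\n" +
                       "Content-type: image/jpeg\r\n\r\n";
        Console.WriteLine(ParseContentLength(block));  // prints 9052
    }
}
```

Knowing the byte count ahead of time is what lets a reader consume exactly one JPEG frame from the stream before looking for the next boundary.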
Our NetCam service is going to connect to the network camera’s MJPEG source URL and loop through the response data, pulling out each frame as it is downloaded. When a partnered service wishes to request the current frame, it will do so by sending the QueryFrame message that we are about to define.
Open the NetCamTypes.cs file and you will see class definitions for NetCamOperations, Get, and Replace. Here we will add our new message.
Add a class named QueryFrame to the file with the rest of the request types. This will inherit from the base Query object and define its request object, QueryFrameRequest, and its response, one of two types: a QueryFrameResponse object, or a Fault in case of an error. Next, we need to add this QueryFrame message to the list of valid messages that NetCamOperations will handle. This is done by changing the class definition as shown. Finally, we need to create the definitions for the QueryFrameRequest and QueryFrameResponse objects.
public class NetCamOperations : PortSet<DsspDefaultLookup, DsspDefaultDrop, Get, QueryFrame>
{
}

public class QueryFrame : Query<QueryFrameRequest, PortSet<QueryFrameResponse, Fault>>
{
}

[DataContract]
public class QueryFrameRequest
{
}

[DataContract]
public class QueryFrameResponse
{
    [DataMember]
    public Size Size;

    [DataMember]
    public byte[] FrameBuffer;

    [DataMember]
    public DateTime TimeStamp;
}
As you can see, for right now, the QueryFrameRequest object remains empty, while the QueryFrameResponse contains a handful of properties which define the size of the image, a byte buffer for the image data itself, and a timestamp for the time the image was grabbed.
Now we need to handle the camera data itself and provide the current frame to any service requesting it. This will be accomplished by creating a processing thread which parses each image from the webcam into a JPEG image, stores it in memory to return in response to requests, and repeats. Internally, we will parse a JPEG, post a message to an internal port stating that the current frame is ready, store the frame, and start over. Since many requests can be received concurrently, we will use the built-in Arbiter class to manage incoming requests while simultaneously parsing the data from the webcam, ensuring the service never gets into a situation where frame data is being manipulated while a request is being processed.
First, the internal message port will be created to handle notifications that a complete frame is available. The port will be named FramePort and designed to handle messages of type Frame. The implementation of these types can be added directly to the NetCam.cs file, either above or below the implementation of the NetCamService class. The Frame class uses types from the System.Drawing library, so ensure that the library is both referenced and imported.
using System.Drawing;
using System.Drawing.Imaging;
...
internal class Frame
{
    private Bitmap _image;
    private Size _size;
    private DateTime _timeStamp;

    public Bitmap Image
    {
        get { return _image; }
        set { _image = value; }
    }

    public Size Size
    {
        get
        {
            if (_image != null)
            {
                return _image.Size;
            }
            return Size.Empty;
        }
        set { return; }
    }

    public DateTime TimeStamp
    {
        get { return _timeStamp; }
        set { _timeStamp = value; }
    }
}

internal class FramePort : Port<Frame> { }
Now we need to hook up the QueryFrameRequest message so that it can be responded to from incoming requests, and we need to grab the data from the camera. To do this, first, add the following member variable definition to the top of the NetCamService class along with a declaration of a boolean variable to maintain the status of the connection to the camera, and then modify the Start method of the NetCam service to look like the snippet below:
FramePort _framePort = new FramePort();
private bool _connected = false;

protected override void Start()
{
    base.Start();

    Activate(
        Arbiter.Interleave(
            new TeardownReceiverGroup(),
            new ExclusiveReceiverGroup(
                Arbiter.Receive(true, _framePort, FrameHandler)
            ),
            new ConcurrentReceiverGroup(
                Arbiter.Receive<QueryFrame>(true, _mainPort, QueryFrameHandler)
            )
        )
    );

    DirectoryInsert();

    Thread t = new Thread(new ThreadStart(NetCamProcessor));
    t.IsBackground = true;
    t.Start();

    LogInfo(LogGroups.Console, "Service uri: ");
}
You will notice a new Arbiter method here. Interleave allows you to set up several handlers at once, defining the concurrency of each. Handlers that require exclusive access are registered in the ExclusiveReceiverGroup constructor, and those that can be called simultaneously are registered in the ConcurrentReceiverGroup constructor.
In the code above, we set up a handler on the internal _framePort port, calling the method FrameHandler. Additionally, a handler is created on the main service port, _mainPort, for the QueryFrame message, which will be handled by the QueryFrameHandler method.
Finally, a thread is created which will be responsible for connecting to the network camera and grabbing each JPEG image as it is sent. I am not going to list every single line here, but I will highlight the important points; the full code, of course, can be found in the source package. First, let's implement the FrameHandler method. As described above, a message is posted from the processing thread when a new frame is ready to be requested. The FrameHandler method responds to that notification: it expects a Frame object and stores it away inside the NetCamState object we created earlier.
private void FrameHandler(Frame frame)
{
    _state.Image = frame.Image;
    _state.TimeStamp = frame.TimeStamp;
}
Next, we need to implement the NetCamProcessor thread handler. The code here is a bit complex, and I have taken a number of shortcuts to make it as simple as possible. I am the first to admit that the parsing code is not very robust; however, it will properly parse the output of the Airlink AIC-250W. The thread processor uses the HttpWebRequest and HttpWebResponse classes from the System.Net namespace, several I/O classes from System.IO, and the Encoding class from System.Text. As always, ensure these are referenced and imported into the class.
The processor will sit in an infinite loop which connects to the IP address of the camera, requests a specific page, and parses the MJPEG data as described earlier. When a full frame has been parsed, it posts that Frame object to our internal _framePort to store away until a QueryFrameRequest comes in.
using System.Net;
using System.IO;
using System.Text;
...
private void NetCamProcessor()
{
    byte[] buff = new byte[1024*1024];
    byte[] lenBuff = new byte[10];
    HttpWebRequest req = null;
    HttpWebResponse resp = null;
    Stream s = null;
    int len = 0;

    while(true)
    {
        try
        {
            // NOTE: change the IP address for the camera on your network
            req = (HttpWebRequest)HttpWebRequest.Create("");
            resp = (HttpWebResponse)req.GetResponse();
            _connected = true;

            s = resp.GetResponseStream();
            BinaryReader br = new BinaryReader(s);

            while(true)
            {
                Array.Clear(buff, 0, buff.Length);
                Array.Clear(lenBuff, 0, lenBuff.Length);

                // --video boundary--
                // Content-length:
                buff = br.ReadBytes(34);

                // content length
                byte by;
                int i = 0;
                while((by = br.ReadByte()) != 0x0d)
                    lenBuff[i++] = by;
                len = int.Parse(Encoding.ASCII.GetString(lenBuff));

                // Date: 2000-01-01 01:23:45 IO_00000000_PT_000_114
                // Content-type: image/jpeg
                buff = br.ReadBytes(79);

                // image data
                buff = br.ReadBytes(len);

                // create a new Frame object and post it to our internal port
                Frame frame = new Frame();
                frame.Image = (Bitmap)Bitmap.FromStream(new MemoryStream(buff, 0, len));
                frame.TimeStamp = DateTime.Now;
                _framePort.Post(frame);

                // new lines before the next --video boundary-- segment
                buff = br.ReadBytes(6);
            }
        }
        catch(Exception ex)
        {
            System.Diagnostics.Debug.WriteLine(ex.Message);
            _connected = false;
        }
        finally
        {
            if(req != null)
                req.Abort();
            if(s != null)
                s.Close();
            if(resp != null)
                resp.Close();
        }
    }
}
The NetCamProcessor method connects to the mjpeg.cgi file on the network camera’s IP address. Once connected, it loops forever reading data from the response stream, parsing out the Content-length information and reading precisely that much data into a byte buffer. With this in hand, it fills in an instance of the Frame object and posts that to the internal _framePort communications port. As we set it up earlier, the _framePort handles messages sent to it with the FrameHandler method, and FrameHandler was placed in the exclusive group for a reason. Images stream in as quickly as possible, often arriving before the previous frame has been fully processed. Because FrameHandler was registered in the ExclusiveReceiverGroup above, posting the message to the _framePort port blocks the thread until FrameHandler returns. This allows us to copy the current Image and TimeStamp into the internal state object before moving on to the next image. Therefore, when a partnered service requests the current frame, the frame in the state object can be returned without worry of it being overwritten mid-process.
The final thing to set up in our NetCamService is the method that responds to the QueryFrame message. Recall that this handler was registered in the Start method above. This code uses the Fault object from the W3C.Soap namespace, so import accordingly.
using W3C.Soap;
...
private void QueryFrameHandler(QueryFrame query)
{
    QueryFrameResponse response = new QueryFrameResponse();

    if(!_connected)
    {
        response.Size = Size.Empty;
        response.FrameBuffer = null;
        response.TimeStamp = DateTime.MinValue;
        query.ResponsePort.Post(new Fault());
    }
    else
    {
        if(_state.Image != null)
        {
            ImageFormat format = ImageFormat.Bmp;
            using (MemoryStream stream = new MemoryStream())
            {
                Size size = _state.Image.Size;
                _state.Image.Save(stream, format);

                response.TimeStamp = _state.TimeStamp;
                response.FrameBuffer = new byte[(int)stream.Length];
                response.Size = size;

                stream.Position = 0;
                stream.Read(response.FrameBuffer, 0, response.FrameBuffer.Length);
            }
            query.ResponsePort.Post(response);
        }
    }
}
This method takes the current image stored in the internal state object and converts it into a byte stream to be returned in our previously defined QueryFrameResponse object. The response object is then sent back to the caller via the ResponsePort object contained within the passed in QueryFrame object.
Now that we have the camera service set up to grab images and answer requests for the currently available frame, we can partner with it in our RCCar service and display the live stream from the camera.
As with the other services, you will need to add a reference to the NetCam service. You can do this by adding a reference to the NetCam.Y200X.MYY proxy assembly, where X is the current year and YY is the current month. Do not add a reference to the project-level NetCam assembly.
Now, switch back to the RCCar.cs file. At the top with the other partner definitions, add the following two lines to partner with the NetCam service:
[Partner("NetCam", Contract = netcam.Contract.Identifier,
    CreationPolicy = PartnerCreationPolicy.UseExistingOrCreate)]
private netcam.NetCamOperations _netcamPort = new netcam.NetCamOperations();
Next, we need to add a method which will continually send the QueryFrame message to the NetCam service and handle the resulting image data. This can be accomplished by adding the following call to the Start method:
SpawnIterator(NetCamHandler);
The SpawnIterator method will allow you to call a method of type IEnumerator<ITask> asynchronously. Let’s take a look at the NetCamHandler method that is spawned. This will use code from the System.Threading library, so import as usual.
using System.Threading;
...
private IEnumerator<ITask> NetCamHandler()
{
    bool pollCamera = true;

    while(true)
    {
        // wait a few ticks for the cam to get connected
        Thread.Sleep(4000);
        pollCamera = true;

        while(pollCamera)
        {
            // send the QueryFrame message and send the result to our form to be displayed
            yield return Arbiter.Choice(
                _netcamPort.QueryFrame(new netcam.QueryFrameRequest()),
                delegate(netcam.QueryFrameResponse response)
                {
                    WinFormsServicePort.FormInvoke(
                        delegate()
                        {
                            _remoteForm.ImageUpdate(response);
                        }
                    );
                },
                delegate(Fault fault)
                {
                    LogError("Error querying frame from camera");
                    pollCamera = false;
                }
            );
        }
    }
}
This will wait four seconds for the camera to become available and then continually poll the NetCam service for the latest available frame. The Choice method of the Arbiter object allows you to send a message to a specified port and handle both the success and failure results. In this case, we are creating a QueryFrameRequest object, posting it to the _netcamPort port, and handling the response with two delegates, the first for a successful return and the second for a fault from the NetCam service.
When a valid QueryFrameResponse object is returned, the delegate we’ve defined calls the ImageUpdate method on our user interface via WinFormsServicePort.FormInvoke to ensure that we do not attempt to update the UI from a thread other than the one it was created on. ImageUpdate receives the QueryFrameResponse object, which contains the image data to be displayed.
The yield keyword seen above was introduced with C# 2.0 and .NET 2.0. Also note that the NetCamHandler method is defined as type IEnumerator<ITask>. When yield return is used within such an iterator, it tells the CCR that more data will be available. When yield break is used, it signals the end of the enumeration. In this case, since we will constantly be polling for images, yield return signals to the CCR that more data is forthcoming.
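Outside of the CCR, these same keywords drive ordinary C# 2.0 iterators. Here is a minimal standalone sketch (illustrative only; the names are not part of the article's services) showing both forms:

```csharp
using System;
using System.Collections.Generic;

class IteratorDemo
{
    // yield return hands back one value and pauses until the caller asks again;
    // yield break ends the sequence.
    public static IEnumerable<int> CountTo(int limit)
    {
        for (int i = 1; ; i++)
        {
            if (i > limit)
                yield break;   // no more data: end the enumeration
            yield return i;    // more data is forthcoming
        }
    }

    static void Main()
    {
        foreach (int i in CountTo(3))
            Console.WriteLine(i);  // prints 1, then 2, then 3
    }
}
```

The CCR applies the same pattern to IEnumerator<ITask>: each yield return hands the runtime a task to schedule, and the method resumes when that task completes.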
On the RemoteControlForm, a PictureBox control must be added to display the images from the camera. Do this in the designer and set the box dimensions to 320x240, the default size returned by the network camera. Name the control pbImage.
Finally, the following code is added to the RemoteControlForm.cs class. Note that the following method, ImageUpdate, uses types from the System.IO and our NetCam namespaces, so import them at the top.
using System.IO;
using netcam = Robotics.NetCam.Proxy;
...
public void ImageUpdate(netcam.QueryFrameResponse response)
{
    pbImage.Image = Bitmap.FromStream(new MemoryStream(response.FrameBuffer));
}
This simply takes the response from our above QueryFrame request, turns it back into a Bitmap object, and then assigns it to the PictureBox.
Pressing F5 to build and run the solution should now let you drive the car with the on-screen buttons, the keyboard, or the gamepad while watching the live video feed from the camera.
So where do we go from here? Well, there are a variety of features that could still be added to the project.
As you can see, Microsoft Robotics Studio is a very functional, sophisticated, and complex library which tackles many of the issues involved in creating software for robotics. With the issues of concurrency and decentralization handled by the CCR and DSS runtimes, you are free to focus your efforts on building robust software for controlling the machine at hand.
Here are a few links regarding MSRS, the CCR, and various other useful tidbits to help you along.
If you would like to receive an email when updates are made to this post, please register here
Brian
great work !!! I'll make a post in mi spanish blog related to your work; and I'll make a try with the camera with mi Lego NXT (I don't know if the Lego will support the weight of the camera :P)
Great work again !!!
Bye from Spain
El Bruno
Today in interesting things: Robotics Studio, Windows Hypervisor, Dynamics, RDP from Visio, Migration
Nice!
Never ever did I dream of a RC Car being the ultimate Spy device!
Would this work of and RC device?
As I have a chopper that I would love to have a camera on =)
Oh, and Is there any way to boost the signal of both the RC car and the camera? When it goes around the house the image goes choppy and the car starts going in odd directions...
Fantastic project
Weasel =)
I want to add the Wireless Camera but is the Camera live or is it a tape
I have no problem compiling the codes. However, when I run the application, this error occurs "Exception:Method not found: 'Int64 Phidgets.Phidget.get_SerialNumber()'". I have tested using the Phidget program and have no problem activating the digital output.
I appreciate if anyone can give any comments.
Thanks
I found this project to be a great example of how 30 years of progress have done little to advance the usefulness of computers for controling hardware.
It would be interesting to compare this project with a similar one from the 70's on a TRS-80.
The fact that an expert, armed with all the latest tools failed to get Visual Basic to work, says it best.
A warning at the begining of the article would be a great time saver for those of us foolish enough to keep chasing these claims of fictional computer advancement.
If only my computer had a parallel port.
In this article, Brian Peek will demonstrate how to use his Managed Wiimote Library and Microsoft Robotics
This is so cool. Man, I really want to try this.
cool project...but i m not able to run the car...i have taken 2 wires each from each metal contact...one for NO and the other for XC...where do u think i have gone wrong?pls do reply immediately...thanx
this is weak. not even "robotics". and a *digital* remote instead of analog with finer grained control?!?
Another nice project:
Amazing. I went from having very very little electronics experience and moderate PHP skills (meaning, I've never programmed in Visual C#) to moving a cheap $7 rc car from walmart around with my laptop...all under an hour (w/ installation). Most of the time was spent trying to find out why "SetOutputRequestType" wasn't a valid command. I ended up getting around it by changing "phidgetcommon." to "Phidgets.Robotics.Services.PhidgetInterfaceKitBoards.Proxy
." in the "public void SetOutput(Direction dir, bool enabled)" method. Seemed to do the trick, because I'm chasing our cats and dogs around as I'm typing this. Thanks Brian!
if you make it with one remote control will it control just the car it was made from or will 1 comp controller control multiple cars
So, if you are using an 8/8/8 board versus a 0/0/4 board you have digital outputs versus physical relays. Does this make wiring the RC transmitter different for the 8/8/8 compared to the 0/0/4? In one, the power is coming from the USB port, and in the other, the power is coming from the RC transmitter itself.
Thanks!
@David B: yes, 8/8/8 is digital, 0/0/4 are analog.
As for wiring, click the contact us link at the top, I'll put you in contact with Brian Peek.
A nice project.i want to do a similar project with some modification like go and stop buttons.
Great project but Please help! I have installed all needed software but had to use Microsoft Robotics 1.5. Everything works so far but the keyup event. I hit the keyboard direction keys and the relay click but when I take my finger off it does not set the relay to false. What am I doing wrong? Thanks
This is a great work done by you Brian.
But, what I would like to comment here is, we really do not need a complex hardware interface. A simpler and cheaper solution would be to use a stacked buffer chip.
if you'd like to know more on how to do it, I can share the circuit diagram and concept with you. I had done this as my final semester project during my bachelors degree.
mail me at mrunmoyAThotmailDOTcom or mrunmoyATyahooDOTcom (please replace DOT with . and AT with @, I've put them just to avoid spams from webbots)
otherwise, great work!
Best Regards,
Mrunmoy
is there a way to do this with perportioal steering and throttle control
Hi,
Thanks for this details. this is realy a good thing
How do you tell if your remote control is digital or not?
This is great! I am working with 0/16/16 phidget board. I have everything installed but I just can't seem to get the wiring right. Is there an article or other project that would go into more detail on wiring, relays, circuits etc... Also, do I need the battery on the remote control or is the USB sufficiant for the controller's power?
thanks again.
So How the CAmera is Linked With The PC IS it wifi CAmera Or Blutooth CAmera.,.??
Plz Tel Me.,.,
@FAVI It is a wifi camera. | http://blogs.msdn.com/coding4fun/archive/2007/01/22/1507304.aspx | crawl-002 | refinedweb | 7,170 | 56.66 |
Talk:Proposed features/Key:gluten free
diet=*
This tag belongs to diet=* and as we do only have little usage so far and the tag was never discussed, this page should be deleted. --Skyper 15:22, 1 July 2013 (UTC)
- I agree. At the very least it should be moved and turned into a proposal draft - but ultimately it makes no sense for the tags to exist in parallel and diet:* is approved. --Tordanik 17:17, 1 July 2013 (UTC)
- I had move it but SK53 did revert it. --Skyper 21:55, 2 July 2013 (UTC)
- The use of this key likely predates the wiki proposal process, so moving it to the proposal section would be misleading IMO. Perhaps you need a section for keys that are just used, like say amenity=pub and highway=motorway? --EdLoach (talk) 20:21, 4 July 2013 (UTC)
- This does not fall in the same category as amenity=pub or highway=motorway. Those are established tags with no serious competition and should simply be documented normally. But this gluten_free key is just an idea by a few people that has not (yet?) taken off. I think a proposal page does not necessarily imply an intent to follow the proposal process. --Tordanik 21:01, 4 July 2013 (UTC)
- As I have stated elsewhere I do not recognise the proposal/approval process. This is not official in way shape or form, and has shown itself to be widely open to abuse, and of little relevance for mapping (other than to confuse or deter newcomers).
- I use the wiki to document a tag as used. The tagging of establishments which offer suitable food for coeliacs is important for such people, but not straightforward because there is in effect a gradation from 'there is at least one dish on the menu which will not make you ill' to 'there are many gluten free dishes, aware serving staff, and the kitchen can modify dishes to order'. It is therefore highly appropriate that these issues be documented separately. I raised the issue as to why I thought a diet:gluten_free tag was inappropriate long before such a tag was even proposed. I stand by what I said then.
- The way I see it, wide usage can very well move a tag from "Proposed" to the main Key name space without ever going through a vote. But I really would like they "Key" name space reserved for those tags that are *either* widely used *or* at least agreed to be useful after some discussion; the "gluten_free" tag is neither. Being a "proposed" tag does not mean it cannot be used (e.g. the "proposed" addr:place is meanwhile used more than 300k times). --Frederik Ramm (talk) 22:47, 4 July 2013 (UTC)
- The problem then though is deciding on a tag by tag basis what defines widely used. Is it based on geographic usage, number of mappers, how often the thing it represents exists in reality (even if there is some way of determining that)? If I invented a tag quarry=diamonds for an open cast diamond mine, then 1 or 2 uses would probably mean it is widely used (if as I'm guessing there aren't many open cast diamond mines in the world). For something like shop=secondhand (for shops that sell secondhand stuff) then that figure would be much higher. And for other items higher still. --EdLoach (talk) 08:49, 5 July 2013 (UTC)
- Judging on figures only does not work (imports and the number of different users adding that key need to be taken into account). For tags which will be low on numbers world-wide this means that their status will always be proposed only. For subtags this might be different. Overall, you won`t have that many problems to get a key accepted after a positive result of a discussion on tagging@. --Skyper 14:12, 5 July 2013 (UTC)
- How an earth can people tell if a tag is in use if only certain tags are allowed in the Key namespace. To me this is entirely counter-intuitive. The fact that no-one has bothered to add a page for widely used tags in the Key namespace is not a reason to use it. I think I will not bother with tagging any dietary information on OpenStreetMap in future and I will actively discourage people with Coeliac's disease from using OSM for information which might help them enjoy eating out without seriously affecting their health because 'it is a pet feature'. I have better things to do than bother with the wiki, now I won't bother at all, there is no point trying to persuade people that their hugely complicated colonic tagging scheme is no better than something simpler. SK53 (talk) 13:00, 5 July 2013 (UTC)
- Sorry, I do not understand your reactions. Yeah, Frederik's comment was not nice, but your reactions in advance were not either. Do not forget that I do care about this issue and just wanted to clean up to get a working, common tagging system. But instead of getting this right and maybe writing a patch for JOSM to add it to presets, I get obstacles put in my way and once more I am disappointed about an user and his reaction plus his social lacks in communication. --Skyper 14:12, 5 July 2013 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/Key:gluten_free | CC-MAIN-2018-51 | refinedweb | 900 | 66.37 |
ConstructorsConstructors
FrontEnd ( )
MembersMembers
The contexts for compilation units that are parsed but not yet entered
override def allowsImplicitSearch : Boolean
If set, implicit search is enabled
protected def discardAfterTyper ( unit: CompilationUnit ) ( implicit ctx: Context ) : Boolean
override def isTyper : Boolean
Is this phase the standard typerphase? True for FrontEnd, but not for other first phases (such as FromTasty). The predicate is tested in some places that perform checks and corrections. It's different from isAfterTyper (and cheaper to test).
override def phaseName : String
A name given to the
Phase that can be used to debug the compiler. For
instance, it is possible to print trees after a given phase using:
$ ./bin/dotc -Xprint:<phaseNameHere> sourceFile.scala
override def runOn ( units: List [ CompilationUnit ] ) ( implicit ctx: Context ) : List [ CompilationUnit ]
def stillToBeEntered ( name: String ) : Boolean
Does a source file ending with
<name>.scala belong to a compilation unit
that is parsed but not yet entered? | http://dotty.epfl.ch/api/dotty/tools/dotc/typer/FrontEnd.html | CC-MAIN-2019-13 | refinedweb | 150 | 55.13 |
Re: Idle curiosity re. using directive/declaration scoped to a givenclass - is this technique sensib
Discussion in 'C++' started by Ian Collins, Apr 18, 2010.
- Similar Threads
using-declaration vs. using-directiveInsert Pseudonym Here, May 3, 2004, in forum: C++
- Replies:
- 1
- Views:
- 1,846
- Rob Williscroft
- May 3, 2004
Using-declaration or using-directive inside unnamed-namespace?Niels Dekker - no reply address, Apr 27, 2010, in forum: C++
- Replies:
- 1
- Views:
- 754
- Niels Dekker - no reply address
- Apr 27, 2010
Idle Curiosity: no headJoy Beeson, Jan 21, 2011, in forum: HTML
- Replies:
- 5
- Views:
- 582
- dorayme
- Jan 22, 2011
Point of idle curiosityChris Angelico, Nov 18, 2012, in forum: Python
- Replies:
- 5
- Views:
- 251
- Chris Angelico
- Nov 19, 2012
Re: Point of idle curiosityTerry Reedy, Nov 18, 2012, in forum: Python
- Replies:
- 0
- Views:
- 209
- Terry Reedy
- Nov 18, 2012 | http://www.thecodingforums.com/threads/re-idle-curiosity-re-using-directive-declaration-scoped-to-a-givenclass-is-this-technique-sensib.720853/ | CC-MAIN-2016-18 | refinedweb | 141 | 52.94 |
Your official information source from the .NET Web Development and Tools group at Microsoft.
When]: I have updated the following code to fix some issues that were reported.
1: <li>
2: @Html.ActionLink("FacebookInfo", "FacebookInfo","Account")
3: </li>
ExternalLoginConfirmation
In Line 26, once the User is created we add a new line to add the FacebookAccessToken as a claim for the user.
1: public class FacebookViewModel
2: {
3: [Required]
4: [Display(Name = "Friend's name")]
5: public string Name { get; set; }
6:
7: public string ImageURL { get; set; }
8: }
1: @model IList<WebApplication96.Models.FacebookViewModel>
2: @if (Model.Count > 0)
3: {
4: <h3>List of friends</h3>
5: <div class="row">
6: @foreach (var friend in Model)
7: {
8: <div class="col-md-3">
9: <a href="#" class="thumbnail">
10: <img src=@friend.ImageURL alt=@friend.Name />
11: </a>
12: </div>
13: }
14: </div>
15: }. | http://blogs.msdn.com/b/webdev/archive/2013/10/16/get-more-information-from-social-providers-used-in-the-vs-2013-project-templates.aspx | CC-MAIN-2014-23 | refinedweb | 145 | 57.37 |
understanding how exactly to get this to work. I have a script called NoSleep.js, which I've imported into my construct project files and is supposed to prevent display sleep on any Android or iOS web browser. But I have no idea what exactly I'm supposed to enter into the Execute Javascript action field. The name of the script? The script itself? Or what? There isn't really much documentation on this, and I can't seem to find any .capx examples of it anywhere.
The action does exactly what it says, it executes some snippet of JavaScript. It's mostly useful for simple JavaScript stuff.
It's not quite sufficient to easily use some JavaScript library. First, importing the library into the files folder isn't enough to make it usable. for that it needs to be loaded in one of three ways:
1. Edit the exported HTML file and add the library with the other js files.
2. Load it after the fact with the jquery.getScript function.
3. Make a plugin and put the library in it's dependencies section of the edittime.js.
One is a bit akward to use for testing, but it is the normal way to include JavaScript files in HTML.
Two is slightly tricky since loading a library is asynchronous so the library can take time to load an may not be usable right away. The syntax is $.getScript(filename, callback) and callback is a function to call when the library is done loading. If you want examples of such a thing search my posts for JavaScript.
Three is the reccomended way by making a plugin. Doing it with a plugin may make some things more straightforward than the other two methods.
So then after you get the library loaded how you use the library depends on the library.
I think you can load a project js file using the script tag.
Getting that into the editor is not easy.
You can use a variable to get past the editors formatting limits.
Then again, why bother with that when you can put the script into the variable.
If it's not to big that is.
Develop games in your browser. Powerful, performant & highly capable.
In the case of that library here's the text to run that will load the library and run the example code from the libraries webpage.
"$.getScript('NoSleep.js', function(){
document.noSleep = new NoSleep();
function enableNoSleep() {
document.noSleep.enable();
document.removeEventListener('touchstart', enableNoSleep, false);
}
document.addEventListener('touchstart', enableNoSleep, false);
});"[/code:3ch80xxw]
I've trouble to exec a java method by the following.
I use the Browser.executeJavaScript from Construct2... As Function name i said: startGame();
In Android Studio I created a Class like:
public class GameEventsPlugin extends CordovaPlugin {
private Context context;
override
public void initialize(CordovaInterface cordova, CordovaWebView webView) {
this.context = cordova.getActivity().getApplicationContext();
super.initialize(cordova, webView);
}
public boolean execute(String action, JSONArray args, CallbackContext callbackContext) throws JSONException {
if (action.equals("startGame")) {
Toast.makeText(this.context, "App gestartet", Toast.LENGTH_LONG).show();
return true;
In my activfity class I put this before loadUrl(launchUrl):
pluginEntries.add(new PluginEntry("QM-Plugins", new GameEventsPlugin()));
The initialize method is called, but never the execute... What is wrong? | https://www.construct.net/en/forum/construct-2/how-do-i-18/browsers-quotexecute-122357 | CC-MAIN-2021-04 | refinedweb | 538 | 60.11 |
Hello there,
It seems that nobody responded to my previous question, it also seems that it was a bit difficult to do then...
But I won't give up yet!
So, I wanted to ask how to make my main character (Titus) aim at a 'aim point' that I control separately.
So basically I want that 'aim point' to be controlled with the IJKL keys, but I want to move my Titus with WASD keys, and when I shoot a projectile, it is shot towards my 'aim point'.
here are my scripts in the following order : Mouse control and shoot and then the movement script of 'Titus'.
using UnityEngine;
using System.Collections;
public class BulletPrototype1 : MonoBehaviour
{
public float maxSpeed = 25f;
public GameObject Bullet;
private Transform _myTransform;
private Vector2 _lookDirection;
private void Start()
{
if (!Bullet)
{
Debug.LogError("Bullet is not assigned to the script!");
}
_myTransform = transform;
}
private void Update()
{
/*
mousePos - Position of mouse.
screenPos2D - The position of Player on the screen.
_lookDirection - Just the look direction... ;)
*/
// Calculate 2d direction
// The mouse pos
var mousePos = new Vector2(Input.mousePosition.x, Input.mousePosition.y);
// Player Camera must have MainCamera tag,
// if you can't do this - make a reference by variable (public Camera Camera), and replace 'Camera.main'.
var screenPos = Camera.main.WorldToScreenPoint(_myTransform.position);
var screenPos2D = new Vector2(screenPos.x, screenPos.y);
// Calculate direction TARGET - POSITION
_lookDirection = mousePos - screenPos2D;
// Normalize the look dir.
_lookDirection.Normalize();
}
private void FixedUpdate()
{
if (Input.GetButtonDown("Fire1"))
{
// Spawn the bullet
var bullet = Instantiate(Bullet, _myTransform.position, _myTransform.rotation) as GameObject;
if (bullet)
{
// Ignore collision
Physics2D.IgnoreCollision(bullet.GetComponent<Collider2D>(), GetComponent<Collider2D>());
// Get the Rigid.2D reference
var rigid = bullet.GetComponent<Rigidbody2D>();
// Add forect to the rigidbody (As impulse).
rigid.AddForce(_lookDirection * maxSpeed, ForceMode2D.Impulse);
// Destroy bullet after 5 sec.
Destroy(bullet, 5.0f);
}
else
Debug.LogError("Bullet not spawned!");
}
}
}
using UnityEngine;
using System.Collections;
public class PlayerMovement2 : MonoBehaviour {
Rigidbody2D PlayerBody;
Animator Animi;
void Start () {
PlayerBody = GetComponent<Rigidbody2D>();
Animi = GetComponent<Animator>();
}
void Update () {
Vector2 movement_vector = new Vector2(Input.GetAxisRaw("Horizontal"), Input.GetAxisRaw("Vertical"));
if (movement_vector != Vector2.zero)
{
Animi.SetBool("Walking", true);
Animi.SetFloat("Input_x", movement_vector.x);
Animi.SetFloat("Input_y", movement_vector.y);
}
else
{
Animi.SetBool("Walking", false);
}
PlayerBody.MovePosition(PlayerBody.position + movement_vector * Time.deltaTime);
}
}
Well, those are the two scripts. Can someone give me some help with this? I do not want a 'script that is already made for me' without information about how it works. I want an explanation about how to do it, instead of having it all done for me. Even though I am no great programmer I still want to try it myself! honestly I do not understand how I can make an object that my 'Titus' aims to. the movement of the object can be done simply, and I can change it in control config in unity game-play settings. The hard part is connecting those two, and make them work correctly.
If you want more information about what kind of game I am making, It is a 2-D top down platform game (not turn based if it is important to know), so there is no gravity, but there are physics like bouncing, walls, doors and mechanics. Thank you all in advance and have a nice day, Daniel Nowak Janssen
Answer by Statement
·
Nov 02, 2015 at 09:35 PM
I want that 'aim point' to be controlled with the IJKL keys, but I want to move my Titus with WASD keys, and when I shoot a projectile, it is shot towards my 'aim point'.
I want that 'aim point' to be controlled with the IJKL keys, but I want to move my Titus with WASD keys, and when I shoot a projectile, it is shot towards my 'aim point'.
Ok..
public Transform aimPoint;
Mouse control and shoot
Mouse control and shoot
Wait, what? I thought you wanted to aim with IJKL? What else does mouse do than shoot? How many hands do you have, human?
Anyway, movement if aim is simple with IJKL, just set axis up in InputManager.
Vector3 aimAxis = new Vector3(Input.GetAxis("AimHorizontal"), Input.GetAxis("AimVertical"));
aimPoint.Translate(aimAxis * aimSpeed * Time.deltaTime);
If you want to get the direction and rotation to aimPoint, I have two helper classes.
namespace Answers
{
using UnityEngine;
public static class Directions
{
public static Vector2 FromTo2D(Vector2 from, Vector2 to)
{
return (from - to).normalized;
}
public static Vector2 FromTo2D(Transform from, Transform to)
{
return FromTo2D(to ? (Vector2)to.position : Vector2.zero,
from ? (Vector2)from.position : Vector2.zero);
}
}
public static class Rotations
{
public static Quaternion LookAt2D(Vector2 from, Vector2 to)
{
Vector2 diff = from - to;
float angle = Mathf.Atan2(diff.y, diff.x) * Mathf.Rad2Deg;
return Quaternion.Euler(0, 0, angle);
}
public static Quaternion LookAt2D(Transform from, Transform to)
{
return LookAt2D(to ? (Vector2)to.position : Vector2.zero,
from ? (Vector2)from.position : Vector2.zero);
}
}
}
Then you can do something like...
bulletRotation = Answers.Rotations.LookAt2D(transform, aimPoint);
bulletDirection = Answers.Directions.FromTo2D(transform, aimPoint);
FromTo2D return Vector2 so if you need Vector3, you might need to cast it.
bulletDirection = (Vector3)Answers.Directions.FromTo2D(transform, aimPoint);
First of all thank you for answering my question! I see, it was simpler than I imagined, I was thinking a bit too difficult again...
Also, Can you explain to me what Mathf.Atan2 does? it's in sentence 24 if you need to look it up again. does it something with float? or does it make you look at a certain angle?
Also, I am a human, at least I think I am ;)
Thank you again for your help and have a nice day,
Daniel.
Can you explain to me what Mathf.Atan2 does?
Can you explain to me what Mathf.Atan2 does?
It's a basic trigonometry function. For a 2d vector, it gives you the angle in radians of that vector.
If the vector points right, you get 0.
If the vector points up, you get PI/2.
If the vector points left, you get PI or -PI.
If the vector points down, you get -PI / 2.
And everything in between, I mean, yeah it's the angle of the vector from a right vector.
I recently made a demo which displays radians for another answer you can look at. Although, I convert the result from Atan2 to have absolute radians (no negative radians).
First of all, thanks again for the explanation, it helped me a lot!
I see, I was trying to use the mathf code to get something, and i got somewhere, but now I just need to bind it to some keys (IJKL) this is the script I made now:
using UnityEngine;
using System.Collections;
public class AimScript : MonoBehaviour {
public Transform Playerobject;
public float distance;
public float angle = 0;
// Use this for initialization
void Start () {
}
// Update is called once per frame
/// <summary>
/// this is the script I made, although it works, I still need to bind them to some keys.
/// So do I need to use the default method, or does it work differently when I am using the mathf?
/// </summary>
void FixedUpdate () {
transform.position = new Vector3(Playerobject.position.x + Mathf.Cos(angle) * distance, Playerobject.position.y + Mathf.Sin(angle) * distance, Playerobject.position.z);
}
As you can see I have to add an object in it to connect it. then it will go around my object (in this instance it is still Titus). I can change the position in the inspector, but now I want to control it with my keys on the keyboard(without screwing it up). How can I bind them to the keys I want, and that it works correctly? I already have added an AimHorizontal and AimVertical in the input manager, and gave them the IJKL keys. now I want to make it move.
Sorry for not being all that smart.
And thanks again for your help,
Daniel
Answer by Daniel_Zabojca
·
Nov 10, 2015 at 10:37 AM
Yep, I got it. Thanks again for the help and the explanation.
I still have to test a few things with the mathf.atan2 before I get it completely, but now I at least know the basics, thanks again .
Player Not Moving With Platform 2D
1
Answer
How to create a 2D obstacle as a question
0
Answers
How do I keep my player colliders from thinking that the player is in the air.
0
Answers
When i flip, the sprite moves
1
Answer
The ban on going beyond the edge collider
0
Answers | https://answers.unity.com/questions/1092349/unity-2d-522-how-can-i-aim-with-keys-towards-an-ob.html | CC-MAIN-2019-43 | refinedweb | 1,396 | 58.48 |
Today, we have released Visual Studio 2013 Update 2 CTP2, a go-live Team Foundation Server 2013 RC and TypeScript 1.0 RC. This update release preview includes several significant feature additions as well as fixes. You can see full details in the release notes.
But you don't have to take my word for it. Check out these posts:
On the Visual C++ side, in addition to stuff done in CTP1, this update lets a developer specify his program be compiled to target latest-generation processors that support the AVX2 instruction set.
Note that this is an early version for you to check out; you will receive a reminder through the Notification Hub in Visual Studio when the final version is ready. As always, we want to hear your suggestions or feature ideas/improvements on UserVoice or vote on other user's suggestions, Visual Studio Connect site for bugs, our C++ Facebook page for general comments or drop me a line (ebattali@microsoft.com).
Hooray! Can you say more about the AVX2 support and performance benefits?
Is it going to fix any of the existing bugs?
Loading files tabs. Maybe too complex a solution. If I close the IDE I have to manually load each source file I had in a tab by hand, going by a snipped view of the open tabs across a couple of monitors. Solution files are 1600+, across 37 active projects. Same problem in 2012. Even on another OS boot (same solution, so it's the .sln and whatever is loaded/or not loaded). I think once it gave a COM error about something but not enough detail to be meaningful. had this problem since I think it was updated 2 on 2012. Before that it almost always loaded my open files in tabs. Now, practically never. Sure, I understand why MS shares are about where they were 10 years ago. I really do. Not a dig, but an observation. You need to do better. A lot better.
I'm with JJ, I'd like some detailed information about what improvements to code generation occur when setting AVX2 mode.
@JJ, @Richard. Our AVX2 support in this CTP, lets you generate faster code, if you are lucky enough to be running one of the new tablets or PCs that support the AVX2 instruction set - introduced with Intel's "Haswell" processor.
So, we generate (automatically, within the compiler) scalar & vector FMA instructions ("Fused Multiply-Add") and make use of the wider vector lanes - 256 bits! - supported by AVX2.
There's a lot more to describe, obviously. I'll write up a blog and post it here in the next few days.
@NotStillRedux, if the opened tabs don't get persisted between VS sessions it might be because VS fails to write this information into the .suo file (sitting side-by-side with the sln file). If you can verify that this file is not accidentally read-only (e.g. unintentionally part of source control), then you're likely running into a VS bug.
The best course of action then is to open a Connect bug (connect.microsoft.com/VisualStudio) and our team will investigate this issue. You can post back the link to the Connect bug as a comment in this thread in case someone else here running into the same issue wants to track its progress.
Any words on the compiler magic required for implementing few remaining C99 features?
Is it happening in Update 2?
If no, will it every happen: The VS version with FULL C99 and C11 support?
Respect C
Thank you.
Jim
Looking forward to it. Did you add any additional #pragma controls?
Would it be possible to get an update to the vectorizer cookbook to go along with it? It does not look like it has been updated since Visual Studio 2012.
blogs.msdn.com/.../auto-vectorizer-in-visual-studio-11-cookbook.aspx
Hello.
VS 2013 has great autoformatting system for C++ code which lacks some small (and probably easily implementable), but important features like this visualstudio.uservoice.com/.../3894937-option-to-stop-indenting-namespaces-in-c-code
Bugs in autoformatting are being fixed, e.g. bad formatting of nested statements was fixed in Update 1 and it was a good sign
But can we hope that small improvements like this can get into VS 2013 updates and not into VS 2014-2015-...?
Hi Vladimir,
I've confirmed with our dev that control for namespace indentation has been checked in and will be available in Update 2 RTM (vs. CTP). Look forward to it in a few weeks!
Can someone tell me if the new update will fix this problem or when and where it will be fixed
connect.microsoft.com/.../uicc-exe-of-windows-kits-8-1-can-not-run-on-windows-7-sp1-x86
Thanks
Does this update fix the (very annoying) issue where the C++ project property pages get corrupted as you edit them?
This is the connect item I think:
connect.microsoft.com/.../800259
I installed the update 2 mentioned in this post. I can confirm it does NOT fix the problem of uicc.exe from the windows 8.1 sdk kit not running at all on Windows 7.
There is no meaningful reply to that connect bug (surprise surprise).
Can someone please address this situation and reply so there is at least a meaningful note in the connect bug, otherwise you've wasted someone's time to bother to write it up in the first place and my time to mention it here. It might also have prevented me bothering to apply the update mentioned in this post on the off chance it might fix it, because I've no idea when it will be fixed so what else are people supposed to do.
It's not like the bug is hard to reproduce, the program simply doesn't run.
Can someone (anyone) please take the time to tell me when the Connect bug below will be fixed? The Update related to this blog didn't fix it and I've asked previously on the VC blog about it and heard nothing. Nobody has updated the status of the bug meaningfully on Connect either. I find both these facts pretty bad and I imagine the person(s) who took the time to write up the Connect issues in the first place feel even worse about it.
You know if it's likely been fixed or not simply by running the latest build of uicc.exe with no parameters at a command prompt on Windows 7, and it will crash right out with an error message if it's not been fixed for Windows 7. | http://blogs.msdn.com/b/vcblog/archive/2014/02/25/visual-studio-2013-update-2-ctp2.aspx | CC-MAIN-2015-35 | refinedweb | 1,118 | 72.97 |
Details
- Type:
Bug
- Status:
Reopened
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: 1.7.4
- Fix Version/s: None
- Component/s: Configuration
- Labels:None
- Environment:MacOS 10.5, CentOS 5.0
- Number of attachments :
Description
When I create a featureType from a PostGIS table with a space in the name (eg. Foo Bar), I end up with directories in data/featureTypes that look like this:
postgis_Foo%252525252BBar
I assume that what is happening is this: The first time I save the featureType, it encodes the space as %2B. The next time I save the config (any part of it, I think), it reads that feature type filename, sees that it has a %, and encodes that as %25. The next time, it sees that it still contains a %, and encodes it again, ad infinitum. This repeats each time I make any changes to my config, until the filenames are longer than the legal filesystem limit. Then I can't save anything.
Activity
Hi Daniel,
I can't seem to get by the Foo+Bar step. Can you give me teh exactly sequence of saves/loads/restarts involved so I can try to reproduce. Thanks.
Justin,
- Start with a table in postgis called "Foo Bar".
- Add a featuretype for the table.
- Assuming the namespace for the featureType is "postgis," you should get a directory named postgis_Foo+Bar in your featureTypes dir.
However, I was unable just now to replicate the re-encoding of the plus signs to %2B. So the directories I was seeing with the extra encoding may have been from before I upgraded to 1.6.x.
Can't reproduce in 1.7.4. Will re-open if it shows up again. Sorry to waste your time.
Although I am not seeing runaway growth of file names in 1.7.5, I think there are still potential issues with the encoding.
For a layer named Foo Bar, I am getting both namespace_Foo+Bar and namespace_Foo%2BBar in my featureTypes directory.
I'm not totally sure about the sequence, but it looks like the plus sign version is saved first, and then the %-encoded version comes later when I save another change (anywhere in the config, I think). Strangely, it seems that it then saves it with the plus sign again at some later point.
My main concern is the duplication of the config for these layers. Which version of the file gets read in when the config is reloaded? It seems like there is a potential for the two versions to get out of sync and for the older one to be read in by mistake when the server is restarted.
Notice that %2B is the code for plus, not space. If the filename is going to be URL encoded, it should be %20. In any case, it just needs to be done consistently.
I still seem unable to reproduce this. It remains namespace_Foo+Bar through a variety of saves, loads, and restarts. I am running mac os x 10.5 as well. I wonder if it could be anything with your locale or file system? Does your file system any special options? Or just the defaults?
That's odd. My filesystem is standard. I'm using the .dmg package of Geoserver. Here's some info:
$ java -version java version "1.5.0_16" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284) Java HotSpot(TM) Client VM (build 1.5.0_16-133, mixed mode, sharing)
$=
I'm seeing similar (or the same) problems on CentOS linux.
Are there other diagnostics I could do?
I noticed that there's an intermediate step where the spaces are encoded as plus signs. That's what %2B stands for. So it goes like this:
Foo Bar
Foo+Bar
Foo%2BBar
Foo%252BBar
Foo%25252BBar
... | http://jira.codehaus.org/browse/GEOS-3132?focusedCommentId=181087&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2013-48 | refinedweb | 634 | 76.11 |
RL-ARM User's Guide (MDK v4)
#include <rtl.h>

int fvol (
  const char *drive,    /* drive letter */
  char *buf);           /* buffer where label will be written */
The function fvol reads the volume label of a drive.

The parameter drive is a string specifying the volume drive. The default drive is used if an empty string is provided.

The parameter buf specifies a buffer where the volume label is stored as a NULL-terminated string. The buffer size must be at least 12 bytes. If the volume has no label, an empty string is returned in buf.

As shown in the example, fvol returns 0 when the volume label was read successfully, and a non-zero value on failure.
#include <rtl.h>
#include <stdio.h>

void cmd_vol (void) {
  char label_buf[12];

  if (fvol ("M0:", label_buf) == 0) {
    if (label_buf[0]) {
      printf ("Volume in drive M0 is %s\n", label_buf);
    }
    else {
      printf ("Volume in drive M0 has no label.\n");
    }
  }
  else {
    printf ("Volume access error.\n");
  }
}
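The description notes two behaviors that the example above does not exercise: the buffer must be at least 12 bytes, and an empty drive string selects the default drive. The sketch below illustrates both; because the real fvol needs rtl.h and a mounted file system, it uses a stand-in fvol, and the stub, the "FLASH" label, the "F0:" default drive, and cmd_default_vol are illustrative assumptions, not part of the RL-ARM library:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for fvol() so this sketch compiles on its own.
   Pretends the default drive ("F0:") carries the label "FLASH". */
static int fvol (const char *drive, char *buf) {
  if (drive[0] == '\0' || strcmp (drive, "F0:") == 0) {
    strcpy (buf, "FLASH");   /* label fits in 11 characters + NUL */
    return 0;                /* success */
  }
  buf[0] = '\0';
  return 1;                  /* unknown drive */
}

void cmd_default_vol (void) {
  char label_buf[12];                /* at least 12 bytes, as required */

  if (fvol ("", label_buf) == 0) {   /* empty string = default drive */
    if (label_buf[0]) {
      printf ("Default drive label is %s\n", label_buf);
    }
    else {
      printf ("Default drive has no label.\n");
    }
  }
}
```

Passing "" here behaves exactly like naming the default drive explicitly, which is why the label comes back the same either way.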
The abstract interface to all kinds of lights.
More...
#include "light.h"
List of all members.
The abstract interface to all kinds of lights.
The actual light objects also inherit from PandaNode, and can therefore be added to the scene graph at some arbitrary point to define the coordinate system of effect.
Definition at line 42 of file light.h.
[protected, virtual]
Fills the indicated GeomNode up with Geoms suitable for rendering this light.
Reimplemented in Spotlight.
Definition at line 128 of file light.cxx.
Referenced by get_viz().
[protected]
This internal function is called by make_from_bam to read in all of the relevant data from the BamFile for the new Light.
Reimplemented in AmbientLight, DirectionalLight, LightLensNode, LightNode, PointLight, and Spotlight.
Definition at line 151 of file light.cxx.
References DatagramIterator::get_int32(), and BamReader::read_cdata().
[inline]
Returns the basic color of the light.
Definition at line 70 of file light.I.
Referenced by SpeedTreeNode::cull_callback(), DXGraphicsStateGuardian9::get_light_color(), and DXGraphicsStateGuardian8::get_light_color().
Returns the priority associated with this light.
See set_priority().
Definition at line 118 of file light.I.
Referenced by SpeedTreeNode::cull_callback().
[inline, static]
Returns a global sequence number that is incremented any time any Light in the world changes sort or priority.
This is used by LightAttrib to determine when it is necessary to re-sort its internal array of stages.
Definition at line 132 of file light.I.
[virtual]
Computes the vector from a particular vertex to this light.
The exact vector depends on the type of light (e.g. point lights return a different result than directional lights).
The input parameters are the vertex position in question, expressed in object space, and the matrix which converts from light space to object space. The result is expressed in object space.
The return value is true if the result is successful, or false if it cannot be computed (e.g. for an ambient light).
Reimplemented in DirectionalLight, PointLight, and Spotlight.
Definition at line 97 of file light.cxx.
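As a rough illustration of what "compute the vector from a vertex to this light" means for the point-light case, here is a generic sketch in plain C++ (illustrative math only, not the Panda3D API; the row-major 4x4 layout, struct names, and function names are all assumptions):

```cpp
#include <array>

struct Vec3 { double x, y, z; };

// Apply a row-major 4x4 transform to the point (x, y, z, 1).
inline Vec3 transform_point(const std::array<double, 16>& m, Vec3 p) {
    return { m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
             m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
             m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11] };
}

// Vector from a vertex (given in object space) to a point light whose
// position is given in light space.  'light_to_object' converts light
// space to object space, so the result is expressed in object space.
inline Vec3 vector_to_light(Vec3 vertex_obj, Vec3 light_pos_light,
                            const std::array<double, 16>& light_to_object) {
    Vec3 l = transform_point(light_to_object, light_pos_light);
    return { l.x - vertex_obj.x, l.y - vertex_obj.y, l.z - vertex_obj.z };
}
```

A directional light would ignore the vertex position entirely and return its transformed direction instead, while an ambient light has no meaningful answer, which is why the real method can return false.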
Returns a GeomNode that may be rendered to visualize the Light.
This is used during the cull traversal to render the Lights that have been made visible.
Definition at line 109 of file light.cxx.
References fill_viz_geom().
Returns true if this is an AmbientLight, false if it is some other kind of light.
Reimplemented in AmbientLight.
Definition at line 75 of file light.cxx.
[inline, protected]
Indicates that the internal visualization object will need to be updated.
Definition at line 143 of file light.I.
Referenced by set_color(), DirectionalLight::set_direction(), DirectionalLight::set_point(), PointLight::set_point(), Spotlight::xform(), and DirectionalLight::xform().
Sets the basic color of the light.
Definition at line 81 of file light.I.
References mark_viz_stale().
Definition at line 103 of file light.I.
Writes the contents of this object to the datagram for shipping out to a Bam file.
Definition at line 138 of file light.cxx.
References Datagram::add_int32(), and BamWriter::write_cdata(). | http://www.panda3d.org/reference/1.8.0/cxx/classLight.php | CC-MAIN-2013-20 | refinedweb | 481 | 53.07 |
A contact form using HTML::Template and CGI::Application.
If you are convinced that CGI::Application is the framework for you, you're in the right place. If you are still unsure, check out some additional rationale. Otherwise, let's dive in. Also, this tutorial is not meant as a replacement for CGI::Application's well-written POD, which should be required reading.
A Word About HTML::Template
While this is not a tutorial for HTML::Template or HTML::FillInForm, the novice Web developer will see an example of how they can all work together with CGI::Application to create more useful, dynamic, and user-friendly applications.
Basic CGI::Application Concepts
- All CGI::Application applications are invoked by an instance script, which, in the test case below, is named index.cgi. One of the advantages of using CGI::Application is that it encourages "smart" URIs. The instance script can be named anything, but this author prefers some_directory/index.cgi to take advantage of the smart URI, using the .cgi extension and reserving .pl and .pm for the actual modules.
- An instance script usually passes one run mode, which can be equated to the processing of one screen or Web page. For example, the first run mode might populate and display a form, and the second run mode might validate that form upon submission. If a run mode is not passed, a default run mode can be set at the top of the actual application (more on that in a minute).
- After the function is run in the application, the application must return something or redirect to something: usually an HTML page through CGI::Application's redirect method, or in our case, an HTML::Template template using an output method.
- $self, or the object, is passed throughout the application and finally returned at the end. It can be called anything, but $self is common practice in the OO world.
The Layout
The locations of your files will vary depending on your server, e.g., you could be on a shared host and using something like /usr/home/foobar and /foobar/public_html/. The salient point is that the CGI::Application applications (server-side) are placed "out-of-reach" of the public Web directory (client-side).
Notes:
- One programming camp would advocate putting all of one's modules in a "lib" directory as opposed to the "myapps" directory shown here. However, this author prefers to think of "lib" as a place for Perl, CPAN modules, and those modules that will not be edited and are part of the general operation of Perl itself: background stuff.
- Also, there are times when templates can and should be placed outside of the document root (in a directory in "myapps", for example). However, one of the reasons for using a template system is to separate application code from presentation code. Therefore it might be best to keep the designers and HTML folks on the "client side" of things and not mucking around on the "server side." That is why the sample application shown below places the templates in the doc root.
/opt/foobar/myapps/---+
                      |-- Foobar_Super.pm
                      |-- Common.pm
                      |-- /Acmecorp/---+
                                       |-- Contact.pm
                                       |-- /conf/---+
                                                    |-- acmecorp.conf

/var/www/acmecorp/--+
                    |-- home.html
                    |-- /contact/----+
                    |                |-- index.cgi
                    |                |-- /templates/---+
                    |                                  |-- contact.tmpl
                    |-- thankyou.html
Sample Application
CGI::Application, Plugin's, and HTML::Template
index.cgi (our instance script)
Notes:
- this instance script will pass a PARAM to the application
- in this example, we need to output the form that is now in a .tmpl file. Though some browsers will actually display the .tmpl file without HTML::Template, we'll use an instance of our application to simply display the form.
- the instance script might be invoked by a text link:
<a href="">Contact Us</a>
- note that paths are relative to the location of the .tmpl file and not the instance script or application.
#!/usr/local/bin/perl -T
use lib "/opt/foobar/myapps/";
use warnings;
use strict;
use Acmecorp::Contact;

my $app = Acmecorp::Contact->new( PARAM => 'client' );
$app->run();
contact.tmpl ('rm' is our run mode)
Foobar_Super (a super class for CGI::Application applications)
Notes:
- because you may be creating several CGI::Application applications for other sites on your server, developing a super class eliminates repetition
- CGI::Application plugins integrate the modules they are associated with, negating the need to load the native CPAN modules
- cgiapp_init is run before anything in the application. In the super class you can read configuration files, set paths, set sessions, etc.
- the PARAM 'client' that was passed by the instance script is used by the super class to 'personalize' the instance, e.g. './myapps/' . ucfirst $self->param('client') . '/conf/'
acmecorp.conf (a configuration file)
Note: read by CA_Super's cgiapp_init
#--- MySQL Server ---
[db]
host = DBI:mysql:foobar:localhost
user = acmecorp
pass = AKCgKYxc
Contact.pm (the actual application called by the instance script)
Notes:
- in the method 'display', the CGI::Application convention for specifying a template is used: $self->load_tmpl
- in the first firing of the instance script, no run mode will be passed, so the application will use the default function defined by $self->start_mode('d') in the setup method. This will simply output, or display, the template file containing our form.
- we also use the CGI::Application::Plugin::FillInForm convention for outputting the form with the fill-in-form values: $self->fill_form
package Acmecorp::Contact;
use base qw(Foobar_Super Common);
use strict;
use warnings;
use MIME::Lite;            # load any extra modules needed
use Date::Calc qw(Today);

#--- SETUP Run modes
sub setup {
    my $self = shift;
    $self->start_mode('d'); # if no run mode is passed, default to 'd'
Common.pm (a module with common methods)

package Common;
sub validate {
    my $self     = shift;
    my $to_check = shift;
    if ( $to_check !~ /^([\w ]+)$/ ) {
        return ( $to_check, " has invalid characters or is blank" );
    }
    else {
        return $1;
    }
}
1;
Summary
This tutorial has shown use of the basic tenets of using CGI::Application as an application framework:
- file layout and directory structure
- instance scripts
- run modes
- cgiapp_init and redirect
- integration of CGI::Application plugins
- use of super class
- intro to the HTML::Template templating system
As always, you are encouraged to read the POD for CGI::Application and then take a look at the growing number of Plugins to see if CGI::Application can further streamline your coding process.
Other Resources
CGI::Application Wiki
Mailing List
HTML::Template Tutorial
Red Antiqua Tutorial
Using CGI::Application | http://www.perlmonks.org/?displaytype=print;node_id=698693 | CC-MAIN-2018-09 | refinedweb | 1,052 | 52.29 |
warning: function may return address of local variable [-Wreturn-local-addr]
(1) By Adam S Levy (alaskanarcher) on 2020-05-25 00:29:18 [link] [source]
I am getting the following warning when compiling the last two official release amalgamations using CGo:
$ go build
# crawshaw.io/sqlite
In file included from ./static.go:19:
././c/sqlite3.c: In function 'sqlite3SelectNew':
././c/sqlite3.c:128048:10: warning: function may return address of local variable [-Wreturn-local-addr]
 128048 |   return pNew;
        |          ^~~~
././c/sqlite3.c:128008:10: note: declared here
 128008 |   Select standin;
        |          ^~~~~~~
The relevant amalgamation code is below.
Clearly the standin variable is protected from being exposed using the assert at the bottom. But perhaps this should be made into a global to avoid the warning.
/*
** Allocate a new Select structure and return a pointer to that
** structure.
*/
SQLITE_PRIVATE Select *sqlite3SelectNew(
  Parse *pParse,        /* Parsing context */
  ExprList *pEList,     /* which columns to include in the result */
  SrcList *pSrc,        /* the FROM clause -- which tables to scan */
  Expr *pWhere,         /* the WHERE clause */
  ExprList *pGroupBy,   /* the GROUP BY clause */
  Expr *pHaving,        /* the HAVING clause */
  ExprList *pOrderBy,   /* the ORDER BY clause */
  u32 selFlags,         /* Flag parameters, such as SF_Distinct */
  Expr *pLimit          /* LIMIT value.  NULL means not used */
){
  Select *pNew;
  Select standin;
  pNew = sqlite3DbMallocRawNN(pParse->db, sizeof(*pNew) );
  if( pNew==0 ){
    assert( pParse->db->mallocFailed );
    pNew = &standin;
  }
  if( pEList==0 ){
    pEList = sqlite3ExprListAppend(pParse, 0,
                                   sqlite3Expr(pParse->db,TK_ASTERISK,0));
  }
  pNew->pEList = pEList;
  pNew->op = TK_SELECT;
  pNew->selFlags = selFlags;
  pNew->iLimit = 0;
  pNew->iOffset = 0;
  pNew->selId = ++pParse->nSelect;
  pNew->addrOpenEphm[0] = -1;
  pNew->addrOpenEphm[1] = -1;
  pNew->nSelectRow = 0;
  if( pSrc==0 ) pSrc = sqlite3DbMallocZero(pParse->db, sizeof(*pSrc));
  pNew->pSrc = pSrc;
  pNew->pWhere = pWhere;
  pNew->pGroupBy = pGroupBy;
  pNew->pHaving = pHaving;
  pNew->pOrderBy = pOrderBy;
  pNew->pPrior = 0;
  pNew->pNext = 0;
  pNew->pLimit = pLimit;
  pNew->pWith = 0;
#ifndef SQLITE_OMIT_WINDOWFUNC
  pNew->pWin = 0;
  pNew->pWinDefn = 0;
#endif
  if( pParse->db->mallocFailed ) {
    clearSelect(pParse->db, pNew, pNew!=&standin);
    pNew = 0;
  }else{
    assert( pNew->pSrc!=0 || pParse->nErr>0 );
  }
  assert( pNew!=&standin );
  return pNew;
}
(2) By Richard Hipp (drh) on 2020-05-25 01:02:09 in reply to 1 [link] [source]
perhaps this should be made into a global to avoid the warning.
The "
standin" local variable is a structure used to clean up and free
prior memory allocations associated with the parse tree after a malloc()
failure while trying to acquire space to build a "
Select" object.
We cannot use a global for this, as that would cause problems if two or more threads all suffer a malloc() failure at about the same time.
We cannot malloc for space. The whole reason for using "
standin" in the first place is that malloc isn't working.
Hence "
standin" must be a stack variable.
As you observe, the logic of the overall system and especially the assert() on the previous line demonstrate that the value returned cannot be the stack variable.
This problem only arises because SQLite is very careful to not leak memory nor crash following a malloc() failure. We test that using simulated malloc() failures.
Perhaps you should bring this false-positive warning message to the attention of the people who maintain the CGo compiler?
(3) By Richard Hipp (drh) on 2020-05-25 01:39:44 in reply to 2 [link] [source]
On second thought, there is a way to work around this false-positive warning (I think - I don't have a convenient way to test it.) See my proposed work-around on the cgo-warning-workaround branch.
This work-around makes SQLite very slightly larger and slower. So then the question becomes: Do we allow SQLite to grow slightly larger and slower on billions of devices all over the world in order to silence a false-positive compiler warning from CGo? To be fair, we have already done that for MSVC. But MSVC seems like a more important compiler than CGo. I think I'll leave the patch on the branch while we ponder this question.
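For readers curious what such a work-around can look like, here is a miniature, hedged sketch (illustrative names only, not the code on the branch): the idea is to return the heap pointer variable itself, which the compiler can prove never holds the address of the stack object.

```cpp
#include <cassert>
#include <cstdlib>

struct Thing { int payload; };

static void clear_thing(Thing* t, bool free_object) {
    /* ...release any substructure owned by *t here... */
    if (free_object) std::free(t);
}

// Returns a heap-allocated Thing, or nullptr on allocation failure.
// 'standin' is only ever used locally for cleanup; returning
// 'allocated' rather than 'p' keeps -Wreturn-local-addr quiet.
static Thing* make_thing(int payload) {
    Thing standin;
    Thing* allocated = static_cast<Thing*>(std::malloc(sizeof(Thing)));
    Thing* p = allocated ? allocated : &standin;
    p->payload = payload;
    if (allocated == nullptr) {
        clear_thing(p, /*free_object=*/false);  // clean up via the stand-in
        return nullptr;
    }
    return allocated;  // provably not &standin
}
```

Carrying the extra pointer around is the "very slightly larger and slower" cost mentioned above.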
(6) By Adam S Levy (alaskanarcher) on 2020-05-25 19:29:49 in reply to 3 [link] [source]
Thanks for your thoughtful response.
I don't fully understand why the standin is necessary but I absolutely trust your expertise. The warning can be tolerated. It doesn't stop or hurt the build. It just will confuse Golang users of the package.
I think I will just apply your patch in the amalgamation that the Golang package builds, since if size and speed is the ultimate priority for a user, Golang is probably the wrong language choice anyway.
(7) By Adam S Levy (alaskanarcher) on 2020-05-25 19:37:56 in reply to 6 [link] [source]
BTW I tried your patch and it does silence the warning. Thank you.
(8) By kyle on 2020-05-25 21:09:41 in reply to 3 [link] [source]
CGo isn't the compiler for the C code in this case, just a wrapper for CC, which is gcc in Adam's case. I've tried to replicate the warning with a few gcc versions and the amalgamation, but cannot.
(9) By Mike (mike.mcternan) on 2020-05-26 06:20:03 in reply to 3 [link] [source]
Your work-around works for me too.
I see this with gcc 10.1, as shipped with Fedora 32, when compiling at -O2 and above:
$ gcc -v
Using built-in specs.
COLLECT_GCC=/bin/gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/10/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --enable-offload-targets=nvptx-none --without-cuda-driver --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)
$ $ gcc -O2 -c sqlite3.c
sqlite3.c: In function ‘sqlite3SelectNew’:
sqlite3.c:128048:10: warning: function may return address of local variable [-Wreturn-local-addr]
128048 | return pNew;
| ^~~~
sqlite3.c:128008:10: note: declared here
128008 | Select standin;
| ^~~~~~~
This was with sqlite-amalgamation-3310100, but fixed by your work-around at -O2 and -Wall -O3.
Note that I did try other workarounds, but couldn't get any to work.
Specifically I think the problem in the original code is that gcc can't track
pParse->db->mallocFailed, either having to assume it could be aliased or modified by intermediate function calls.
Replacing the condition above the call to clearSelect() with if( pNew==&standin || pParse->db->mallocFailed ) should be robust (and cheap), but still doesn't fix the warning.
I'm not sure how much overhead your workaround adds (it looks fairly cheap to my eyes, but I am not a compiler!) but I'd guess the warning will become a lot more prevalent now GCC 10 is shipping with distros.
(10) By Mike (mike.mcternan) on 2020-05-30 07:32:58 in reply to 3 [link] [source]
I see this made it back to trunk already, but just as an addendum, I roughly checked the overhead of the fix, using gcc (GCC) 10.1.1 20200507 (Red Hat 10.1.1-1) at -O2:
It doesn't look any different with or without the workaround applied to the last pair of releases. Phew!
(11) By Richard Hipp (drh) on 2020-05-30 10:06:04 in reply to 10 [link] [source]
I measure size and performance using -Os with GCC 5.4.0. Your mileage may vary.
(16) By Mike (mike.mcternan) on 2020-05-31 03:03:53 in reply to 11 [link] [source]
Okay, last one on this topic for me, but I'm getting better mileage with -Os and the fix:
gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC) -Os
Thank you for your work on this, and sqlite as a whole.
(12) By anonymous on 2020-05-30 20:26:42 in reply to 3 [link] [source]
Why not just declare
standin static, properly initialized?
You don't intend to return its value, just testing the address in assert. This should silence the warning.
(13) By Richard Damon (RichardDamon) on 2020-05-30 21:25:51 in reply to 12 [link] [source]
One thought is that it wouldn't be thread-safe if two threads both hit the condition. The memory IS used internally to complete the operation, it just isn't returned.
(14) By anonymous on 2020-05-30 23:03:09 in reply to 13 [link] [source]
From the code it looks like the variable's address value is used as a sentinel; it's read-only, used as a const value. As such it is thread-safe.
Moreover, static variable's address will be const by definition.
(15) By Richard Hipp (drh) on 2020-05-30 23:54:41 in reply to 14 [link] [source]
A parse tree is being constructed. The job of the sqlite3SelectNew() routine is to take a bunch of subtrees as input (the parameters to the sqlite3SelectNew() call), allocate a new Select object, attach the subtrees to the Select object, then return the new Select object. Ownership of the subtrees passes from the caller into the new Select object.
If an out-of-memory (OOM) error occurs, sqlite3SelectNew() should return NULL. But before doing so, it also has to free all of the substructure that was passed in as parameters (because it owns it). The easiest way to delete all that substructure is to load it all into a Select object, and invoke the destructor on that Select object. But if the OOM occurred while allocating the Select object itself, what can you use to load the substructure into?
Solution: Load the substructure into a Select object on the stack, and call the destructor for that stack object instead. The destructor in this case is the clearSelect() routine. clearSelect() always frees the substructure, but it only frees the Select object itself if the second parameter is true. That parameter is false when we are deleting the stack-based Select object, since obviously it would be a problem if you tried to free an object on the stack.
So the "
standin" variable is used (though rarely - only when an OOM occurs),
and it could in theory be used simultaneously by two different threads. So
it cannot be static.
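That pattern (build the object on the stack when the heap fails, run the destructor on it, but never free the stack copy) can be reduced to a small illustrative sketch. The names here are invented and simulate_oom stands in for a real malloc failure:

```cpp
#include <cstdlib>

struct Expr { int v; };

struct Select {
    Expr* where;   /* owned subtree */
    Expr* having;  /* owned subtree */
};

// Frees the owned substructure; frees the Select itself only when asked,
// so it is safe to run on a stack-allocated stand-in.
static void clear_select(Select* s, bool free_object) {
    std::free(s->where);
    std::free(s->having);
    if (free_object) std::free(s);
}

// Takes ownership of 'where' and 'having' and frees them even when the
// Select itself cannot be allocated: the stack 'standin' holds the
// subtrees just long enough to run the destructor on them.
static Select* select_new(Expr* where, Expr* having, bool simulate_oom) {
    Select standin;
    Select* s = simulate_oom
                    ? nullptr
                    : static_cast<Select*>(std::malloc(sizeof(Select)));
    if (s == nullptr) {
        standin.where = where;
        standin.having = having;
        clear_select(&standin, /*free_object=*/false);
        return nullptr;  // caller sees the OOM; nothing leaked
    }
    s->where = where;
    s->having = having;
    return s;
}
```

On success the caller owns the returned object; on a simulated OOM the subtrees are released through the stand-in and nullptr is returned.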
(17) By anonymous on 2020-05-31 03:08:24 in reply to 15 [link] [source]
Thank you for the explanation. I can see that the current version (using pAllocated) in this case is the most easiest fix to silence the eager compiler.
Looking closely into the details, it seems that the whole
sqlite3SelectNew() function is executed under an active mutex, since it calls
sqlite3DbMallocRawNN() which expects
assert( sqlite3_mutex_held(db->mutex) );.
Does this mean the whole function is effectively single-threaded?
(18) By Keith Medcalf (kmedcalf) on 2020-05-31 05:15:31 in reply to 17 [source]
No.
SQLite3 is multiple-entrant (can be executed on multiple threads simultaneously) but is only singly-entrant per connection.
In other words, the sqlite3SelectNew() function will only be executed on one thread per connection, however, multiple connections may execute the function concurrently. The mutex protects simultaneous access to the connection and is not a single execution path (a critical section).
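A minimal sketch of that threading model (illustrative only, not SQLite's code): each connection owns a mutex, so threads on different connections never contend, while threads sharing a connection are serialized.

```cpp
#include <mutex>
#include <thread>

struct Connection {
    std::mutex mu;     // serializes all work on *this* connection
    long counter = 0;  // stands in for per-connection state
};

// Every entry point locks the connection's own mutex, so two threads on
// the same connection are serialized, but two threads on different
// connections proceed fully in parallel.
static void do_work(Connection* c, int reps) {
    for (int i = 0; i < reps; ++i) {
        std::lock_guard<std::mutex> hold(c->mu);
        ++c->counter;
    }
}
```

Two threads calling do_work on distinct Connection objects never block each other; two threads sharing one Connection still produce a race-free count.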
(20) By anonymous on 2020-07-10 21:39:01 in reply to 2 [link] [source]
Hi, I have the same warning with the latest SQLite & GCC 10.
As you observe, the logic of the overall system and especially the assert() on the previous line demonstrate that the value returned cannot be the stack variable.
But assert does nothing when NDEBUG is defined, and this is exactly what SQLite does by default:
** Setting NDEBUG makes the code smaller and faster by disabling the
** assert() statements in the code.  So we want the default action
** to be for NDEBUG to be set and NDEBUG to be undefined only if SQLITE_DEBUG
** is set.  Thus NDEBUG becomes an opt-in rather than an opt-out
** feature.
*/
#if !defined(NDEBUG) && !defined(SQLITE_DEBUG)
# define NDEBUG 1
#endif
#if defined(NDEBUG) && defined(SQLITE_DEBUG)
# undef NDEBUG
#endif
Is it possible to silence it with something like:
assert( pNew!=&standin ); + pNew = NULL;
?
Thanks.
(21) By Richard Hipp (drh) on 2020-07-10 21:53:08 in reply to 20 [link] [source]
See for the actual work-around. The prerelease snapshot contains the work-around.
I consider this to be a bug in GCC-10. But I have worked around the bug nevertheless. It isn't the first compiler bug to be worked around (some prior bugs were quite a bit more serious) and it probably won't be the last.
(4) By kyle on 2020-05-25 17:11:58 in reply to 1 [link] [source]
Adam, can you give some more information on what compiler cgo is using and which flags? I couldn't reproduce on my local machine. The full output of
go env would help.
(5) By Adam S Levy (alaskanarcher) on 2020-05-25 19:21:26 in reply to 4 [link] [source]
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/aslevy/.cache/go-build"
GOENV="/home/aslevy/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/aslevy/go"
GOPRIVATE=""
GOPROXY="direct"
GOROOT="/usr/lib/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/aslevy/repos/go/AdamSLevy/flagbind/go.mod"049582596=/tmp/go-build -gno-record-gcc-switches"
Please also note that if you are building the master branch of
crawshaw.io/sqlite I pushed a small patch to silence the warning, but from Richard Hipp's response I think that my patch may be foolish. If you want to reproduce it try commit
1afc5f0 on the repo at
(19) By Chris Carson (chriscarson) on 2020-06-20 10:38:27 in reply to 1 [link] [source]
Very helpful thread. For me, just turning off the category of gcc warnings is an effective workaround.
export CGO_CFLAGS="-g -O2 -Wno-return-local-addr" | https://sqlite.org/forum/forumpost/e44657f5468907af?t=h | CC-MAIN-2022-33 | refinedweb | 2,374 | 62.27 |
C# | Hashtable Class
The Hashtable class represents a collection of key/value pairs that are organized based on the hash code of the key. This class comes under the System.Collections namespace. The Hashtable class provides various methods for performing different operations on hashtables. In a Hashtable, keys are used to access the elements present in the collection. For very large Hashtable objects, you can increase the maximum capacity to 2 billion elements on a 64-bit system.
Characteristics of Hashtable Class:
- In Hashtable, key cannot be null but value can be.
- In Hashtable, key objects must be immutable as long as they are used as keys in the Hashtable.
- The capacity of a Hashtable is the number of elements that Hashtable can hold.
- A hash function is provided by each key object in the Hashtable.
Constructors
Example:
Output:
Hashtable:
5-C#
4-of
3-tutorials
2-to
1-welcome
Properties
Example:
Output:
has1 Hashtable is not synchronized.
has2 Hashtable is synchronized.
Total Number of Elements in has1: 5
Methods
Example:
Output:
Hashtable Contains:
2-to
3-Geeks
5-Geeks
1-Welcome
4-for

Hashtable after removing element 4:
2-to
3-Geeks
5-Geeks
1-Welcome
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
Here is a simple example to introduce the class
circular_buffer.
For all examples, we need this include:
#include <boost/circular_buffer.hpp>
This example shows construction, inserting elements, overwriting and popping.
// Create a circular buffer with a capacity for 3 integers.
boost::circular_buffer<int> cb(3);

// Insert three elements into the buffer.
cb.push_back(1);
cb.push_back(2);
cb.push_back(3);

int a = cb[0];  // a == 1
int b = cb[1];  // b == 2
int c = cb[2];  // c == 3

// The buffer is full now, so pushing subsequent
// elements will overwrite the front-most elements.
cb.push_back(4);  // Overwrite 1 with 4.
cb.push_back(5);  // Overwrite 2 with 5.

// The buffer now contains 3, 4 and 5.
a = cb[0];  // a == 3
b = cb[1];  // b == 4
c = cb[2];  // c == 5

// Elements can be popped from either the front or the back.
cb.pop_back();   // 5 is removed.
cb.pop_front();  // 3 is removed.

// Leaving only one element with value = 4.
int d = cb[0];  // d == 4
You can see the full example code at circular_buffer_example.cpp.
The full annotated description is in the C++ Reference section. | https://www.boost.org/doc/libs/1_65_1/doc/html/circular_buffer/example.html | CC-MAIN-2018-26 | refinedweb | 207 | 60.01 |
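If Boost is not available, the overwrite-on-full behaviour shown above can be mimicked with a small std::deque-based wrapper (a sketch of the semantics only, not a drop-in replacement for boost::circular_buffer):

```cpp
#include <cstddef>
#include <deque>

template <typename T>
class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity) : cap_(capacity) {}

    void push_back(const T& v) {
        if (d_.size() == cap_) d_.pop_front();  // overwrite the front-most element
        d_.push_back(v);
    }
    void pop_front() { d_.pop_front(); }
    void pop_back()  { d_.pop_back(); }
    const T& operator[](std::size_t i) const { return d_[i]; }
    std::size_t size() const { return d_.size(); }

private:
    std::size_t cap_;
    std::deque<T> d_;
};
```

With a capacity of 3, pushing 1 through 5 leaves {3, 4, 5}, matching the Boost example above.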
I'd like to use aspectj to profile a library. My plan was to mark methods that require profiling with an annotation:
@Profiled("logicalUnitOfWork")
And then have an aspect that would fire before and ...
If I put:
public CountryState CountryState.find(long id) {
    return (CountryState) findById(CountryState.class, id);
}
I am writing a program, and I would only like the user to be able to make certain method calls every 1 second. I'm having trouble figuring out the best way ...
I am attempting to write an aspect which monitors calls to public methods on a variety of objects, but ignore calls to self. For this, I have an aspect like this:
abstract ...
ThisJoinPoint can only get the current method information; is there any way to get the caller method information?
I was wondering if there is any way of determining what method was active when this aspect was triggered. I found the method JoinPoint.getSourceLocation() which returns the source code line. I realised ...
Is it possible to set a pointcut on a native method call with AspectJ? I tried the following aspect:
public aspect EmailAspect {
    pointcut conn() : call(* java.net.PlainSocketImpl.socketConnect(..));
    before() ...
I want to create a pointcut to target a call to a method from specific methods.
take the following:
class Parent {
public foo() {
//do something
...
I've read some articles about AspectJ, and I know it can enhance classes, which is attractive. I have a very basic question that I can't find a clear answer to:
Can aspectj add methods to ...
I've got generic method Foo.foo():
class Foo {
static native T <T> foo();
}
Bar bar = Foo.foo();
I am doing some profiling with AspectJ.
I need to uniquely identify the instances of a method where the field is being accessed
For example:
public class Class {
    int a;
    int b;
public void method1(){
...
Here is code:
IDefaultInterface.aj:
public interface IDefaultInterface {
public void m1();
static aspect Impl{
public int f1;
...
Let's imagine the following aspect:
aspect FaultHandler {
pointcut services(Server s): target(s) && call(public * *(..));
before(Server s): services(s) {
// How to ...
I have a Hibernate transactional method "doImportImpl" which runs multi-threaded. Certain records however need to be imported in sequence, so the code structure is roughly like this:
public RecordResult doImportImpl(String data) {
...
Hi. I'm relatively new to AspectJ (used in the past, but not for a while, so I'm a little bit rusty). I've setup a pointcut to catch all method calls, which it does, and then in the advice, I'm trying to print out the actual method that was called. I can do that using the following: thisJoinPoint.getSignature().toString() and that works just ... | http://www.java2s.com/Questions_And_Answers/Java-Enterprise/aspectj/method.htm | CC-MAIN-2018-39 | refinedweb | 443 | 58.48 |