main.cpp:

```cpp
#include "SmartPtr.h"
#include <string>
#include <iostream>

using namespace SmartPtrNames;

int main(int argc, char *argv[]) {
    // some simple tests
    SmartPtr<int> p = new int();
    *p = 5;
    printAllCounts("first");
    cout << "p = " << *p << &p << endl;

    SmartPtr<int> q = p;
    cout << "p = " << *p << " q = " << *q << endl;
    printAllCounts("second");

    SmartPtr<int> r = new int();
    *r = 2;
    printAllCounts("third");

    p = r;
    cout << "p = " << *p << " q = " << *q << " r = " << *r << endl;
    printAllCounts("fourth");

    q = p;
    cout << "p = " << *p << " q = " << *q << " r = " << *r << endl;

    q = p = r = 0;
    printAllCounts("fifth");
    return 1;
}

namespace SmartPtrNames {
    using namespace std;

    void printAllCounts(char* word) { };
}
```

SmartPtr.h:

```cpp
#ifndef _SMARTPTR_H
#define _SMARTPTR_H

template <class T>
class SmartPtr {
public:
    SmartPtr(T* ptr = 0) : itsCounter(0) {
        if (ptr) itsCounter = new counter(ptr);
    };

    ~SmartPtr() { release(); };

    SmartPtr(const SmartPtr<T>& sptr) { acquire(sptr.itsCounter); };

    SmartPtr<T>& operator= (const SmartPtr<T>& sptr) {
        if (this != &sptr) {
            release();
            acquire(sptr.itsCounter);
        }
        return *this;
    };

    T* operator->() {};
    T& operator*() {};

private:
    struct counter {
        counter(T* p = 0, unsigned c = 1) : ptr(p), count(c) {}
        T* ptr;
        unsigned count;
    }* itsCounter;

    void acquire(counter* c) throw() {
        // increment the count
        itsCounter = c;
        if (c) ++c->count;
    }

    void release() {
        // decrement the count, delete if it is 0
        if (itsCounter) {
            if (--itsCounter->count == 0) {
                delete itsCounter->ptr;
                delete itsCounter;
            }
            itsCounter = 0;
        }
    }
};

#endif /* _SMARTPTR_H */
```
Home made Smart Pointer in C++
My = operator isn't working how I'd want it to; any help would be appreciated.
1 Replies - 967 Views - Last Post: 04 September 2009 - 09:55 PM
#1
Home made Smart Pointer in C++
Posted 04 September 2009 - 07:33 PM
I know this isn't the greatest place to post this, but this forum has the fastest response times I've found. This is my main file; I'm having an error on the line `SmartPtr<int> q = p;`. It's causing a seg fault and talking about a possibly corrupted stack, so I'm wondering if I'm getting some memory leakage. So first is my main code and then my SmartPtr.h file.
Replies To: Home made Smart Pointer in C++
#2
Re: Home made Smart Pointer in C++
Posted 04 September 2009 - 09:55 PM
SmartPtr is a class. So, *p isn't dereferencing the value pointed to by the pointer, but trying to dereference the class to a value.
IE: p is a class. if p was a ptr to a class, *p would refer to the class. In this case, *p doesn't exist even if you set it equal to new int().
http://www.dreamincode.net/forums/topic/124000-home-made-smart-pointer-in-c/
We use count_if when we want to get the number of elements in a sequence that respect a clause we specify; find_if is used to find the first element in a sequence matching our requirements.
Without boost or C++11, we would have to define a functor to specify the behavior we are interested in. As we can see in this example, the newer techniques make the code cleaner and more readable:
```cpp
#include <iostream>
#include <vector>
#include <functional>
#include <algorithm>
#include <iterator>
#include "boost/bind.hpp"

using namespace std;

namespace {
    template<class T>
    void dump(vector<T>& vec) {
        copy(vec.begin(), vec.end(), ostream_iterator<T>(cout, " "));
        cout << endl;
    }
}

void bind04() {
    vector<int> vec;
    vec.push_back(12);
    vec.push_back(7);
    vec.push_back(4);
    vec.push_back(10);
    dump(vec);

    cout << endl << "Using boost::bind" << endl;
    cout << "Counting elements in (5, 10]: ";
    // 1
    auto fb = boost::bind(logical_and<bool>(),
        boost::bind(greater<int>(), _1, 5),
        boost::bind(less_equal<int>(), _1, 10));
    int count = count_if(vec.begin(), vec.end(), fb);
    cout << "found " << count << " items" << endl;

    cout << "Getting first element in (5, 10]: ";
    vector<int>::iterator it = find_if(vec.begin(), vec.end(), fb);
    if(it != vec.end())
        cout << *it << endl;

    cout << endl << "Same, but using lambda expressions" << endl;
    cout << "Counting elements in (5, 10]: ";
    // 2
    auto fl = [](int x){ return x > 5 && x <= 10; };
    count = count_if(vec.begin(), vec.end(), fl);
    cout << "found " << count << " items" << endl;

    cout << "Getting first element in (5, 10]: ";
    it = find_if(vec.begin(), vec.end(), fl);
    if (it != vec.end())
        cout << *it << endl;
}
```

1. Since we use the same predicate a couple of times, it's a good idea to store it in a local variable (here using the cool C++11 'auto' keyword to save the time of figuring out the correct type definition for the predicate). The construct is a bit verbose, but should be clear enough in its sense: we are looking for a value in the interval (5, 10]; count_if will count all the elements in the sequence respecting this rule; find_if will return the iterator to the first element for which that is valid - or end() if no one will do.
2. It is so much cleaner implementing the same functionality using the C++11 lambda syntax.
The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed. | http://thisthread.blogspot.com/2010/05/stdcountif-and-stdfindif.html | CC-MAIN-2018-17 | refinedweb | 400 | 59.13 |
In this series of articles, we will discuss all aspects of how to create a Ruby gem (gem is just a fancy word for “library” or “plugin”). In this section we will make the initial preparations, create the project structure, define the gemspec, and proceed to writing the actual gem.
All in all, this series will cover the following topics:
- Creating the gem structure.
- Adding a gemspec.
- Integrating Rubocop.
- Allowing specification of gem options.
- Setting up a testing suite using RSpec.
- Generating a dummy Rails application for testing.
- Creating and testing installation tasks.
- Creating and testing rake tasks.
- Working and testing the third-party API.
- Managing ZIP files.
- Setting up TravisCI and Codecov services.
- And more!
By the end of the series, you’ll be able to create your own Ruby gem.
We will tackle the above concepts using a “learn by example approach”. I’ll show you how to create a new gem for the Rails app. This gem will allow the exchange of translation files between the Rails app and Lokalise TMS. Basically, it will provide two main rake tasks: import and export. Running the corresponding task will either download translations from Lokalise to Rails, or upload translations from your app to Lokalise. These tasks will have additional configuration options so that the user can have full control over this process. While this functionality is not overly complex, it will allow us to discuss many specifics regarding the gem creation process.
The final result can be found at github.com/bodrovis/lokalise_rails.
Second part of the series: lokalise.com/blog/how-to-create-a-ruby-gem-testing-suite.
Third part of the series: lokalise.com/blog/how-to-create-a-ruby-gem-publishing.
Prerequisites
In this tutorial, I will assume that you have a basic knowledge of the Ruby language and the Rails framework. The code samples won't be too complex, so being a Ruby expert is not required in any case.
You will also need the following software:
- Ruby 2.5 or above. Windows users may take advantage of RubyInstaller.
- RubyGems and Bundler. Install by running `gem update --system` and `gem install bundler`.
- Git version control system (not strictly required).
Creating a project skeleton
While we could utilize a helper library like Juwelier to generate a boilerplate project structure, I’ll be adding every file manually from scratch. This will allow us to discuss every aspect of the Ruby gem creation in greater detail.
Start by creating a new project directory: I'll call mine `lokalise_rails`. Now let's proceed to adding specific files to it.
Gemspec
The most important file to create is the gemspec as it will contain specification for your library. Typically, this file provides the following info:
- Gem name, version, and description.
- Authors of the gem.
- Required Ruby version.
- List of the project files.
- List of dependencies.
Rubygems.org provides a nice summary of all fields supported in gemspec.
Within your project directory create a new file named `GEM_NAME.gemspec`, where `GEM_NAME` is the name of your brand-new library. In my case, the filename is `lokalise_rails/lokalise_rails.gemspec`.
Defining main specifications
Start by requiring a file with the gem version (we are going to add it later) and by providing a specification block:
```ruby
require File.expand_path('lib/lokalise_rails/version', __dir__)

Gem::Specification.new do |spec|
end
```
Now use the `spec` local variable to define the gem's specifications. Here's an example:
```ruby
require File.expand_path('lib/lokalise_rails/version', __dir__)

Gem::Specification.new do |spec|
  spec.name                  = 'lokalise_rails'
  spec.version               = LokaliseRails::VERSION
  spec.authors               = ['Ilya Bodrov']
  spec.summary               = 'Lokalise integration for Ruby on Rails'
  spec.description           = 'This gem allows to exchange translation files ' \
                               'between your Rails app and Lokalise TMS.'
  spec.homepage              = 'https://github.com/bodrovis/lokalise_rails'
  spec.license               = 'MIT'
  spec.platform              = Gem::Platform::RUBY
  spec.required_ruby_version = '>= 2.5.0'
end
```
Name
`name` is, well, the name of your gem. Make sure that the chosen name is not already in use (which means you should not call your gem `lokalise_rails`, as this name is already taken by me). To check whether a gem with any given name exists, use the search box at the rubygems.org website. A good name should briefly explain the purpose of the gem, for example: `jquery-rails` (adds jQuery to the Rails app), `database_cleaner` (cleans the test database), and `angular_rails_csrf` (makes Rails CSRF play nicely with Angular). Never use names like `Array` or `String` for your plugin! Some gems have fancier names like `puma` or `koala`. In certain cases, that's fine (after all, it's better than `yet_another_webserver`), but in general I'd suggest sticking to something more basic. This is especially important if you are creating a niche solution.
Version
`version` provides the version of your gem. Utilizing semantic versioning is recommended: `MAJOR.MINOR.PATCH` (for example, `2.1.3`). `MAJOR` should be incremented only when you are introducing breaking changes that are not backwards compatible. For instance, if you rename or remove a method, that's a breaking change. Increment `MINOR` when you add new features in a backwards compatible manner. For example, adding a new method without modifying the existing ones is backwards compatible. Finally, increment `PATCH` when you make backwards compatible bug fixes. For example, when you are updating a method so that it returns a proper value under certain conditions. We are going to store the gem version under the `VERSION` constant defined in a separate file.
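As a quick sanity check on how RubyGems interprets such version strings: they are compared numerically, segment by segment, not as plain strings. This snippet is just an illustration, not part of the gem:

```ruby
# Gem::Version ships with RubyGems, which is loaded by default in modern Ruby.
# Segment-by-segment comparison: 10 > 9, so 2.10.0 is newer than 2.9.9.
puts Gem::Version.new('2.10.0') > Gem::Version.new('2.9.9') # => true

# A plain string comparison would get this wrong ('1' sorts before '9'):
puts '2.10.0' > '2.9.9'                                     # => false
```

This is why bumping `MINOR` past 9 is perfectly safe — RubyGems never compares versions lexicographically.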
Authors
`authors` lists one or more authors of the gem. All authors are going to be displayed on the gem's page at rubygems.org.
Summary
`summary` is a very short description of the gem's purpose. This summary is shown when you are running the `gem list -d` command on your PC.
Description
`description` is a detailed explanation of the gem's purpose: it usually contains a couple of paragraphs. Note that the description can't have any formatting and should not contain any usage examples. The description is shown on the RubyGems website.
Homepage

`homepage` provides the URL of the gem's home page. Usually it points to the GitHub repo, but that's not always the case.
License
`license` provides the name of the gem license. The most common license type for open source projects is MIT, which means that anyone can do basically anything with the source code. However, the original authors must always be credited in this case. It also means that the authors provide no warranty for the project and do not take any responsibility for the potential harm caused by using the software.
The simplest way to provide a license is to specify its ID that can be found at spdx.org. Also, you may utilize the license chooser service. While you can omit this field, I would not recommend doing so: knowing the license type is very important for developers that are going to use your gem in their corporate projects.
Platform
The `platform` field is optional, but I usually provide it for the sake of completeness. In most cases its value is just `Gem::Platform::RUBY`.
Required Ruby version
`required_ruby_version` usually provides the minimal Ruby version required to run this gem. As `lokalise_rails` employs some newer language features, I've set the minimal version to 2.5. This is quite alright, because Ruby 2.4 is not supported by the core team anymore.
Listing project files
The next step is to list all the files that your gem includes. Use the `files` attribute:
```ruby
require File.expand_path('lib/lokalise_rails/version', __dir__)

Gem::Specification.new do |spec|
  # ...other specs...

  spec.files = Dir['README.md', 'LICENSE', 'CHANGELOG.md', 'lib/**/*.rb',
                   'lib/**/*.rake', 'lokalise_rails.gemspec', '.github/*.md',
                   'Gemfile', 'Rakefile']
end
```
Note that the `files` attribute is mandatory and must contain files only (not directories). If any file is missing from the list, it won't be available during the gem usage! As our gem will contain both Ruby files and Rake tasks, I've added both file types.
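To make the glob behavior concrete, here is a small self-contained sketch showing what `Dir[]` actually hands to `spec.files` — a flat array of concrete file paths. It uses a temporary directory rather than a real gem, so the file names are made up for the demo:

```ruby
require 'fileutils'
require 'tmpdir'

# Build a throwaway directory tree and expand globs against it,
# just like spec.files = Dir[...] does in a gemspec.
relative_files = Dir.mktmpdir do |root|
  FileUtils.mkdir_p(File.join(root, 'lib/lokalise_rails'))
  FileUtils.touch(File.join(root, 'lib/lokalise_rails/version.rb'))
  FileUtils.touch(File.join(root, 'README.md'))

  files = Dir[File.join(root, 'lib/**/*.rb')] + Dir[File.join(root, 'README.md')]
  files.map { |f| f.delete_prefix("#{root}/") }
end

puts relative_files
# lib/lokalise_rails/version.rb
# README.md
```

Directories themselves never appear in the result — only files — which is exactly what the `files` attribute requires.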
Let’s also provide extra RDoc files:
```ruby
require File.expand_path('lib/lokalise_rails/version', __dir__)

Gem::Specification.new do |spec|
  # ...

  spec.extra_rdoc_files = ['README.md']
end
```
These files will be used by the RDoc documentation generator.
Listing dependencies
Last but not least comes providing the gem dependencies: that is, the libraries required to properly run it. There are two types of dependencies:
- Runtime dependencies — libraries mandatory to actually use the gem. Bundler installs these dependencies automatically when your gem is present in the `Gemfile` and the `bundle install` command is called. Some gems may have no runtime dependencies at all.
- Development dependencies — libraries that are required only when working with the gem source code and running the test suite. In other words, Bundler won't install these dependencies when your gem is included in, say, a Rails app.
In this part of the article, we’ll add all the runtime dependencies and a couple of development ones. We’ll talk about development dependencies in greater detail when writing tests for the gem.
So, to import and export translation files we’ll require the following runtime dependencies:
- `ruby-lokalise-api` — official Lokalise API client which I created a couple of years ago.
- `rubyzip` — library to manipulate ZIP files. We'll employ it when extracting translation bundles.
As for the development dependencies, I would like to add the following:
- `rubocop` — a great gem to check your code style and fix formatting issues.
- `rubocop-performance` — Rubocop extension to search for performance-related issues.
- `rubocop-rspec` — another Rubocop extension that checks RSpec test files.
Go ahead and add your dependencies, as follows:
```ruby
require File.expand_path('lib/lokalise_rails/version', __dir__)

Gem::Specification.new do |spec|
  # ...

  spec.add_dependency 'ruby-lokalise-api', '~> 3.1'
  spec.add_dependency 'rubyzip', '~> 2.3'

  spec.add_development_dependency 'rubocop', '~> 0.60'
  spec.add_development_dependency 'rubocop-performance', '~> 1.5'
  spec.add_development_dependency 'rubocop-rspec', '~> 1.37'
end
```
You may also utilize the `add_runtime_dependency` method, which does the same thing as `add_dependency`. I really recommend providing the dependency versions using the `~>` operator. For example, `~> 3.1` means that your gem works only with dependency version `3.x`, but not with `4.x` or `5.x`. Remember that the first number is the `MAJOR` version, which increments only if the library has breaking changes.
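To make the `~>` semantics concrete, here's a small illustration using RubyGems' own `Gem::Requirement` class (purely a demo, not gem code):

```ruby
# "~> 3.1" is the pessimistic constraint: >= 3.1 and < 4.0
req = Gem::Requirement.new('~> 3.1')

puts req.satisfied_by?(Gem::Version.new('3.1.0')) # => true
puts req.satisfied_by?(Gem::Version.new('3.9.9')) # => true  (any 3.x from 3.1 up)
puts req.satisfied_by?(Gem::Version.new('4.0.0')) # => false (MAJOR bump is excluded)
```

So users of your gem automatically receive minor and patch updates of a dependency, but never a new major version with breaking changes.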
Great job, the gemspec file is now ready!
Gemfile
Each project should have a `Gemfile`, so let's create one:
```ruby
# lokalise_rails/Gemfile
source 'https://rubygems.org'

gemspec
```
We are instructing Bundler to download all dependencies from the rubygems.org website. The actual dependencies list can be found in the gemspec, so there's nothing else to add to this file.
Library folder
Next, let’s create the
lib folder which is going to host all the plugin files. Here’s a sample directory structure:
- `lib`
  - `lokalise_rails.rb` — name this file after your gem. For now, this file is empty but later we'll add library-specific config options there.
  - `lokalise_rails` — name this directory after the gem. This folder will contain the "meat" of our plugin.
    - `version.rb` — this file will define the `VERSION` constant. Later you may use this constant in the code.
Add the following content to `lib/lokalise_rails/version.rb`:
```ruby
module LokaliseRails
  VERSION = '1.0.0'
end
```
Name the module after your gem. If the name contains multiple words separated with dashes or underscores, each separate word must start with a capital letter (camel case).
At this point, you may open your terminal, `cd` into the project directory, and run:
```shell
bundle i
```
This command will install all the dependencies and create a `Gemfile.lock` file. This file makes sure that your code is run with specific dependency versions. Make sure to periodically check for outdated dependencies by running:
```shell
bundle out
```
This command will check all your dependencies and tell you whether newer versions are available. Update dependency versions in the gemspec and run:
```shell
bundle u
```
Make sure that your gem plays nicely with new dependencies before publishing it! We'll talk about publishing your gem to rubygems.org in more detail later.
Readme
Providing a `README` file for your gem is very much recommended. Usually, a `README` sums up the gem's purpose and explains how to use it. Providing detailed usage instructions is a really good idea because otherwise your fellow developers may have a hard time working with your gem. In many cases the `README` is written in Markdown format. Therefore, create a new file `README.md` in the project root, for instance:
```markdown
# LokaliseRails

This gem provides [Lokalise]() integration for Ruby on Rails and allows to exchange translation files easily. It relies on [ruby-lokalise-api]() to send APIv2 requests.

## Getting started

### Requirements

This gem requires Ruby 2.5+ and Rails 5.1+.
```
The full README for the `lokalise_rails` gem can be found at GitHub.
License
We have already provided a license type in the gemspec, but let's also add the license text to a separate file, `LICENSE`:
```
MIT License

Copyright (c) 2020 Lokalise team, Ilya Bod
```
I can’t stress enough how important it is to provide a changelog for your project. A changelog (sometimes also called “History”) has to sum up all changes for every version of the gem. If a version has introduced breaking changes, make sure to highlight them and explain how to migrate. Create a new
CHANGELOG.md file inside the project root, like so:
```markdown
# Changelog

## 1.0.0 (01-Oct-20)

* Initial release
```
Once again: don’t forget to list changes within this file after publishing a new version!
Rubocop config
As the next step, let's add a Rubocop config file. In it, you can specify the target Ruby version and code formatting rules. Also, you may add specific files to an ignore list or disable certain checks. Create a new `.rubocop.yml` file in the project root as follows:
```yaml
require:
  - rubocop-performance
  - rubocop-rspec

AllCops:
  TargetRubyVersion: 2.5
  NewCops: enable
```
The full Rubocop config for the `lokalise_rails` gem can be found on GitHub. Also, make sure to check the official Rubocop documentation which lists all the available checks (called "cops") and their options.
You may add other Rubocop extensions as needed.
Rakefile
`Rakefile` contains Rake tasks available for your gem. For now, we'll define only Rubocop-related tasks:
```ruby
require 'rake'
require 'rubocop/rake_task'

RuboCop::RakeTask.new do |task|
  task.requires << 'rubocop-performance'
  task.requires << 'rubocop-rspec'
end
```
To run Rubocop, use the following command:
```shell
rubocop
```
Or run it with the rake task:
```shell
rake rubocop
```
If you’d like to automatically fix minor formatting issues, provide the
-a option. There’s also an
-A flag that enables “aggressive” mode, which fixes all found issues, except for those that cannot be fixed automatically. If Rubocop can’t resolve an issue, it will at least provide a hint and you can deal with it manually.
It is not mandatory to follow all the guidelines, therefore you may disable certain cops, for example:
```yaml
Style/Documentation:
  Enabled: false
```
GitHub files

Inside the `.github/` directory you may provide additional files that are specific to the GitHub platform. These are:
- Code of conduct
- Contributing guide
- Pull request template
- Issue templates (stored in a separate folder)
You can find the corresponding examples in the lokalise_rails repository on GitHub.
Git files
Okay, so we are nearly done with the initial skeleton of our project. The last thing to do is provide a `.gitignore` file and initialize a new Git repo. `.gitignore` should live inside the root of your project. It lists all files and folders that should not be tracked by Git. Here's a typical `.gitignore` file:
```
*.gem
coverage/*
Gemfile.lock
*~
.bundle
.rvmrc
log/*
measurement/*
pkg/*
.DS_Store
.env
spec/dummy/tmp/*
spec/dummy/log/*.log
```
When you are ready, initialize a new Git repo and perform the first commit:
```shell
git init
git add .
git commit -am "Initial commit"
```
Next, create a new repo using your favorite code hosting website (I really love GitHub, but you may stick to GitLab or Bitbucket if you wish) and push the code there. Nice work!
Defining gem options
So, we have seen how to create a basic Ruby gem file structure: it took quite a while, but now we know the purpose of each file. Before wrapping up this part, let’s proceed to fleshing out the gem. Specifically, we are going to define the options that the gem will accept:
- `api_token` — required option that will contain the Lokalise API token. This token will then be used to send the API requests.
- `project_id` — required option containing the Lokalise project ID to export and import files to/from.
- `locales_path` — full path to the directory with the Rails translation files. Should default to `/config/locales` under the `Rails.root`.
- `file_ext_regexp` — regular expression to employ when filtering out translation files. This regexp will be applied to file extensions. By default, it should select only YAML translation files.
- `import_opts` — translation file import options. These options should have sensible defaults.
- `import_safe_mode` — boolean option which defaults to `false`. When enabled, the import rake task will check whether the target directory to which translations are downloaded is empty.
- `export_opts` — translation file export options. These options should have sensible defaults.
- `skip_file_export` — lambda or procedure containing additional exclusion criteria for the exported translation files. After all, the `locales` directory may contain dozens of translation files and the developer needs a way to pick only the required ones.
Adding option accessors
Okay, so where do we define these options? Typically, such general configurations should be placed in the `lib/lokalise_rails.rb` file (this file will have a different name in your case). Therefore, let's start by defining a new module within it:
```ruby
module LokaliseRails
end
```
You may also create a class instead of a module: it depends on whether or not you’d like it to be instantiated. The module must be named after your gem and all gem-specific code must be namespaced under this module. Never ever create gem-specific classes or constants outside of this namespace because this may lead to name clashes.
Mandatory config
Now, the question is: how do I want to manage the gem options? These options will be accessed by a rake task, so something like `LokaliseRails.api_token` or `LokaliseRails.project_id` should do the trick. This means that we require module attributes. Define them using the `class << self` trick:
```ruby
module LokaliseRails
  class << self
    attr_accessor :api_token, :project_id
  end
end
```
So, this is just a good old `attr_accessor` which allows us to read and write the `api_token` and `project_id`. These options are mandatory and do not have any defaults.
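To see what `class << self` actually buys us, compare it with a plain `attr_accessor` in a module body. Both module names here are made up purely for the demo:

```ruby
module Plain
  attr_accessor :api_token # defines INSTANCE methods, not Plain.api_token
end

module WithSingleton
  class << self
    attr_accessor :api_token # defined on the module object itself
  end
end

WithSingleton.api_token = '123'

puts WithSingleton.api_token        # => 123
puts Plain.respond_to?(:api_token)  # => false
```

Without `class << self`, the accessors land on instances of (classes that include) the module, so calling them directly on the module would raise `NoMethodError`.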
Optional config
Other options should have default values, which means we have to use `attr_writer` for them:
```ruby
module LokaliseRails
  class << self
    attr_accessor :api_token, :project_id

    attr_writer :import_opts, :import_safe_mode, :export_opts,
                :locales_path, :file_ext_regexp, :skip_file_export
  end
end
```
We will define attribute readers ourselves. Why? Because we need to check whether the user of this gem has provided a custom value for each option. If the custom value is set, we simply use it. If it is not set, we provide a default value instead:
```ruby
module LokaliseRails
  class << self
    attr_accessor :api_token, :project_id

    attr_writer :import_opts, :import_safe_mode, :export_opts,
                :locales_path, :file_ext_regexp, :skip_file_export

    def locales_path
      @locales_path || "#{Rails.root}/config/locales"
    end
  end
end
```
So, if the `@locales_path` has a value (in other words, if it is not `nil`), we return that value. If `@locales_path` is `nil`, we provide the default path to the translation files directory.
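Here is the same "writer plus custom reader" pattern in isolation, runnable outside of Rails. The `PathConfig` module and the default path are made up for this demo:

```ruby
module PathConfig
  class << self
    # Users may assign a custom value...
    attr_writer :locales_path

    # ...and the reader falls back to a default when nothing was assigned
    def locales_path
      @locales_path || '/app/config/locales'
    end
  end
end

puts PathConfig.locales_path                # => /app/config/locales (the default)

PathConfig.locales_path = '/custom/locales'
puts PathConfig.locales_path                # => /custom/locales
```

Note that the default is computed lazily, on every read, so assigning `nil` later would bring the default back.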
Provide some other reader methods, for instance:
```ruby
module LokaliseRails
  class << self
    # ...

    def file_ext_regexp
      @file_ext_regexp || /\.ya?ml\z/i
    end

    def import_opts
      @import_opts || {
        format: 'yaml',
        placeholder_format: :icu,
        yaml_include_root: true,
        original_filenames: true,
        directory_prefix: '',
        indentation: '2sp'
      }
    end

    def export_opts
      @export_opts || {}
    end
  end
end
```
By default, translation files should have `.yml` or `.yaml` extensions. Import options also have sensible defaults (a full list of available options can be found in the Lokalise API docs). Export options are empty by default (we'll provide the required export options elsewhere).
Finally, add two more readers:
```ruby
module LokaliseRails
  class << self
    # ...

    def import_safe_mode
      @import_safe_mode.nil? ? false : @import_safe_mode
    end

    def skip_file_export
      @skip_file_export || ->(_) { false }
    end
  end
end
```
The `@import_safe_mode` option may be set to `false`, therefore instead of simply saying `@import_safe_mode ? ...` we must check whether it is `nil?` or not. `skip_file_export` by default returns a lambda which yields `false`, meaning that there are no additional exclusion criteria and all translation files with the proper extensions have to be exported.
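The `nil?` check matters because `||` cannot tell "unset" apart from an explicit `false`. A tiny illustration (imagine the default were `true`):

```ruby
explicitly_disabled = false

# With ||, the user's explicit `false` is silently replaced by the fallback:
puts(explicitly_disabled || true)                           # => true (wrong!)

# Checking nil? respects the explicit `false` and only falls back when unset:
puts(explicitly_disabled.nil? ? true : explicitly_disabled) # => false (correct)
```

For boolean options, always branch on `nil?` rather than on the value's truthiness.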
Config method
While the options can now be provided by saying `LokaliseRails.api_token = '123'`, that's not very convenient. Instead I would like to use the following construct:
```ruby
LokaliseRails.config do |c|
  c.api_token = '123'
  c.project_id = '345.abc'
end
```
Actually, this is very straightforward to achieve. Simply add the following class method `config` to the `lib/lokalise_rails.rb` file:
```ruby
module LokaliseRails
  class << self
    attr_accessor :api_token, :project_id

    attr_writer :import_opts, :import_safe_mode, :export_opts,
                :locales_path, :file_ext_regexp, :skip_file_export

    def config # <-------------
      yield self
    end

    # ... your readers
  end
end
```
This method will simply yield `self` (the actual `LokaliseRails` module) to the block, and the user can adjust all the options as needed!
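Here's the whole pattern end-to-end in a standalone sketch (the `Settings` module is just a stand-in for `LokaliseRails`):

```ruby
module Settings
  class << self
    attr_accessor :api_token, :project_id

    # Yields the module itself, so callers can set options in a block
    def config
      yield self
    end
  end
end

Settings.config do |c|
  c.api_token = '123'
  c.project_id = '345.abc'
end

puts Settings.api_token  # => 123
puts Settings.project_id # => 345.abc
```

The block parameter `c` and the module `Settings` are the same object, which is why assignments inside the block stick.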
Trying it out
We don’t have a testing environment set up yet, but still it’s a good idea to make sure everything is working fine. Create a new
demo.rb file in the project root:
```ruby
require_relative 'lib/lokalise_rails'

LokaliseRails.config do |c|
  c.api_token = '123'
  c.project_id = '345.abc'
  c.export_opts = {
    convert_placeholders: true
  }
  c.skip_file_export = ->(filename) { filename.include?('skip') }
end
```
That’s exactly how I would like our users to provide the config options for the gem. The same approach is also utilized by such popular solutions as Devise.
Now let’s also try to read these options:
```ruby
# ... setting the options ...

puts LokaliseRails.api_token
puts LokaliseRails.project_id
puts LokaliseRails.export_opts
puts LokaliseRails.import_opts

puts '=' * 10

file_to_skip = 'skip_me.yml'
other_file = 'en.yml'

[file_to_skip, other_file].each do |f|
  puts "#{f} skip? #{LokaliseRails.skip_file_export.call(f)}"
end
```
Run the `demo.rb` file:
```shell
ruby demo.rb
```
And here’s the output:
```
123
345.abc
{:convert_placeholders=>true}
{:format=>"yaml", :placeholder_format=>:icu, :yaml_include_root=>true, :original_filenames=>true, :directory_prefix=>"", :indentation=>"2sp"}
==========
skip_me.yml skip? true
en.yml skip? false
```
As you can see, the options can be accessed just fine, which means we’ve completed the task successfully!
Conclusion
This was only the first part of the “Create a Ruby gem” series where we have prepared the groundwork for our project and created accessors for the gem options. In the upcoming article, we will see how to set up a solid testing suite and how to create an installation task. So, see you really soon!
Proceed to the second part.
If you are interested in reading this article in Spanish, check out my blog:
The Developer's Dungeon
Hey guys, how have you been? Me? I have been working a lot on a new project I am building with the help of a friend. I will not discuss the details here, but you will probably hear me talk about it very soon. Today I want to talk about one of the technology selections I made for building this project.
I wanted to build an API using Node.js and TypeScript. I had heard a lot of great things about this backend framework called Nest.js, but I hadn't tried it myself. Now, after a week of coding the API — having several endpoints, authentication, a database connection, and so on — I will give you my honest review. But let's start from the beginning.
What is Nest.js?
From the documentation itself we get this answer:
A progressive Node.js framework for building efficient, reliable, and scalable server-side applications.
That doesn't say much, right? Well, in my own words, Nest.js is a Node.js framework built on top of Express.js and TypeScript that comes with a strong opinion on how APIs should be built. Since it is very opinionated, it provides a structure, a CLI, and an almost infinite amount of tools that let you create professional APIs very, very fast.
I guess it would be like Django for Python or asp.net Core for C#.
What is it like?
Well, as with most API frameworks, Nest.js defines endpoints through an entity called `Controller`:
```typescript
import { Controller, Get, Param } from '@nestjs/common';

@Controller('cats')
export class CatsController {
  @Get()
  findAll(): string {
    return 'This action returns all cats';
  }

  @Get(':id')
  findOne(@Param('id') id: string): string {
    return 'This action returns one cat';
  }
}
```
For me, coming from C#, to see this in the JavaScript is just a pleasure. But I guess it can be daunting for you noders out there, so let me explain all the goodies that are happening in the previous example.
This `Controller` will create 2 endpoints at the routes `{url}/cats` and `{url}/cats/{id}`; the id in the URL of the second endpoint will be automatically mapped to the `id` parameter in the method.
These types of tags (`@Get()`) are called decorators, and there are a bunch of them. You can use them for defining the HTTP method, for getting properties, for defining authentication — basically whatever you feel like.
But where do you write your business logic, you might ask? Well, Nest.js has got you covered; for that you will use an entity called `Service`:
```typescript
import { Injectable } from '@nestjs/common';
import { Cat } from './interfaces/cat.interface';

@Injectable()
export class CatsService {
  private readonly cats: Cat[] = [];

  create(cat: Cat) {
    this.cats.push(cat);
  }

  findAll(): Cat[] {
    return this.cats;
  }
}
```
Nothing too weird here, except: what is that `@Injectable` decorator doing? Nest.js comes with Dependency Injection by default; this decorator marks a class as a provider that can be injected into other components through their constructors.
This seems like it's gonna generate a lot of code; is there an easy way to manage dependencies? Yes, there is. You can pack functionality together by using Modules. They are like Node modules, but in Nest.js a Module can hold controllers, services, and more, all representing one feature; you can then inject that entire module into others to be used there.
import { Module } from '@nestjs/common';
import { CatsController } from './cats.controller';
import { CatsService } from './cats.service';

@Module({
  controllers: [CatsController],
  providers: [CatsService],
})
export class CatsModule {}
I don't see any mention of how to contact a database, is there something for that? Didn't I tell you that Nest.js is pretty opinionated? As such, it comes with a way of working with databases. Enter TypeORM.
Instead of writing SQL queries manually, we use an Object Relational Mapper to work with the database: we define database entities that will later be used to create the tables on application startup, and we use automatic Repositories created from our database model.
import { Entity, Column, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
export class Cat {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ length: 500 })
  name: string;

  @Column('text')
  color: string;
}
Seems super complicated, who is it for?
I would be lying if I said that everyone who starts messing with Nest.js is going to be productive immediately.
- Nest.js follows a pattern that is very Object-Oriented, which is not something we see very often in the JavaScript world.
- If you only know dynamically typed languages, the switch is gonna be hard because of TypeScript.
On the other hand, if you come from a language like C#, then TypeScript is gonna feel right at home (they were actually designed by the same guy). On top of that, you probably used a framework like asp.net Core, so you know exactly what a Controller is; you probably created a layered architecture and used the word Service to define your business logic even before seeing a single line of Nest.js code.
But, I have never done any backend, can I take Nest.js as my first project? It depends.
Nest.js is gonna be easier for you if you come from Angular instead of React.
The Module, Dependency Injection, and Decorator architectural patterns that Nest.js uses are heavily inspired by Angular; they are like cousins. And if you come from Angular you will already know TypeScript, so picking up Nest.js will be a no-brainer.
Conclusion
You probably know what I'm gonna say: I really like Nest.js. Well yeah, it seems like a great framework to create reliable Node.js APIs. It provides tons of functionality out of the box, and if you wanna do something special, the documentation is just outstanding. If you come from one of the backgrounds I mentioned previously, or you just want to learn something new, I would definitely recommend giving Nest.js a try 🤞.
As always, if you liked this post go ahead and share it, have you tried Nest.js? Do you want to know something specific? let me know below in the comments 😄
Discussion (3)
I love it as well. Decent choice for Angular-focused front-end developers that would like to dive into back-end development deeper (for side projects or even something more sophisticated). Good SoC, easy swagger generation, possibility to easily share data models with Angular front end. All in all, highly recommended 👍
I agree
I use KrakenJS at my job and I really like it. It gives you a lot of freedom and we can use a functional approach with it. But KrakenJS has a problem: it's really outdated, and when I run npm audit it finds a lot of problems; NestJS, on the other hand, finds 0 problems.
I agree with fyodor, if you use Angular NestJS is going to be awesome | https://practicaldev-herokuapp-com.global.ssl.fastly.net/patferraggi/one-week-with-nest-js-is-it-good-5hgo | CC-MAIN-2021-21 | refinedweb | 1,146 | 64.61 |
Jump Statements in C – break, continue, goto, return
Jump Statement makes the control jump to another section of the program unconditionally when encountered. It is usually used to terminate the loop or switch-case instantly. It is also used to escape the execution of a section of the program.
In our previous tutorial on IF-ELSE statements, we saw that there are 4 jump statements offered by the C Programming Language:
- Break
- Continue
- Goto
- Return
In this tutorial, we will discuss Jump Statements in detail.
Break
A break statement is used to terminate the execution of the rest of the block where it is present and takes the control out of the block to the next statement. It is mostly used in loops and switch-case to bypass the rest of the statement and take the control to the end of the loop. The use of break in switch-case has been explained in the previous tutorial Switch – Control Statement.
Another point to be taken into consideration is that break statement when used in nested loops only terminates the inner loop where it is used and not any of the outer loops. Let’s have a look at this simple program to better understand how break works:
#include <stdio.h>

int main()
{
    int i;
    for (i = 1; i <= 15; i++)
    {
        printf("%d\n", i);
        if (i == 10)
            break;
    }
    return 0;
}
Output:- 1 2 3 4 5 6 7 8 9 10
In this program, we see that as soon as the condition if(i==10) becomes true the control flows out of the loop and the program ends.
Continue
The continue statement, like the other jump statements, interrupts or changes the flow of control during the execution of a program. Continue is mostly used in loops. Rather than terminating the loop, it stops the execution of the statements underneath and takes the control to the next iteration. Similar to a break statement, in case of a nested loop, continue passes the control to the next iteration of the inner loop where it is present and not to any of the outer loops. Let's have a look at the following example:
#include <stdio.h>

int main()
{
    int i, j;
    for (i = 1; i < 3; i++)
    {
        for (j = 1; j < 5; j++)
        {
            if (j == 2)
                continue;
            printf("%d\n", j);
        }
    }
    return 0;
}
Output:- 1 3 4 1 3 4
In this program, we see that the printf() instruction for the condition j=2 is skipped each time during the execution because of continue. We also see that only the condition j=2 gets affected by the continue. The outer loop runs without any disruption in its iteration.
Goto
This jump statement is used to transfer the flow of control to any part of the program desired. The programmer needs to specify a label or identifier with the goto statement in the following manner:
goto label;
This label indicates the location in the program where the control jumps to. Have a look at this simple program to understand how goto works:
#include <stdio.h>

int main()
{
    int i, j;
    for (i = 1; i < 5; i++)
    {
        if (i == 2)
            goto there;
        printf("%d\n", i);
    }
    there:
        printf("Two");
    return 0;
}
Output:- 1 Two
In this program, we see that when i becomes equal to 2, the control reaches the goto there; statement, which sends it out of the loop to the label (there:) and prints Two.
Return
This jump statement is usually used at the end of a function to end or terminate it, with or without a value. It takes the control from the called function back to the calling function (the main function itself can also have a return).
An important point to be taken into consideration is that return can only return a value in functions that are declared with a return type such as int, float, double, char, etc. Functions declared with void type do not return any value. Also, the function returns a value that belongs to the same data type as it is declared. Here is a simple example to show you how the return statement works.
#include <stdio.h>

char func(int ascii)
{
    return ((char)ascii);
}

int main()
{
    int ascii;
    char ch;
    printf("Enter any ascii value in decimal: \n");
    scanf("%d", &ascii);
    ch = func(ascii);
    printf("The character is : %c", ch);
    return 0;
}
Output:- Enter any ascii value in decimal: 110 The character is : n
In this program we have two functions with a return type, but only one of them returns a meaningful value [func()], while the other just uses return to terminate [main()]. The function func() returns the character value of the given number (here 110). We also see that the return type of func() is char because it returns a character value.
The return in the main() function returns zero because main has been declared with the return type int, so a return value is required.
An investment in knowledge always pays the best interest. Hope you like the tutorial. Do come back for more, because learning paves the way for a better understanding.
Do not forget to share and Subscribe.
Happy coding!! 🙂 | https://www.codingeek.com/tutorials/c-programming/jump-statements-in-c-break-continue-goto-return/ | CC-MAIN-2018-26 | refinedweb | 866 | 57.4 |
Catalina declares its own protected member variable "server" and a corresponding "setServer" method, while Catalina's super class Embedded has a private member "server" and a "getServer" method, so that Catalina sets its own "server" member, but when asked, will return super.server, which is consistently null.
This means that Catalina cannot be used for Tomcat embedding directly; at the very least it requires an extension like so:
public class MyCatalina extends Catalina {
public Server getServer() {
return this.server;
}
}
It might have other ramifications though and was most likely not intended.
See also:
updated platform and OS to "All".
Fixed in trunk and proposed for 6.0.x. Many thanks.
This has been fixed in 6.0.x and will be included in 6.0.25 onwards. | https://bz.apache.org/bugzilla/show_bug.cgi?id=48678 | CC-MAIN-2021-17 | refinedweb | 124 | 56.86 |
C Programming/Structure and style
From Wikibooks, the open-content textbooks collection
C Structure and Style
This is a basic introduction to good code style in the C Programming Language. It is designed to provide information on how to effectively use indentation, line breaks, and comments. It may be tempting to dismiss structure and style as unimportant, but that is almost never true, because well-written code that follows a well-designed structure is usually much easier for programmers to read and revise.
In the following sections, we will attempt to explain good programming practices that will in turn make your programs clearer. Used well, white space creates a visual gauge of how your code flows, which can be very important when returning to your code to maintain it.
Line Breaks
Line breaks should be used to offset the main components of your code. Use them:
- After precompiler declarations.
- After new variables are declared.
#include <stdio.h>
int main(void)
{
int i=0;
printf("Hello, World!");
for (i=0; i<1; i++)
{
printf("\n");
break;
}
return 0;
}
Based on the rules we established earlier, there should now be two line breaks added.
- Between lines 1 and 2, because line 1 has a preprocessor directive
- Between lines 4 and 5, because line 4 contains a variable declaration
This will make the code much more readable than it was before:
The following lines of code have line breaks between functions, but not any indentation.
#include <stdio.h>

int main(void)
{
int i=0;

printf("Hello, World!");
for (i=0; i<1; i++)
{
printf("\n");
break;
}
return 0;
}
But this still isn't as readable as it can be.
Indentation
So, based on our code from the previous section, there are two blocks requiring indentation:
- Lines 5 to 13
- Lines 10 and 11
#include <stdio.h>

int main(void)
{
    int i=0;

    printf("Hello, World!");
    for (i=0; i<1; i++)
    {
        printf("\n");
        break;
    }
    return 0;
}
It is now fairly obvious as to which parts of the program fit inside which blocks. You can tell which parts of the program will loop, and which ones will not. Although it might not be immediately noticeable, once many nested loops and paths get added to the structure of the program, the use of indentation can be very important. This indentation makes the structure of your program clear.
Comments
Comments in code can be useful for a variety of purposes. They provide the easiest way to set off specific parts of code (and their purpose), as well as providing a visual "split" between various parts of your code. Having good comments throughout your code will make it much easier to remember what specific parts of your code do.
Comments in modern flavours of C (and many other languages) can come in two forms:
//Single Line Comments
and
/*Multi-Line Comments*/
Note that Single line comments are a fairly recent addition to C, so some compilers may not support them. A recent version of GCC will have no problems supporting them.
This section is going to focus on the various uses of each form of commentary.
Single-line Comments

Based on our example, there are two lines that could use single-line comments:

- Line 40, to explain what 'int i' is going to do
- Line 80, to explain why there is a 'break' keyword.
This will make our program look something like this:
#include <stdio.h>

int main(void)
{
    int i=0; // loop variable.

    printf("Hello, World!");
    for (i=0; i<1; i++)
    {
        printf("\n");
        break; //Exits 'for' loop.
    }
    return 0;
}
Multi-line Comments

Multi-line comments are most useful for describing larger sections of code, so that when you return to the code at later times you can still understand what the code does, and how it works. It also prevents confusion. [2] [3]
I need help transforming my data so I can read through transaction data.
Business Case
I'm trying to group together some related transactions to create some groups or classes of events. This data set represents workers going out on various leaves of absence events. I want to create one class of leaves based on any transaction falling within 365 days of the leave event class. For charting trends, I want to number the classes so I get a sequence/pattern.
My code allows me to see when the very first event occurred, and it can identify when a new class starts, but it doesn't bucket each transaction into a class.
Requirements:
import pandas as pd
data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
        'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014"]}
df = pd.DataFrame(data)
df['Effective Date'] = df['Effective Date'].astype('datetime64[ns]')
df['EmplidShift'] = df['Employee ID'].shift(-1)
df['Effdt-Shift'] = df['Effective Date'].shift(-1)
df['Prior Row in Same Emplid Class'] = "No"
df['Effdt Diff'] = df['Effdt-Shift'] - df['Effective Date']
df['Effdt Diff'] = (pd.to_timedelta(df['Effdt Diff'], unit='d') + pd.to_timedelta(1,unit='s')).astype('timedelta64[D]')
df['Cumul. Count'] = df.groupby('Employee ID').cumcount()
df['Groupby'] = df.groupby('Employee ID')['Cumul. Count'].transform('max')
df['First Row Appears?'] = ""
df['First Row Appears?'][df['Cumul. Count'] == df['Groupby']] = "First Row"
df['Prior Row in Same Emplid Class'][ df['Employee ID'] == df['EmplidShift']] = "Yes"
df['Prior Row in Same Emplid Class'][ df['Employee ID'] == df['EmplidShift']] = "Yes"
df['Effdt > 1 Yr?'] = ""
df['Effdt > 1 Yr?'][ ((df['Prior Row in Same Emplid Class'] == "Yes" ) & (df['Effdt Diff'] < -365)) ] = "Yes"
df['Unique Leave Event'] = ""
df['Unique Leave Event'][ (df['Effdt > 1 Yr?'] == "Yes") | (df['First Row Appears?'] == "First Row") ] = "Unique Leave Event"
df
This is a bit clunky but it yields the right output at least for your small example:
import pandas as pd

data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
        'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"]}
df = pd.DataFrame(data)
df["Effective Date"] = pd.to_datetime(df["Effective Date"])
df = df.sort_values(["Employee ID","Effective Date"]).reset_index(drop=True)

for i,_ in df.iterrows():
    df.ix[0,"Result"] = "Unique Leave Event 1"
    if i < len(df)-1:
        if df.ix[i+1,"Employee ID"] == df.ix[i,"Employee ID"]:
            if df.ix[i+1,"Effective Date"] - df.ix[i,"Effective Date"] > pd.Timedelta('365 days'):
                df.ix[i+1,"Result"] = "Unique Leave Event " + str(int(df.ix[i,"Result"].split()[-1])+1)
            else:
                df.ix[i+1,"Result"] = df.ix[i,"Result"]
        else:
            df.ix[i+1,"Result"] = "Unique Leave Event 1"
Note that this code assumes that the first row always contains the string
Unique Leave Event 1.
EDIT: Some explanation.
First I convert the dates to datetime format and then reorder the dataframe such that the dates for every Employee ID are ascending.
Then I iterate over the rows of the frame using the built-in iterator iterrows. The _ in for i,_ is merely a placeholder for the second variable I do not use, because the iterator gives back both row numbers and row names.

In the iterator I'm doing row-wise comparisons, so by default I fill in the first row by hand and then assign to the i+1-th row. I do it like this because I know the value of the first row but not the value of the last row. Then I compare the i+1-th row with the i-th row within an if-safeguard, because i+1 would give an index error on the last iteration.

In the loop I first check if the Employee ID has changed between the two rows. If it has not, then I compare the dates of the two rows and see if they are apart more than 365 days. If this is the case, I read the string "Unique Leave Event X" from the i-th row, increase the number by one and write it in the i+1-th row. If the dates are closer I just copy the string from the previous row.

If the Employee ID does change, on the other hand, I just write "Unique Leave Event 1" to start over.

Note 1: iterrows() has no options to set, so I can't iterate only over a subset.

Note 2: Always iterate using one of the built-in iterators, and only iterate if you can't solve the problem otherwise.

Note 3: When assigning values in an iteration, always use ix, loc, or iloc.
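The same 365-day bucketing can also be sketched without pandas, using only the standard library. The helper name assign_leave_events is made up for illustration, but the rule is the one used above: sort each employee's transactions by date and start a new event whenever the gap to the previous transaction exceeds 365 days.

```python
from datetime import date, timedelta

def assign_leave_events(rows):
    """Hypothetical helper: given (employee_id, date) pairs, label each
    transaction with a per-employee leave-event number.  A new event
    starts when a transaction is more than 365 days after the previous
    transaction for the same employee."""
    out = []
    prev = {}  # employee -> (last_date, last_event_number)
    for emp, d in sorted(rows):
        if emp not in prev:
            event = 1
        else:
            last_d, last_event = prev[emp]
            event = last_event + 1 if (d - last_d) > timedelta(days=365) else last_event
        prev[emp] = (d, event)
        out.append((emp, d, event))
    return out

rows = [
    ("100", date(2016, 1, 1)), ("100", date(2015, 6, 5)),
    ("100", date(2014, 7, 1)), ("100", date(2013, 1, 1)),
]
for emp, d, event in assign_leave_events(rows):
    print(emp, d, "Unique Leave Event", event)
```

For employee 100 this yields event 1 for 2013-01-01 and event 2 for the remaining three dates, since 2014-07-01 is more than 365 days after 2013-01-01 while the later transactions each fall within 365 days of their predecessor.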
4-9 MINIMIZING FLOATING-POINT ERRORS ************************************* A few remarks ------------- o Some practical methods for checking the severity of floating-point errors can be found in the chapter: 'practical issues'. o The chapter on 'FORTRAN pitfalls' discusses various programming practises that may amplify floating-point (and other) errors, and it is very important to avoid them. o Note that an interval/stochastic arithmetic package is not just a diagnostic tool for FP errors, a result without an error estimation is not very useful, as errors can never be eliminated completely in experimental data and computation. Carefully written programs -------------------------- This term was probably coined by Sterbenz (see bibliography below), and means programs that are numerically correct. this is not easy to achieve as the following example will show. Sterbenz discusses the implementation of a Fortran FUNCTION that returns the average of two REAL numbers, the specifications for the routine are: o The sign must be always correct. o The result should be as close as possible to (x+y)/2 and stay within a predefined bound. o min(x,y) <= average(x,y) <= max(x,y) o average(x,y) = average(y,x) o average(x,y) = 0 if and only if x = -y unless an underflow occurred. o average(-x,-y) = -average(x,y) o An overflow should never occur. o An underflow should never occur, unless the mathematical average is strictly less than the smallest representable real number. Even a simple task like this, requires considerable knowledge to program in a good way, there are 4 (at least) possible average formulas: 1) (x + y) / 2 2) x/2 + y/2 3) x + ((y - x) / 2) 4) y + ((x - y) / 2) Sterbenz have a very interesting discussion on choosing the most appropriate formulas, he also consider techniques like scaling up the input variables if they are small. 
Grossly oversimplifying, we have: Formula #1 may raise an overflow if x,y have the same sign #2 may degrade accuracy, but is safe from overflows #3,4 may raise an overflow if x,y have opposite signs We will use formulas #1,3,4 according to the signs of the input numbers: real function average (x, y) real x, y, zero, two, av1, av2, av3, av4 logical samesign parameter (zero = 0.0e+00, two = 2.0e+00) av1(x,y) = (x + y) / two av2(x,y) = (x / two) + (y / two) av3(x,y) = x + ((y - x) / two) av4(x,y) = y + ((x - y) / two) if (x .ge. zero) then if (y .ge. zero) then samesign = .true. else samesign = .false. endif else if (y .ge. zero) then samesign = .false. else samesign = .true. endif endif if (samesign) then if (y .ge. x) then average = av3(x,y) else average = av4(x,y) endif else average = av1(x,y) endif return end Programming using exception handling ------------------------------------ Computing the average of two numbers may serve as an example for the system-dependent technique of writing faster numerical code using exception handling. Most of the time the formula (x + y)/2 is quite adequate, the FUNCTION above is needed essentially to avoid overflow, so the following scheme may be used: call reset_overflow_flag result = (x + y) / 2.0 call check_overflow_flag(status) if (status .eq. .true.) result = average(x,y) In this way the "expansive" call to the average routine may be eliminated in most cases. Of course, two system-dependent calls were added for reseting and checking the overflow flag, but this may be still worth it if the ratio between the two algorithms is large enough (which is not the case here). Using REAL*16 (QUAD PRECISION) ------------------------------ This is the most simple solution on machines that supports this data type. REAL*16 takes more CPU time than REAL*8/REAL*4, but introduces very small roundoff errors, and has a huge range. Performance cost of different size floats is VERY machine dependent (see the performance chapter). 
A crude example program: program reals real*4 x4, y4, z4 real*8 x8, y8, z8 real*16 x16, y16, z16 x4 = 1.0e+00 y4 = 0.9999999e+00 z4 = x4 - y4 write(*,*) sqrt(z4) x8 = 1.0d+00 y8 = 0.9999999d+00 z8 = x8 - y8 write(*,*) sqrt(z8) x16 = 1.0q+00 y16 = 0.9999999q+00 z16 = x16 - y16 write(*,*) sqrt(z16) end Normalization of equations -------------------------- Floating-point arithmetic is best when dealing with numbers with magnitudes of the order of 1.0, the 'representation density' is not maximal but we are in the 'middle' of the range. Usually you can decrease the range of numbers appearing in the computation, by transforming the system of units, so that you get dimensionless equations. The diffusion equation will serve as an example, I apologize for the horrible notation: Ut = K * Uxx Where the solution is U(X,T), the lowercase letters denote the partial derivatives, and K is a constant. Let: L be a typical length in the problem U0 a typical value of U Substitute: X' = X / L U' = U / U0 Then: Ux = Ux' / L Uxx = Ux'x' / (L*L) Substitute in the original equation: (U' * U0)t = (K / (L*L)) * (U' * U0)x'x' ((L * L) / K) U't = U'x'x' Substitute: T' = (K * T) / (L * L) And you get: U't' = U'x'x' With: X' = X / L U' = U / U0 T' = (K * T) / (L * L) Multi-precision arithmetic -------------------------- That is a really bright idea, you can simulate floating-point numbers with very large sizes, using character strings (or other data types), and create routines for doing arithmetic on these giant numbers. Of course such software simulated arithmetic will be slow. By the way, the function overloading feature of Fortran 90, makes using multi-precision arithmetic packages with existing programs easy. Two free packages are "mpfun" and "bmp" (Brent's multiple precision), which are available from Netlib. Using special tricks -------------------- A good example are the following tricks for summing a series. The first is sorting the numbers and adding them in ascending order. 
An example program: program rndof integer i real sum sum = 0.0 do i = 1, 10000000, 1 sum = sum + 1.0 / real(i) end do write (*,*) 'Decreasing order: ', sum sum = 0.0 do i = 10000000, 1, -1 sum = sum + 1.0 / real(i) end do write (*,*) 'Increasing order: ', sum end There is no need here for sorting, as the series is monotonic. Executing 2 * 10**7 iterations will take some CPU seconds, but the result is very illuminating. Another way (though not as good as doubling the precision) is using the Kahan Summation Formula. Suppose the series is stored in an array X(1:N) SUM = X(1) C = 0.0 DO J = 2, N Y = X(J) - C T = SUM + Y C = (T - SUM) - Y SUM = T ENDDO Yet another method is using Knuth's formula. The recommended method is sorting and adding. Another example is using the standard formulae for solving the quadratic equation (real numbers are written without mantissa to enhance readability): a*(x**2) + b*x + c = 0 (a .ne. 0) When b**2 is much larger than abs(4*a*c), the discriminat is nearly equal to abs(b), and we may get "catastrophic cancellation". Multiplying and dividing by the same number we get alternative formulae: -b + (b**2 - 4*a*c)**0.5 -2 * c x1 = ------------------------ = ----------------------- 2*a b + (b**2 - 4*a*c)**0.5 -b - (b**2 - 4*a*c)**0.5 2 * c x2 = ------------------------ = ------------------------ 2*a -b + (b**2 - 4*a*c)**0.5 If "b" is much larger than "a*c", use one of the standard and one of the alternative formulae. The first alternative formula is suitable when "b" is positive, the other when it's negative. Using integers instead of floats -------------------------------- See the chapter: "The world of integers". Manual safeguarding ------------------- You can check manually every dangerous arithmetic operation, special routines may be constructed to perform arithmetical operations in a safer way, or get an error message if this cannot be done. 
Hardware support ---------------- IEEE conforming FPUs can raise an exception whenever a roundoff was performed on an arithmetical result. You can write an exception handler that will report the exceptions, but as the result of most operations may have to be rounded, your program will be slowed down, and you will get huge log files. Rational arithmetic ------------------- Every number can be represented (possibly with an error) as a quotient of two integers, the dividend and divisor can be kept along the computation, without actually performing the division. See Knuth for technical details. It seems this method is not used. Bibliography ------------ An excellent article on floating-point arithmetic: David Goldberg What Every Computer Scientist Should Know about Floating-Point arithmetic ACM Computing Surveys Vol. 23 #1 March 1991, pp. 5-48 An old but still useful book: Sterbenz, Pat H. Floating-Point Computation Prentice-Hall, 1974 ISBN 0-13-322495-3 An old classic presented in a mathematical rigorous way (oouch!): Donald E. Knuth The Art of Computer Programming Volume II, sections 4.2.1 - 4.2.3 Addison-Wesley, 1969 The Silicon Graphics implementation of the IEEE standard, republished later in another issue of Pipeline: How a Floating Point Number is represented on an IRIS-4D Pipeline July/August 1990 The homepage of Prof. William Kahan, the well-known expert on floating-point arithmetic: A short nice summary on floating-point arithmetic: CS267: Supplementary Notes on Floating PointReturn to contents page | http://www.ibiblio.org/pub/languages/fortran/ch4-9.html | CC-MAIN-2017-47 | refinedweb | 1,567 | 52.49 |
Hello, I was trying to learn about abstract classes and I came across the code you see below. My question is: why do you also see the abstract Bike class constructor being executed when creating the Honda object? The Honda class already has its own constructor and yes, it does extend Bike, but if Honda already implements its own constructor, why does the Bike constructor show up in the console?
abstract class Bike{
    Bike(){
        System.out.println("Bike Constructor accessed.");
    }

    abstract void run();

    void changeGear(){
        System.out.println("gear changed");
    }
}//Bike

class Honda extends Bike{
    Honda(){
        System.out.println("Honda Constructor accessed.");
    }

    void run(){
        System.out.println("running safely..");
    }
}

public class TestingCode {
    public static void main(String[] args) {
        Honda obj = new Honda();
        obj.run();
        obj.changeGear();
    }//Main
}//Class
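The behaviour in question can be reproduced with a minimal, self-contained sketch (class names are suffixed here so they don't clash with the code above): the Java compiler inserts an implicit super() call at the start of every constructor, so the Bike constructor body always runs before Honda's own body.

```java
// Minimal sketch of implicit constructor chaining.
abstract class BikeSketch {
    static final StringBuilder log = new StringBuilder();
    BikeSketch() { log.append("Bike;"); }      // runs first, via implicit super()
}

class HondaSketch extends BikeSketch {
    HondaSketch() {                             // compiler inserts super() here
        log.append("Honda;");                   // runs second
    }
}

class SuperDemo {
    public static void main(String[] args) {
        new HondaSketch();
        System.out.println(BikeSketch.log);     // prints "Bike;Honda;"
    }
}
```

In other words, Honda's constructor does not replace Bike's; it runs in addition to it, after the superclass part of the object has been initialized.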
Odoo Help
Server error when we do mass mailing with 3000 contacts.
Can someone help with the issue below?
When we send the mail to 3000 customers, the system successfully sends to only 24 customers and the rest fail; then the system shows the Server Error mentioned below:
Odoo Server Error
Traceback (most recent call last):
File "/opt/odoo/openerp/http.py", line 518, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/opt/odoo/openerp/http.py", line 539, in dispatch
result = self._call_function(**self.params)
File "/opt/odoo/openerp/http.py", line 295, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/opt/odoo/openerp/service/model.py", line 113, in wrapper
return f(dbname, *args, **kwargs)
File "/opt/odoo/openerp/http.py", line 292, in checked_call
return self.endpoint(*a, **kw)
File "/opt/odoo/openerp/http.py", line 755, in __call__
return self.method(*args, **kw)
File "/opt/odoo/openerp/http.py", line 388, in response_wrap
response = f(*args, **kw)
File "/opt/odoo/addons/web/controllers/main.py", line 953, in call_button
action = self._call_kw(model, method, args, {})
File "/opt/odoo/addons/web/controllers/main.py", line 941, in _call_kw
return getattr(request.registry.get(model), method)(request.cr, request.uid, *args, **kwargs)
File "/opt/odoo/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/opt/odoo/addons/marketing_campaign/marketing_campaign.py", line 308, in synchroniz
self.process_segment(cr, uid, ids)
File "/opt/odoo/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/opt/odoo/addons/marketing_campaign/marketing_campaign.py", line 359, in process_segment
Workitems.process_all(cr, uid, list(campaigns), context=context)
File "/opt/odoo/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/opt/odoo/addons/marketing_campaign/marketing_campaign.py", line 765, in process_all
self.process(cr, uid, workitem_ids, context=context)
File "/opt/odoo/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/opt/odoo/addons/marketing_campaign/marketing_campaign.py", line 745, in process
self._process_one(cr, uid, wi, context=context)
File "/opt/odoo/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/opt/odoo/addons/marketing_campaign/marketing_campaign.py", line 740, in _process_one
workitem.write({'state': 'exception', 'error_msg': tb})
File "/opt/odoo/openerp/api.py", line 235, in wrapper
return new_api(self, *args, **kwargs)
File "/opt/odoo/openerp/models.py", line 3700, in write
self._write(old_vals)
File "/opt/odoo/openerp/api.py", line 235, in wrapper
return new_api(self, *args, **kwargs)
File "/opt/odoo/openerp/api.py", line 552, in new_api
result = method(self._model, cr, uid, self.ids, *args, **kwargs)
File "/opt/odoo/openerp/models.py", line 3811, in _write
cr.execute(query, params + (sub_ids,))
File "/opt/odoo/openerp/sql_db.py", line 158, in wrapper
return f(self, *args, **kwargs)
File "/opt/odoo/openerp/sql_db.py", line 234, in execute
res = self._obj.execute(query, params)
InternalError: current transaction is aborted, commands ignored until end of transaction block
Thanks in advance,
Hi Nimesh,

You can set the limit for the mail in addons/mail/mail_mail.py. In that file there is a method called process_email_queue(). You can set the limit by making the code changes below in that method:
def process_email_queue(self, cr, uid, ids=None, context=None):
    """Send immediately queued messages, committing after each
       message is sent - this is not transactional and should
       not be called during another transaction!

       :param list ids: optional list of emails ids to send. If passed
                        no search is performed, and these ids are used
                        instead.
       :param dict context: if a 'filters' key is present in context,
                            this value will be used as an additional
                            filter to further restrict the outgoing
                            messages to send (by default all 'outgoing'
                            messages are sent).
    """
    if context is None:
        context = {}
    if not ids:
        filters = ['&', ('state', '=', 'outgoing'), ('type', '=', 'email')]
        if 'filters' in context:
            filters.extend(context['filters'])
        ids = self.search(cr, uid, filters, context=context, limit=500)
    res = None
    try:
        # Force auto-commit - this is meant to be called by
        # the scheduler, and we can't allow rolling back the status
        # of previously sent emails!
        res = self.send(cr, uid, ids, auto_commit=True, context=context)
    except Exception:
        _logger.exception("Failed processing mail queue")
    return res
This may help you.
Hello Vasanth, thanks for your help. But when we run the marketing campaign for 3000 users, the system sends mail successfully for 115-130 customers and fails for the rest; then the system shows a new error: "XMLHttpRequestError". Thanks, Nimesh.
I think your server stopped responding at some point. Please check whether there is any timeout process going on on your server.
Enable log_level = debug_sql in openerp-server.conf and show what is in the log file when the error occurs.
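For reference, the relevant lines in openerp-server.conf might look like this (the logfile path is just an example; adjust it for your install):

```ini
[options]
log_level = debug_sql
logfile = /var/log/odoo/odoo-server.log
```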
This blog post is a thought on a following question: Can we make
cassava (= CSV) stuff a bit safer using fancy types? This is not a proposal to change
cassava, only some ideas and a demonstration of "real world" fancy types (though STLC-implementation also count as real in my world ;). Also a bit of Generics.
This post is not only a Literate Haskell file, but it can also simply be run directly with (given a very recent version of
cabal-install),
cabal run --index-state=2019-07-15T07:00:36Z posts/2019-07-15-fancy-types-for-cassava.lhs
as we specify dependencies. We'll use
fin and
vec packages. I assume that
data Nat = Z | S Nat and
data Vec :: Nat -> Type -> Type are familiar to you.1
{- cabal:
build-depends:
  , base   ^>=4.10 || ^>=4.11
  , fin    ^>=0.1
  , vec    ^>=0.1.1.1
  , tagged ^>=0.8.6
  , text   ^>=1.2.3.0
ghc-options: -Wall -pgmL markdown-unlit
build-tool-depends: markdown-unlit:markdown-unlit ^>=0.5.0
-}
Next a
{-# LANGUAGE Dependent #-} collection of extensions...
{-# LANGUAGE EmptyCase #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PartialTypeSignatures #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}
{-# OPTIONS_GHC -Wall -Wno-partial-type-signatures -Wno-unused-imports #-}
... and imports
module Main where

import Control.Monad      (forM)
import Data.Bifunctor     (first, bimap)
import Data.Kind          (Type)
import Data.Tagged        (Tagged (..))
import Data.Text          (Text)
import Data.Type.Equality ((:~:) (..))
import Data.Fin           (Fin (..))
import Data.Type.Nat      (Nat (..), SNatI)
import Data.Vec.Lazy      (Vec (..))
import GHC.Generics
import Text.Read          (readMaybe)

import qualified Data.Fin      as F
import qualified Data.Text     as T
import qualified Data.Text.IO  as T
import qualified Data.Type.Nat as N
import qualified Data.Vec.Lazy as V
Encoding is often easier than decoding, so let's start with it. Our running example will be programming languages:
data PL = PL
  { plName   :: Text
  , plYear   :: Int
  , plPerson :: Text
  }
  deriving (Eq, Ord, Show, Generic)
We can create a small database of programming languages we have heard of:
pls :: [PL]
pls =
  [ PL "Haskell" 1990 "Simon"
  , PL "Scala"   2004 "Martin"
  , PL "Idris"   2009 "Edwin"
  , PL "Perl"    1987 "Larry"
  ]
For encoding, we'll need to be able to encode individual fields / cells. This is similar to what we have in
cassava now. I'm for using type-classes for such cases, because I want to be able to leverage Generics; we'll see that soon.
Thanks to the fancier types, it would be possible to avoid type-classes, still getting something for free, but that's a topic for another post.
class ToField a where
  toField :: a -> Text

instance ToField Text where
  toField = id

instance ToField Int where
  toField = T.pack . show
As we can serialise individual fields, let us serialise records. We could have a method
toRecord :: r -> [Text] as in
cassava now, but it is potentially unsafe. Length of the list may vary depending on the record value. So we'd rather use fancy types!
-- field count in a record: its size.
type family Size (a :: Type) :: Nat

class ToRecord r where
  toRecord :: r -> Vec (Size r) Text
It's easy to imagine the
ToRecord PL instance. But we'd rather write a generic implementation for it.
First a generic
Size. Recall that
GHC.Generics represents records as a binary tree of
:*:. To avoid using
Plus, i.e. adding up
Nats and concatenating
Vecs, we'll use a
foldr-like implementation. There is a nested application of the
GSizeF type family in the
:*:-case. GHC wants
UndecidableInstances.
type GSize a = GSizeF (Rep a) 'Z -- start from zero

type family GSizeF (f :: Type -> Type) (acc :: Nat) :: Nat where
  GSizeF U1         acc = acc
  GSizeF (K1 i a)   acc = 'S acc
  GSizeF (M1 i c f) acc = GSizeF f acc
  GSizeF (f :*: g)  acc = GSizeF f (GSizeF g acc)
Using that type family, we can succinctly define:
type instance Size PL = GSize PL
Also we can check this, dependent-language style. If this test fails, it will be a compilation error. This definitely blurs the distinction between tests and types :)
check1 :: Size PL :~: N.Nat3
check1 = Refl
Using similar induction on the structure, we can write a generic implementation for
ToRecord. Different clauses are handled by different instances of a workhorse class
GToRecord.
genericToRecord :: forall r. (Generic r, GToRecord (Rep r)) => r -> Vec (GSize r) Text
genericToRecord = gtoRecord VNil . from

class GToRecord rep where
  gtoRecord :: Vec acc Text -> rep () -> Vec (GSizeF rep acc) Text

instance GToRecord U1 where
  gtoRecord xs _ = xs

instance ToField c => GToRecord (K1 i c) where
  gtoRecord xs (K1 c) = toField c ::: xs

instance GToRecord f => GToRecord (M1 i c f) where
  gtoRecord xs (M1 f) = gtoRecord xs f

instance (GToRecord f, GToRecord g) => GToRecord (f :*: g) where
  gtoRecord xs (f :*: g) = gtoRecord (gtoRecord xs g) f
The
ToRecord PL instance in the user code is a one-liner:
instance ToRecord PL where
  toRecord = genericToRecord
One more thing: column names. Usually CSV files start with a header line. This is where using
Size pays off again: headers have to be the same size as content rows. Here I use
Tagged to avoid
AllowAmbiguousTypes and
Proxy r extra argument.
class Header r where
  header :: Tagged r (Vec (Size r) Text)
We could write generic implementation for it, but as dealing with metadata in
GHC.Generics is not pretty, I'll implement
PL instance manually:
instance Header PL where
  header = Tagged $ "name" ::: "year" ::: "person" ::: VNil
The one piece left is an actual
encode function. I cut corners by not dealing with escaping for the sake of brevity.
You should notice that in the implementation of
encode we don't care that much about the fact we get
Vecs of the same length for each record.
encode implementation would work, even with
class ToRecord' r where toRecord' :: r -> [Text]. Fancy types are here to help users of a library write correct (by construction) instances.
encode :: forall r. (Header r, ToRecord r) => [r] -> Text
encode rs = T.unlines $
    map (T.intercalate "," . V.toList) $
    unTagged (header :: Tagged r _) : map toRecord rs
And it works:
*Main> T.putStr $ encode pls
name,year,person
Haskell,1990,Simon
Scala,2004,Martin
Idris,2009,Edwin
Perl,1987,Larry
Good. Next we'll write an inverse.
The other direction, decoding, is trickier. Everything could fail. Fields can contain garbage, there might not be enough fields (too many is not such a problem), but most importantly, the fields can come in the wrong order. Luckily we have fancy types helping us.
Like
ToField,
FromField is a copy of
cassava class:
type Error = String

class FromField a where
  fromField :: Text -> Either String a

instance FromField Text where
  fromField = Right

instance FromField Int where
  fromField t = maybe (Left $ "Invalid Int: " ++ show t) Right $ readMaybe $ T.unpack t
Also like
ToRecord,
FromRecord is a simple class as well. Note how library users need to deal only with a vector of the right size (versus a list of any length). We'll also assume that the vector is sorted to match the header columns (which could be encoded with fancier types!)
class FromRecord r where
  fromRecord :: Vec (Size r) Text -> Either Error r
The generic implementation "peels off" the provided vector:
genericFromRecord :: forall r. (Generic r, GFromRecord (Rep r)) => Vec (GSize r) Text -> Either String r
genericFromRecord ts =
    let tmp :: Either Error (Rep r (), Vec 'Z Text)
        tmp = gfromRecord ts
    in to . fst <$> tmp

class GFromRecord rep where
  gfromRecord :: Vec (GSizeF rep acc) Text -> Either Error (rep (), Vec acc Text)

instance GFromRecord U1 where
  gfromRecord xs = return (U1, xs)

instance FromField c => GFromRecord (K1 i c) where
  gfromRecord (x ::: xs) = do
    y <- fromField x
    return (K1 y, xs)

instance GFromRecord f => GFromRecord (M1 i c f) where
  gfromRecord = fmap (first M1) . gfromRecord

instance (GFromRecord f, GFromRecord g) => GFromRecord (f :*: g) where
  gfromRecord xs = do
    (f, xs')  <- gfromRecord xs
    (g, xs'') <- gfromRecord xs'
    return (f :*: g, xs'')

instance FromRecord PL where
  fromRecord = genericFromRecord
And a small sanity check:
*Main> fromRecord ("Python" ::: "1990" ::: "Guido" ::: VNil) :: Either String PL
Right (PL {plName = "Python", plYear = 1990, plPerson = "Guido"})
*Main> fromRecord ("Lambda Calculus" ::: "in the 1930s" ::: "Alonzo" ::: VNil) :: Either String PL
Left "Invalid Int: \"in the 1930s\""
We have now solved all the easy problems. We have set up the public API of the library.
To* and
From* classes use fancy types, we have taken some burden from library users. However the difficult task is still undone: implementing
decode.
The example we'll want to work will have extra fields, and the fields shuffled:
input :: Text
input = T.unlines
  [ "year,name,types,person,website"
  , "1987,Perl,no,Larry,"
  , "1990,Haskell,nice,Simon,"
  , "2004,Scala,weird,Martin,"
  , "2009,Idris,fancy,Edwin,"
  ]
which is
*Main> T.putStr input
year,name,types,person,website
1987,Perl,no,Larry,
1990,Haskell,nice,Simon,
2004,Scala,weird,Martin,
2009,Idris,fancy,Edwin,
There's still enough information; we should be able to successfully extract
PL.
The zeroth step is to split the input into lines, the lines into cells, and extract the header row. That's not the hard part:
prepare :: Text -> Either Error ([Text], [[Text]])
prepare i = case map (T.splitOn ",") (T.lines i) of
  []     -> Left "No header"
  (r:rs) -> Right (r, rs)
The hard part is to decode from
[Text] into
Vec (Size r) Text. And not only do we need to decode, but also sort the columns. Our plan is to (1) construct a trace by matching the wanted header against the given one, and then (2) use that trace to extract fields from each content row.
We'll require that content rows contain at least as many columns as the header row. It's a reasonable requirement, and it simplifies things a bit. A more relaxed requirement would be to require only as many columns as needed; e.g. in our example we could require only four fields, as we aren't interested in the fifth
website field.
What's the trace of a sort? Technically it's a permutation. However, in this case it's not a regular permutation, as we aren't interested in all fields. It's easier to think backwards, and consider which kind of trace would determine the execution in step 2. We'll be given a
Vec n Text for some
n, and we'll need to produce a
Vec m Text for some other
m (=
Size r). Let's try to write that as a data type:
data Extract :: Nat -> Nat -> Type where
  Step :: Fin ('S n)            -- take a nth value, x
       -> Extract n m           -- recursively extract rest, xs
       -> Extract ('S n) ('S m) -- cons x xs
  Done :: Extract n 'Z          -- or we are done.

deriving instance Show (Extract n m)
In retrospect, that type is a combination of the less-than-or-equal-to and is-a-permutation (inductively defined) predicates.2
We can (should!) immediately try this type in action. For what it's worth, the implementations of the following functions are quite restricted by their types. There are not many places where you can make a mistake. To be fair,
extract and
Extract were written simultaneously:
extract is structurally recursive in the
Extract argument, and
Extract has just enough data for
extract to make choices.
extract :: Extract n m -> Vec n a -> Vec m a
extract Done       _  = VNil
extract (Step n e) xs = case delete n xs of
  (x, xs') -> x ::: extract e xs'

-- this probably should be in the `vec` library
delete :: Fin ('S n) -> Vec ('S n) a -> (a, Vec n a)
delete FZ           (x ::: xs)       = (x, xs)
delete (FS FZ)      (x ::: y ::: xs) = (y, x ::: xs)
delete (FS n@FS {}) (x ::: xs)       = case delete n xs of
  (y, ys) -> (y, x ::: ys)
For example, given a row and a trace, we can extract the fields we want (writing a correct trace by hand is tricky).
*Main> row = "1987" ::: "Perl" ::: "no" ::: "Larry" ::: "" ::: VNil
*Main> trc = Step 1 $ Step F.fin0 $ Step F.fin1 Done :: Extract N.Nat5 N.Nat3
*Main> extract trc row
"Perl" ::: "1987" ::: "Larry" ::: VNil
That starts to feel like magic, doesn't it? To complete the whole spell, we need to complete part 1, i.e. construct
Extract traces. Luckily, types are there to guide us:
columns :: (Eq a, Show a)
        => Vec m a  -- ^ wanted header values
        -> Vec n a  -- ^ given header values
        -> Either Error (Extract n m)
columns VNil       _            = Right Done
columns (_ ::: _)  VNil         = Left "not enought header values"
columns (h ::: hs) xs@(_ ::: _) = do
  (n, xs') <- find' h xs     -- find first value
  rest     <- columns hs xs' -- recurse
  return $ Step n rest       -- record the trace
where we use a helper function
find', which finds a value in the
Vec ('S n) and returns not only an index, but also a leftover vector. We could write a test:
Right (n, ys) = find' x xs
ys = delete n xs
find' :: (Eq a, Show a) => a -> Vec ('S n) a -> Either Error (Fin ('S n), Vec n a)
find' x (y ::: ys)
  | x == y    = Right (FZ, ys)
  | otherwise = case ys of
      VNil    -> Left $ "Cannot find header value " ++ show x
      _ ::: _ -> do
        (n, zs) <- find' x ys
        return (FS n, y ::: zs)
Let's try
columns. It takes some time to learn to interpret
Extract values. Luckily the machine is there to do that.
*Main> columns ("name" ::: "year" ::: VNil) ("name" ::: "year" ::: VNil)
Right (Step 0 (Step 0 Done))
*Main> columns ("name" ::: "year" ::: VNil) ("year" ::: "name" ::: VNil)
Right (Step 1 (Step 0 Done))
*Main> columns ("name" ::: "year" ::: VNil) ("year" ::: "extra" ::: "name" ::: VNil)
Right (Step 2 (Step 0 Done))
*Main> columns ("name" ::: "year" ::: VNil) ("year" ::: "extra" ::: "foo" ::: VNil)
Left "Cannot find header value \"name\""
*Main> columns ("name" ::: "year" ::: VNil) ("name" ::: VNil)
Left "not enought header values"
We have three steps:
prepare, to split input data into header and content rows
columns, which checks whether all the fields we want are present in the provided header, and returns an
Extract value saying how to permute content rows.
extract, which uses an
Extract to extract (and order) the correct data columns.
We'll use two functions from
vec:
reifyList and
fromListPrefix.
-- Reify any list [a] to Vec n a.
reifyList :: [a] -> (forall n. SNat n => Vec n a -> r) -> r

-- Convert list [a] to Vec n a. Returns Nothing if input list is too short.
fromListPrefix :: SNatI n => [a] -> Maybe (Vec n a)
They both convert a list
[a] into
Vec n a, however they are different
reifyList works for any list. As we don't know the length of dynamic inputs,
reifyList takes a continuation which accepts a
Vec of any length. That continuation, however, would know and be able to use the vector length.
fromListPrefix tries to convert a list to a vector of known length, and thus may fail.
To put it differently, using
reifyList we learn the length of the header, and then we require that subsequent content rows are at least the same length. Lifting (or promoting) some information to the type level reduces the number of dynamic checks we'll need to do subsequently; e.g.
extract doesn't perform any checks.
decode :: forall r. (Header r, FromRecord r) => Text -> Either String [r]
decode contents = do
  (hs, xss) <- prepare contents
  V.reifyList hs $ \hs' -> do
    trc <- columns (unTagged (header :: Tagged r _)) hs'
    forM xss $ \xs -> do
      xs' <- maybe (Left "not enough columns") Right $ V.fromListPrefix xs
      fromRecord (extract trc xs')
All done! To convince you that it works, let's run
decode on an
input we defined at the beginning of this section.
main :: IO ()
main = case decode input :: Either String [PL] of
  Left err -> putStrLn $ "ERROR: " ++ err
  Right xs -> mapM_ print xs
*Main> main
PL {plName = "Perl", plYear = 1987, plPerson = "Larry"}
PL {plName = "Haskell", plYear = 1990, plPerson = "Simon"}
PL {plName = "Scala", plYear = 2004, plPerson = "Martin"}
PL {plName = "Idris", plYear = 2009, plPerson = "Edwin"}
This is a challenging exercise. Improve
decode to deal with incomplete data like:
input2 :: Text
input2 = T.unlines
  [ "year,name,types,person,website"
  , "1987,Perl,no,Larry"
  , "1990,Haskell,nice,Simon,"
  ]
Note, the first content row has only four fields so original
decode errors with
*Main> decode input2 :: Either String [PL]
Left "not enough columns"
The goal is to make
decode succeed:
*Main> mapM_ (mapM print) (decode input2 :: Either String [PL])
PL {plName = "Perl", plYear = 1987, plPerson = "Larry"}
PL {plName = "Haskell", plYear = 1990, plPerson = "Simon"}
There are at least two ways to solve this. A trickier one, for which there are two hints in footnotes: first3 and second4. And a much simpler way, which "cheats" a little.
There are still a lot of places where we can make mistakes. We use
Vec n a, so we have
n elements to pick. If we instead use heterogeneous lists, e.g.
NP from
sop-core, the types would become more precise. We could change our public interface to:
type family Fields r :: [Type]

class ToRecord' r where
  toRecord' :: r -> NP I (Fields r)

class Header' r where
  header' :: Tagged r (NP (K Text) (Fields r))
then writing correct versions of
delete,
extract, etc. will be even more type-directed. That is left as an exercise; I suspect that the code shape will be quite the same.
One valid question to ask is whether row types would simplify something here. Not really.
For example
vinyl's
Rec type is essentially the same as
NP. Even if there were anonymous records in Haskell, so
toRecord could be implemented directly using a built-in function, it would remove only a single problem of many. And it's not much of one, as
toRecord is generically derivable.
In this post I described a complete fancy types usage example, helping us to deal with the untyped real world. Fancy types make library API more precise: we encode (pre/post)conditions like "lists are of the equal length" in the types.
Also we have seen a domain-specific "inductive predicate": Extract. It's a library-internal, implementation-detail type. Even in "normal" Haskell, not all types (need to) end up in the library's public interface.
Extract. It's a library internal, implementation-detail type. Even in "normal" Haskell, not all types (need to) end up into the library's public interface.
The vector example is the hello world of dependent types, but here it prevents users from making silly errors, and also makes the implementation of the library more robust.
If they aren't, read through e.g. the Stitch paper and / or watch a video of a talk Richard Eisenberg presented at ZuriHac '19 (which I saw live; hopefully it will be posted somewhere soon), or an older version from the NYC Haskell Group meetup (which I didn't watch, only googled for). The definitions in
fin and
vec are as follows:
data Nat = Z | S Nat

data Fin :: Nat -> Type where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

data Vec :: Nat -> Type -> Type where
  VNil  :: Vec 'Z a
  (:::) :: a -> Vec n a -> Vec ('S n) a
compare
Extract with
LEProof and
Permutation
-- | An evidence of \(n \le m\). /zero+succ/ definition.
data LEProof :: Nat -> Nat -> Type where
  LEZero :: LEProof 'Z m
  LESucc :: LEProof n m -> LEProof ('S n) ('S m)

-- | Permutation. 'PCons' can be interpretted in two ways:
-- * uncons head part, insert in a given position in permutted tail
-- * delete from given position, cons to permutted tail.
data Permutation :: Nat -> Type where
  PNil  :: Permutation 'Z
  PCons :: Fin ('S n) -> Permutation n -> Permutation ('S n)
First hint: implement a function like:
minimise :: SNatI n => Extract n m -> (forall p. SNatI p => Extract p m -> r) -> r
-- conservative implementation, not minimising at all
minimise e k = k e
Second hint: my variant of
minimise uses
LEProof and a few auxiliary functions:
minimise :: Extract n m -> (forall p. N.SNatI p => LEProof p n -> Extract p m -> r) -> r
minimiseFin :: Fin ('S n) -> (forall p. N.SNatI p => LEProof p n -> Fin ('S p) -> r) -> r
maxLE :: LEProof n p -> LEProof m p -> Either (LEProof n m) (LEProof m n)
weakenFin :: LEProof n m -> Fin ('S n) -> Fin ('S m)
weakenExtract :: LEProof n m -> Extract n p -> Extract m p
Redux · An Introduction
Redux is used mostly for application state management. To summarize it, Redux maintains the state of an entire application in a single immutable state tree (object), which can’t be changed directly. When something changes, a new object is created (using actions and reducers). We’ll go over the core concepts in detail below.
How Is It Different From MVC And Flux?
To give some perspective, let’s take the classic model-view-controller (MVC) pattern, since most developers are familiar with it. In MVC architecture, there is a clear separation between data (model), presentation (view) and logic (controller). There is one issue with this, especially in large-scale applications: The flow of data is bidirectional. This means that one change (a user input or API response) can affect the state of an application in many places in the code — for example, two-way data binding. That can be hard to maintain and debug.
Flux is very similar to Redux. The main difference is that Flux has multiple stores that change the state of the application, and it broadcasts these changes as events. Components can subscribe to these events to sync with the current state. Redux doesn’t have a dispatcher, which in Flux is used to broadcast payloads to registered callbacks. Another difference in Flux is that many varieties are available, and that creates some confusion and inconsistency.
Benefits Of Redux
You may be asking, “Why would I need to use Redux?” Great question. There are a few benefits of using Redux in your next application:
- Predictability of outcome
There is always one source of truth, the store, with no confusion about how to sync the current state with actions and other parts of the application.
- Maintainability
Having a predictable outcome and strict structure makes the code easier to maintain.
- Organization
Redux is stricter about how code should be organized, which makes code more consistent and easier for a team to work with.
- Server rendering
This is very useful, especially for the initial render, making for a better user experience or search engine optimization. Just pass the store created on the server to the client side.
- Developer tools
Developers can track everything going on in the app in real time, from actions to state changes.
- Community and ecosystem
This is a huge plus whenever you’re learning or using any library or framework. Having a community behind Redux makes it even more appealing to use.
- Ease of testing
The first rule of writing testable code is to write small functions that do only one thing and that are independent. Redux’s code is mostly functions that are just that: small, pure and isolated.
Functional Programming
As mentioned, Redux was built on top of functional programming concepts. Understanding these concepts is very important to understanding how and why Redux works the way it does. Let’s review the fundamental concepts of functional programming:
- It is able to treat functions as first-class objects.
- It is able to pass functions as arguments.
- It is able to control flow using functions, recursions and arrays.
- It is able to use pure, recursive, higher-order, closure and anonymous functions.
- It is able to use helper functions, such as map, filter and reduce.
- It is able to chain functions together.
- The state doesn’t change (i.e. it’s immutable).
- The order of code execution is not important.
Functional programming allows us to write cleaner and more modular code. By writing smaller and simpler functions that are isolated in scope and logic, we can make code much easier to test, maintain and debug. Now these smaller functions become reusable code, and that allows you to write less code, and less code is a good thing. The functions can be copied and pasted anywhere without any modification. Functions that are isolated in scope and that perform only one task will depend less on other modules in an app, and this reduced coupling is another benefit of functional programming.
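As a quick, hypothetical illustration of the helper functions and chaining mentioned above (the data and names here are mine, not from the article), note that each step returns a new value without touching its input:

```javascript
// Each function in the chain is pure: it returns a new value
// and never mutates the array it was given.
var orders = [
  { item: 'book', price: 12, shipped: true },
  { item: 'pen', price: 3, shipped: false },
  { item: 'lamp', price: 30, shipped: true }
];

var shippedTotal = orders
  .filter(function (order) { return order.shipped; })        // keep shipped orders
  .map(function (order) { return order.price; })             // extract prices
  .reduce(function (sum, price) { return sum + price; }, 0); // add them up

console.log(shippedTotal);  // 42
console.log(orders.length); // 3 -- the original array is untouched
```

Because nothing was mutated, `orders` can safely be reused elsewhere in the program.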
You will see pure functions, anonymous functions, closures, higher-order functions and method chains, among other things, very often when working with functional JavaScript. Redux uses pure functions heavily, so it’s important to understand what they are.
Pure functions return a new value based on arguments passed to them. They don’t modify existing objects; instead, they return a new one. These functions don’t rely on the state they’re called from, and they return only one and the same result for any provided argument. For this reason, they are very predictable.
Because pure functions don’t modify any values, they don’t have any impact on the scope or any observable side effects, and that means a developer can focus only on the values that the pure function returns.
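To make the distinction concrete, here is a small hypothetical log-in example (function and property names are mine): the impure version mutates its argument, while the pure version returns a fresh object.

```javascript
// Impure: mutates the object passed in, so callers can be surprised.
function loginImpure(state) {
  state.isLoggedIn = true;
  return state;
}

// Pure: returns a new object and leaves the argument untouched.
function loginPure(state) {
  return Object.assign({}, state, { isLoggedIn: true });
}

var state = { isLoggedIn: false };
var next = loginPure(state);

console.log(state.isLoggedIn); // false -- the original is unchanged
console.log(next.isLoggedIn);  // true
```

The pure version is exactly the shape Redux expects from reducers.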
Where Can Redux Be Used?
Most developers associate Redux with React, but it can be used with any other view library. For instance, you can use Redux with AngularJS, Vue.js, Polymer, Ember, Backbone.js and Meteor. Redux plus React, though, is still the most common combination. Make sure to learn React in the right order: The best guide is Pete Hunt’s, which is very helpful for developers who are getting started with React and are overwhelmed with everything going on in the ecosystem. JavaScript fatigue is a legitimate concern among front-end developers, both new or experienced, so take the time to learn React or Redux the right way in the right order.
One of the reasons Redux is awesome is its ecosystem. So many articles, tutorials, middleware, tools and boilerplates are available. Personally, I use David Zukowski’s boilerplate because it has everything one needs to build a JavaScript application, with React, Redux and React Router. A word of caution: Try not to use boilerplates and starter kits when learning new frameworks such as React and Redux. It will make it even more confusing, because you won’t understand how everything works together. Learn it first and build a very simple app, ideally as a side project, and then use boilerplates for production apps to save time.
Building Parts Of Redux
Redux concepts might sound complicated or fancy, but they’re simple. Remember that the library is only 2 KB. Redux has three building parts: actions, store and reducers.
Let’s discuss what each does.
Actions
In a nutshell, actions are events. Actions send data from the application (user interactions, internal events such as API calls, and form submissions) to the store. The store gets information only from actions. Internal actions are simple JavaScript objects that have a
type property (usually constant), describing the type of action and payload of information being sent to the store.
{
  type: LOGIN_FORM_SUBMIT,
  payload: { username: 'alex', password: '123456' }
}
Actions are created with action creators. That sounds obvious, I know. They are just functions that return actions.
function authUser(form) {
  return {
    type: LOGIN_FORM_SUBMIT,
    payload: form
  };
}
Calling actions anywhere in the app, then, is very easy. Use the
dispatch method, like so:
dispatch(authUser(form));
Reducers
We've already discussed what a reducer is in functional JavaScript. Here is a very simple reducer that takes the current state and an action as arguments and then returns the next state:
function handleAuth(state, action) {
  return _.assign({}, state, {
    auth: action.payload
  });
}
For more complex apps, using the
combineReducers() utility provided by Redux is possible (indeed, recommended). It combines all of the reducers in the app into a single index reducer. Every reducer is responsible for its own part of the app’s state, and the state parameter is different for every reducer. The
combineReducers() utility makes the file structure much easier to maintain.
If only some values of the state change, Redux creates a new object; the values that didn't change refer to the old object, and only the new values are created. That's great for performance. To make it even more efficient, you can add Immutable.js.
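As a small illustration of this sharing (the state shape here is made up), `Object.assign` makes a shallow copy, so unchanged branches of the state keep their old references:

```javascript
var state = {
  auth: { username: 'alex' },
  profile: { theme: 'dark' }
};

// Only `auth` changes; `profile` is carried over by reference.
var next = Object.assign({}, state, { auth: { username: 'dana' } });

console.log(next.profile === state.profile); // true -- shared, not copied
console.log(next.auth === state.auth);       // false -- replaced
```

This is why comparing object references is enough to detect changes in a Redux state tree.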
const rootReducer = combineReducers({
  handleAuth: handleAuth,
  editProfile: editProfile,
  changePassword: changePassword
});
Store
The store is the single object that holds the application's state; a Redux app has exactly one store. Let's create one and dispatch an action to it:
import { createStore } from 'redux';

let store = createStore(rootReducer);
let authInfo = { username: 'alex', password: '123456' };
store.dispatch(authUser(authInfo));
Developer Tools, Time Travel And Hot Reloading
To make Redux easier to work with, especially when working with a large-scale application, I recommend using Redux DevTools. It’s incredibly helpful, showing the state’s changes over time, real-time changes, actions, and the current state. This saves you time and effort by avoiding
console.log calls for the current state and actions.
Redux has a slightly different implementation of time travel than Flux. In Redux, you can go back to a previous state and even take your state in a different direction from that point on. Redux DevTools supports the following “time travel” features in the Redux workflow (think of them as Git commands for your state):
- Reset: resets to the state your store was created with
- Revert: goes back to the last committed state
- Sweep: removes all disabled actions that you might have fired by mistake
- Commit: makes the current state the initial state
The time-travel feature is not efficient in production and is only intended for development and debugging. The same goes for DevTools.
Redux makes testing much easier because it uses functional JavaScript as a base, and small independent functions are easy to test. So, if you need to change something in your state tree, import only one reducer that is responsible for that state, and test it in isolation.
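As a sketch of such a test (no framework, just plain assertions; the reducer mirrors the earlier `handleAuth` example but uses `Object.assign` to avoid the lodash dependency):

```javascript
// The reducer under test: small, pure and isolated.
function handleAuth(state, action) {
  return Object.assign({}, state, { auth: action.payload });
}

// Plain assertions are enough for a first pass.
var initial = { auth: null };
var action = { type: 'LOGIN_FORM_SUBMIT', payload: { username: 'alex' } };
var next = handleAuth(initial, action);

console.log(next.auth.username); // 'alex' -- new state carries the payload
console.log(initial.auth);       // null   -- old state was not mutated
console.log(next !== initial);   // true   -- a new object was returned
```

Because the reducer has no dependencies on the store or the view, the same test runs anywhere plain JavaScript runs.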
Build An App
To conclude this introductory guide, let’s build a very simple application using Redux and React. To make it easier for everyone to follow, I will stick to plain old JavaScript, using ECMAScript 2015 and 2016 as little as possible. We’ll continue the log-in logic started earlier in this post. This example doesn’t use any live data, because the purpose of this app is to show how Redux manages the state of a very simple app. We’ll use CodePen.
1. React Component
We need some React components and data. Let’s make a simple component and render it on the page. The component will have an input field and a button (it’s a very simple log-in form). Below, we’ll add text that represents our state:
See the Pen Intro to Redux by Alex Bachuk (@abachuk) on CodePen.
2. Events and Actions
Let’s add Redux to the project and handle the
onClick event for the button. As soon as the user logs in, we will dispatch the action with the type
LOGIN and the value of the current user. Before we can do that, we have to create a store and pass a reducer function to it as an argument. For now, the reducer will just be an empty function:
See the Pen Intro to Redux - Step 2. Events and Actions by Alex Bachuk (@abachuk) on CodePen.
3. Reducers
Now that we have the action firing, the reducer will take that action and return a new state. Let’s handle the
LOGIN action returning a logged-in status and also add a
LOGOUT action, so that we can use it later. The
auth reducer accepts two parameters:
- the current state (which has the default value),
- the action.
See the Pen Intro to Redux - Step 3. Reducers by Alex Bachuk (@abachuk) on CodePen.
4. Displaying the Current State
Now, that we have the initial state (the default value in reducer) and the React component ready, let’s see how the state looks. A best practice is to push the state down to children components. Because we have only one component, let’s pass the app’s state as a property to
auth components. To make everything work together, we have to register the store listener with a
subscribe helper method, by wrapping
ReactDOM.render in a function and passing it to
store.subscribe():
See the Pen Intro to Redux - Step 4. Displaying current state by Alex Bachuk (@abachuk) on CodePen.
5. Log In and Out
Now that we have log-in and log-out action handlers, let’s add a log-out button and dispatch the
LOGOUT action. The last step is to manage which button to display log-in or log-out by moving this log-in outside of the render method and rendering the variable down below:
See the Pen Intro to Redux - Step 5. Login/Logout by Alex Bachuk (@abachuk) on CodePen.
Conclusion
Redux is gaining traction every day. It’s been used by many companies (Uber, Khan Academy, Twitter) and in many projects (Apollo, WordPress’ Calypso), successfully in production. Some developers might complain that there is a lot of overhead. In most cases, more code is required to perform simple actions like button clicks or simple UI changes. Redux isn’t a perfect fit for everything. There has to be a balance. Perhaps simple actions and UI changes don’t have to be a part of the Redux store and can be maintained at the component level.
Even though Redux might not be ideal solution for your app or framework, I highly recommend checking it out, especially for React applications.
Front page image credits: Lynn Fisher, @lynnandtonic
| https://www.smashingmagazine.com/2016/06/an-introduction-to-redux/ | CC-MAIN-2022-27 | refinedweb | 2,232 | 63.8 |
0
Hi All,
I have read that the system call "execlp" will take the first arguement as the file name(or path), second arguement as the command and all others are arguements to the command terminated by NULL.
i have written below program.
#include <unistd.h> #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #define PATH "/home/FILE/test" int main(void) { int fd,st; fd = fork(); if(fd == 0) { printf("In child\n"); if (execlp(PATH,"test","hello","how","are","you",(char *)0)){ printf("Exec succ\n"); }else { printf("Exec fail\n"); } } else { printf("In parent\n"); wait(&st); } }
the program is working correctly if i use the PATH macro as it gives the full path of the file image.
but if i replace PATH with "test"(image name) i am getting the out put as
In parent In child test: extra argument `how'
i dont understand the problem.
please some body point out where it is going wrong.
for your reference test.c
int main(int argc , char **argv) { int i= 0; for(;i<argc-1;){ printf(" %s \n", argv[++i]); } printf("this is a test\n"); return 0; }
Thanks in Advance,
Gaiety. | https://www.daniweb.com/programming/software-development/threads/412469/execlp-function-behaviour | CC-MAIN-2017-39 | refinedweb | 204 | 74.39 |
This post is about connecting to a WCF service from a client application. The key takeaways in this post relate to understanding endpoints and how to connect to them. We will explore adding service references from client applications. We will also show you how client applications can pass data into a WCF service.
The client is also capable of receiving return data from a WCF service call.
In this post, you will learn a few things:
How to write and debug a client application connecting to WCF services
How to access named endpoints inside of an app.config on the client
How to attach a service reference and generate type information to simplify client programming
How to run both the WCF service and the client using the debugging tools
Before we can connect from a client application, we need to look at the WCF services Mex endpoint. The client application will use this endpoint to retrieve type information about FlipCaseService.
Viewing app.config
We will now add the client project to the overall solution. Right mouse click on the solution and follow the menu below.
Adding a new project
Choose console application from the new Project dialog box. Provide a name as seen below.
Adding a console application
Note that our solution now has to projects. One for server and one for client.
Viewing solution Explorer
In order for the client to connect to the WCF service, we need to add a service reference. Right mouse click on the references I the client and choose Add Service reference.
Adding a service reference from the client
Although we already looked at the endpoint so that we knew how to connect to it, we can choose the Discover button as seen below. Because our client application is in the same solution as the WCF of service, this works. Note that we will provide a namespace name before clicking OK.
Connecting to a Mex endpoint
Adding the service reference also added two additional references as seen below.
Viewing the references that were added
Double-click on Program.cs to open the file to be edited.
Opening program.cs
The first red box below was about adding that using statements for the previous reference added. The second red box is the first line of code which will start the process of connecting to the WCF service. Notice the quotes are blank. Our WCF service offers three separate endpoints. The desired endpoint name is the one that gets typed into the quotes below in the second red box.
Adding some code
The five viewing the client’s app.config file, you can get the name of each of the three endpoints. We will arbitrarily use the endpoint that has basic HTTP binding.
Viewing the names of the endpoints
Paste in the name of the endpoint. Then add the remaining courage you see in the red box. This is the main routine for our client file Program.cs. You can double-click on Program.cs to open it so that you can edit it.
Wrapping up Program.cs
Notice in the red box below we made a slight edit. The variable sd is assigned the return value from FlipTheCase(). Notice that the original string being passed in is Bruno.
Writing the client-side code to consume Web services
After compiling the solution, you can begin by running the WCF service. Right mouse click on the Web service project and choose Debug/ Start new instance. This will load the WCF service into memory and make it available to the client application.
Starting the WCF service
Notice the WCF service is now up and running, as evidenced by the WCF test client application in the red box.
Viewing the WCF Client
We will now debug the client application by right mouse clicking on the client project in solution Explorer. Then choose Debug/Start new instance.
Starting the client
Notice below that the flipped case has appeared bRUNO.
Verifying correctness
This concludes explanation of how a client application can consume WCF Services. Although we chose a console application, you can also choose WPF or Winforms, and more.
In this post, you learned a few things:
My experience is that writing unit tests and having client and service created in the same test is a handy way to quickly and automatically test the full WCF-communication with all its layers between a client and a service. Testing non-GUI software through some GUI seems odd and unpractical, even as an example.
I agree that automating client side interactions with a WCF Service back-end makes sense. However, in order to properly understand how to automate and perform unit testing, you need a solid conceptual understanding of how to write hard-coded client applications by hand. Automating something without understanding the underpinnings is like memorizing a multiple choice exam without reading the questions. | http://blogs.msdn.com/b/brunoterkaly/archive/2013/10/28/wcf-programming-how-to-write-a-client-app-that-connects-to-a-wcf-service.aspx | CC-MAIN-2015-11 | refinedweb | 812 | 64.71 |
Wiki
SCons / ExternalTools
The SCons wiki has moved to
There are lots of tools you might want to have SCons launch instead of an editor. These frequently require certain types of environment variable inheritance, and SCons, by default does none of that. For example, let's say you want to use the most excellent tool ccache ( with your SConscript. You already know that you have to replace the tool with another, so you setup an environment like:
env = Environment( CXX = 'ccache g++' )
and invoke SCons, and see:
% scons scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... ccache g++ -c -o foo.o foo.cxx ccache: failed to create (null)/.ccache (No such file or directory) scons: *** [foo.o] Error 1 scons: building terminated because of errors.
Drat! So what's going on? Well, turns out that ccache relies on putting temp files in your HOME directory. Arguably, this is a bug with the way ccache handles the getenv() system call, but that's another story. The scoop is that, alas, the environment is not inherited by SCons. So, how does one get the environment into SCons? Very simply:
import os env = Environment(ENV = os.environ)
ah, much better:
% scons scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... ccache g++ -c -o foo.o foo.cxx ccache g++ -o foo foo.o scons: done building targets.
/!\ A better way of updating the ENV settings of the Environment is shown in ImportingEnvironmentSettings.
Updated | https://bitbucket.org/scons/scons/wiki/ExternalTools?action=AttachFile | CC-MAIN-2018-22 | refinedweb | 249 | 69.48 |
52674,
You can perform this task in two ...READ MORE
Hey,
To format a string, use the .format ...READ MORE
You can use this:
lines = sc.textFile(“hdfs://path/to/file/filename.txt”);
def isFound(line):
if ...READ MORE
Function Definition :
def test():Unit{
var a=10
var b=20
var c=a+b
}
calling ...READ MORE
Please try the following Scala code:
import org.apache.hadoop.conf.Configuration
import ...READ MORE
Please refer to the below code as ...READ MORE
The statement display(id, name, salary) is written before the display function ...READ MORE
Hi,
You can use a simple mathematical calculation ...READ MORE
Hey,
The Map.contains() method will tell you if ...READ MORE
you can access task information using TaskContext:
import org.apache.spark.TaskContext
sc.parallelize(Seq[Int](), ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/52674/appending-to-a-string-in-scala | CC-MAIN-2021-43 | refinedweb | 158 | 61.22 |
One fairly common requirement for web applications is the display of Summary fields with calculated values. An obvious example is a table of multiple records with column-summaries appearing underneath the table. Using ADF Faces technology, it is fairly simple to quickly develop an application that presents a multi-record layout based on data retrieved from a database. In this article we will see how we can add a summary column to the columns in such a table layout – and to make those summaries automatically updating when a value for one of the records in the table is changed in the specific column.
Creating the master-detail application
The application I use for this example is a well-known one: a Master-Detail (form-table) page for DEPT and EMP. For each Department, we will see the details in a Form layout with underneath a Table component with all Employees in the Department.
Using JDeveloper 10.1.3, ADF BC, ADF Binding (Framework) and ADF Faces, creating such a page is almost trivial, especially if you generate it using JHeadstart 10.1.3 (which is what I did here). In quick summary the steps:
- Create new Application, choose Web Technology (ADF BC and JSF) as Technology Template; Model and ViewController project are created automatically
- Create Business Components from Tables EMP and DEPT in SCOTT schema in the Model project; add View Link from Source DeptView to Target EmpView
- Using JHeadstart: enable JHeadstart on the ViewController project, create default Application Definition file, generate the application
- Without JHeadstart: create a new JSF JSP page, drag and drop DeptView from the AppModule DataControl as Editable Form, drag and drop EmpView2 under DeptView to the jspx page.
- Run the Application to verify the data is shown and the master-detail coordination works as expected.
Add the Salary Summary to the DeptView ViewObject
We want this page to also contain the summary of all salaries in the currently selected Department. Where to put it is a later concern, let’s first get it on the page in the first place. The steps for this are:
1. Select the DeptView ViewObject and select Edit from the RMB menu
2. On the Attributes tab, press the New button to create a new (transient) attribute called SalSum. SalSum is of type Number, is Never updateable and is not mapped to Column or SQL
3. On the Java Tab, check the checkbox Generate Java File under View Row Class DeptViewRowImpl; also check the Accessors checkbox:
4. Press OK to close the VO Editor wizard.
5. From the RMB menu on the DeptView VO, select the option Goto View Row Class
Locate the Number getSalSum() accessor method. Replace the default implementation with this one:
public Number getSalSum() {<br /> RowIterator emps = getEmpView();<br /> Number sum = new Number(0);<br /> while (emps.hasNext()) {<br /> sum = sum.add( (Number)emps.next().getAttribute("Sal"));<br /> }<br /> return sum;<br /> }<br />
6. Add the SalarySum to the page either by dragging and dropping it from the DataControl Palette, or by synchronizing the Dept Group in the JHeadstart Application Definition editor and regenerating the application.
7. Run the application to inspect the newly added SalarySum. It should contain the correct sum of the salaries in the currently selected department. If you change the value of one of the salaries, it will currently not be updated automatically when you leave the field. It will however be synchronized for example when you sort the table by clicking one of the sortable column headers.
Creating a proper Column Footer with the Salary Summary
The next step is to create the proper layout for the Salary Summary: we want to have it displayed in Footer underneath the Salary Column in our Table Component. The ADF Faces Column Component has a so called footer facet. We can use that facet to specify whatever should be rendered underneath the column. So we can implement the column footer facet for the Salary column, containing the SalarySummary, much like this:
<af:column<br /> <f:facet<br /> <h:panelGroup><br /> <h:outputText<br /> </h:panelGroup><br /> </f:facet><br /> <af:inputText<br /> <f:convertNumber<br /> </af:inputText><br /> <f:facet<br /> <h:panelGroup><br /> <af:outputText<br /> <f:convertNumber<br /> </af:outputText><br /> </h:panelGroup><br /> </f:facet><br /> </af:column><br />
However, it turns out that the Column Footer Facet gets only rendered if also the Table’s Footer Facet has been specified. Otherwise, the column footer is simply not rendered! The table footer facet is like this:
<f:facet<br /> <af:outputText<br /> </f:facet><br /></af:table><br />
Dynamically Refreshing the Salary Summary whenever a Salary value is changed
At this point, when the page is loaded, the Summary is displayed as expected. However, if we change the value of one of the salaries, the Summary is not immediately updated. Only when the table is refreshed for some other reason, such as sorting or detail disclosure/hide will the new summary value be shown. What we would like to have is an immediate update of the Summary whenever one of the Salaries is changed. We want to leverage the Partial Page Refresh mechanism of ADF Faces to help us realize this.
1. Let’s ensure that a change in a salary value causes an immediate submit to the server. We can easily do that by setting the autoSubmit attribute for the DetailEmpSal inputText component to true:
/> <f:convertNumber<br /> </af:inputText><br />
2. In order to update the SummarySal element upon processing the Partial Page Refresh caused by the Sal
ary change, we have to set the partia
lTriggers attribute.
However, here we run into a problem: we need to specify the ID of the component whose change should trigger the update of the Salary Sum in the partialTriggers attribute for the DeptEmpSalSum component. However, even though the ID for the Salary inputText component seems simple enough – DetailEmpSal – this is not the correct value! Since the Salary field will appear not once but once for every record in our table component and the ID needs to be unique, the actual ID will be different. Each Salary field will have an ID that includes DetailEmpSal as well as :0, :1, :2 etc. to make the ID values unique.
When I try a temporary workaround, just to see whether things are working, I run into a second issue: if I specify partialTriggers="DeptEmpDname" for the Column Footer Facet and/or the Table Footer Facet, it turns out that they are not in fact refreshed when PPR is processed. Only when I specify partialTriggers="DeptEmpDname" at the level of the Table Component will the Footer Facets be properly updated as part of the PPR processing cycle. That means I have now achieved that whenever I change one or more Salary values and I subsequently change the Department Name in the Master record, I get ‘dynamic, instantaneous’ update of the Salary Sum. Almost there, but not quite. By the way: what I am doing here is somewhat similar to Frank Nimphius’ blog article: ADF Faces: Compute totals of selected rows in a multi select table. He does not seem to have a problem with Table Footer refresh so perhaps I am doing something wrong here.
After consulting Frank and Duncan Mills, I am pointed in a new direction: programmatically specifying the targets of Partial Page Refresh. Sounds interesting, let’s try it out:
Programmatically specifying the targets of Partial Page Refresh
There is an API call – AdfFacesContext.getCurrentInstance().addPartialTarget(<the component to
refresh>); – that Duncan suggested to me. This should allow me to specify the Table – or perhaps even the Table and/or Column Footer Facet – that should be refreshed as part of the current PPR cycle. Of course, this call should be made whenever Salary has been changed. So using a ValueChangeListener, I should be well on my way. Should I not?
1. Create a new class EmpMgr to handle the Salary Changed event by adding the EmployeeTable to the list of partial targets:
import javax.faces.application.Application;<br />import javax.faces.component.UIComponent;<br />import javax.faces.context.FacesContext;<br />import javax.faces.event.ValueChangeEvent;<br /><br />import oracle.adf.view.faces.context.AdfFacesContext;<br /><br />public class EmpMgr {<br /> public EmpMgr() {<br /> }<br /><br /> public void HandleSalaryChangeEvent(ValueChangeEvent valueChangeEvent) {<br /> Application app = FacesContext.getCurrentInstance().getApplication();<br /> UIComponent table = (UIComponent)app.createValueBinding("#{DetailEmpCollectionModel.table}").getValue(FacesContext.getCurrentInstance());<br /> AdfFacesContext.getCurrentInstance().addPartialTarget(table);<br /><br /> }<br />}<br />
Note: the table component had its Binding property already set to #{DetailEmpCollectionModel.table}; I am simply reusing that binding. Also note that trying to use the DeptEmpSalSum outputText as partialTarget did not lead to a refresh: I had to use the table as refresh target.
2. Configure this new class as Managed Bean
...<br /> <managed-bean><br /> <managed-bean-name>EmpMgr</managed-bean-name><br /> <managed-bean-class>EmpMgr</managed-bean-class><br /> <managed-bean-scope>session</managed-bean-scope><br /> </managed-bean><br /></faces-config><br />
3. Specify a ValueChangeListener for the Salary field in my Employees table:
/><br /> <f:convertNumber<br /></af:inputText><br /> <br />
4. Run the application, change a salary and keep your fingers crossed… IT WORKS!!!! (Thanks Frank and Duncan). Note: this opens up a lot of very interesting possibilities: we can determine dynamically on the server side which fields to be updated in the browser by simply adding them to the list of partialTargets.
Resources
ADF Faces: Compute totals of selected rows in a multi select table – Frank Nimphius’ Blogbuster Weblog
ADF Faces Components Index on OTN
ADF Faces Apache Incubator – AF:Column component
Subversion Sources Repository for Apache MyFaces (Incubator) ADF Faces Sources
I need the same funtionality in 11g table in with jsff and regions ,Â
my outputText is not refresed(PPR ) on the column value submitsÂ
Â
hi
   Very Nice.
I wnt to source code of this file.
Thank you works very nice. I have one problem. When i have a new row in the table and i fill in the salary(in my case hours) then it doesnt refresh the total line. when ik refres the page the total does change. so it looks like the Partial Page Refresh does not work on new rows. do you have a solution for that problem?
Thanks..thanks..thanks.. for your article.Very useful.But I had one problem.if I dont add emps.first() before return sum,sum value equals to last value of salary.Because of this I add emps.first().Now I have new problem.
if table has range size,because of first() you cant see other pages.only show 1-10,what can I do about this problem???
Thanks for your article. The solution solved the same problem I had. In my case, I have had the table binded to the CoreTable in the bean. So, only one line of code in the back bean valueChange handler to programmatically specifying the table as the target of PPR.
Reference to this article from an interesting aggregator of JSF resources: Resources for Java server-side developers at | https://technology.amis.nl/2006/07/27/creating-a-dynamic-ajax-column-footer-summary-in-a-table-component-using-adf-faces/ | CC-MAIN-2015-27 | refinedweb | 1,852 | 53.71 |
Few weeks back we showed you how to upgrade to Ubuntu 12.04 from 10.04. That tutorial showed you how to do it from Ubuntu Desktop. Doing it from the desktop is easy and is the best option for new users. To read that post, click here.
However, Ubuntu can also be upgraded via the command console and for some users, this method is the easiest and fastest. This brief tutorial will show you how to upgrade Ubuntu via the command console if you prefer this method.
Objectives:
- Upgrade to Ubuntu 12.04 (Precise Pangolin) from Ubuntu 10.04 (Lucid Lynx) via the Terminal
- Enjoy!
To get started, press Ctrl – Alt – T on your keyboard to open the Terminal. When it opens, run the commands below to install update-manager-core if isn’t already installed.
sudo apt-get install update-manager-core
Next, run the commands below to open update-manager’s release upgrade file.
sudo vi /etc/update-manager/release-upgrades
Then, change the line with prompt in the file to prompt=lts if it’s not already set to lts. To change it, scroll down to the line press the X key on the keyboard to delete each character. Then hit the I key and begin typing the new line. When you’re done, press Esc key, then type :wq to save and exit.
Next, run the commands below to update all packages installed before you upgrade.
sudo apt-get update && sudo apt-get upgrade & sudo apt-get autoremove
Finally, run the commands below to begin the upgrade process.
sudo do-release-upgrade
When ask if you want to continue with the upgrade, type Y for yes.
Enjoy!
In my case, the above steps appeared to suceed until I tried “sudo do-release-upgrade”. Terminal:
#> sudo do-release-upgrade
Checking for a new ubuntu release
Failed Upgrade tool signature
Failed Upgrade tool
Done downloading Failed to fetch
Fetching the upgrade failed. There may be a network problem.
Any ideas?
Cheers,
Rob
I got the same error as Robz
These instructions fail, packages no longer on the server, lots of 404 failures
Before you do the do-release-upgrade, change the release-upgrades file back to the original as prompt=lts and then run do-release-upgrade.
It atleast started the upgrade process
This worked for me! Thanks a lot!
root@pcz-ee205855-2:~# do-release-upgrade
Traceback (most recent call last):
File “/usr/bin/do-release-upgrade”, line 10, in
from UpdateManager.Core.DistUpgradeFetcherCore import DistUpgradeFetcherCore
File “/usr/lib/python2.6/dist-packages/UpdateManager/Core/DistUpgradeFetcherCore.py”, line 34, in
import GnuPGInterface
ImportError: No module named GnuPGInterface
Run this:
cp /usr/share/pyshared/GnuPGInterface.py /usr/lib/python2.6/
I tried this, but then got the following message:
An unresolvable problem occurred while calculating the upgrade:
E:Error, pkgProblemResolver::Resolve generated breaks, this may be
caused by held… Done
Building dependency tree
Reading state information… Done
Building data structures… Done
Any idea what I can do ? I tried to install it before via the update manager but it always got stuck.
This fails at the last step. Probably because 10.04 is an LTS release.
I got:
Failed Upgrade tool signature
Failed Upgrade tool
Failed to fetch
Fetching the upgrade failed. There may be a network problem.
I got it to work by NOT changing the prompt=normal.
It should be prompt=lts
Thank you , very helpful to me
I tried this and all it did was stop half of my programs from running, change some graphics and it didn’t upgrade the distro
error ensuring `/var/lib/dpkg/reassemble.deb’ doesn’t exist: Read-only file system
No apport report written because MaxReports is reached already
dpkg: too many errors, stopping
Processing triggers for man-db …
dpkg: unrecoverable fatal error, aborting:
unable to flush updated status of `man-db’: Read-only file system
touch: cannot touch `/var/lib/update-notifier/dpkg-run-stamp’: Read-only file system
sh: 1: cannot create /var/lib/update-notifier/updates-available: Read-only file system
while updating the ubuntu i am getting this error, plz help me..
thanks, i will try it
Just lots of 404s.
Doesn’t appear to do anything.
Any ideas – I’m a *total* Linux idiot…
If you guys hitting 404 error,
below my little solution,
since Ubuntu old release package has been moved to the old archive, you should update your source.list file with below
deb lucid main restricted universe
deb lucid-updates main restricted universe
deb lucid-security main restricted universe
deb-src lucid main restricted universe
save, and then try from the beginning.
For more detail I found this tuts: | http://www.liberiangeek.net/2012/04/upgrade-to-ubuntu-12-04-from-ubuntu-10-04-via-the-terminal/ | CC-MAIN-2014-15 | refinedweb | 779 | 55.74 |
Hello readers, this is yet another post in our series on PyTorch. This post is aimed at PyTorch users who are familiar with the basics of PyTorch and would like to move to an intermediate level. While we covered how to implement a basic classifier in an earlier post, in this post we will discuss how to implement more complex deep learning functionality using PyTorch. Some of the objectives of this post are to make you understand:
- What is the difference between PyTorch classes like `nn.Module`, `nn.Functional`, and `nn.Parameter`, and when to use which
- How to customise your training options such as different learning rates for different layers, different learning rate schedules
- Custom Weight Initialisation
Before we begin, let me remind you that this post builds on the earlier parts of the series.
So, let's get started.
You can get all the code in this post (and other posts as well) in the GitHub repo here.
nn.Module vs nn.Functional
This is something that comes up quite a lot, especially when you are reading open source code. In PyTorch, layers are often implemented as either `torch.nn.Module` objects or `torch.nn.functional` functions. Which one to use? Which one is better?
As we covered in Part 2, `torch.nn.Module` is basically the cornerstone of PyTorch. The way it works is that you first define an `nn.Module` object, and then invoke its `forward` method to run it. This is an object-oriented way of doing things.
On the other hand, `nn.functional` provides some layers / activations in the form of functions that can be called directly on the input rather than being defined as objects. For example, in order to rescale an image tensor, you call `torch.nn.functional.interpolate` on it.
So how do we choose what to use when? The deciding factor is whether the layer / activation / loss we are implementing has a state.
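To make the contrast concrete, here is a small sketch of the same activation written both ways. ReLU holds no state, so the two forms agree exactly:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 5)

# Object-oriented style: instantiate an nn.Module, then call it
relu_module = nn.ReLU()
out_module = relu_module(x)

# Functional style: call the function directly on the input
out_functional = F.relu(x)

# ReLU holds no weights, so both styles give identical results
print(torch.equal(out_module, out_functional))  # True
```

For a stateless operation like this, the choice between the two is purely a matter of style and convenience.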
Understanding Stateful-ness
Normally, any layer can be seen as a function. For example, a convolutional operation is just a bunch of multiplication and addition operations. So it makes sense for us to just implement it as a function, right? But wait, the layer holds weights which need to be stored and updated while we are training. Therefore, from a programmatic angle, a layer is more than a function. It also needs to hold data, which changes as we train our network.
I now want to stress the fact that the data held by the convolutional layer changes. This means that the layer has a state which changes as we train. To implement a function that does the convolutional operation, we would also need to define a data structure to hold the weights of the layer separately from the function itself, and then make this external data structure an input to our function.
Or, to avoid that hassle, we could just define a class to hold the data structure and make the convolutional operation a member function. This would really ease up our job, as we wouldn't have to worry about stateful variables existing outside of the function. In these cases, we prefer to use `nn.Module` objects where we have weights or other state which might define the behaviour of the layer. For example, a dropout / batch norm layer behaves differently during training and inference.
On the other hand, where no state or weights are required, one could use `nn.functional`. Examples being resizing (`nn.functional.interpolate`) and average pooling (`nn.functional.avg_pool2d`).
Despite the above reasoning, most of the `nn.Module` classes have `nn.functional` counterparts. However, the above line of reasoning is to be respected during practical work.
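As a quick illustration of the stateful case, here is a small sketch showing a dropout layer changing behaviour between training and evaluation mode, exactly the kind of mode-dependent state that makes the `nn.Module` form the natural choice:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

# In training mode, dropout randomly zeroes elements
# and rescales the survivors by 1 / (1 - p) = 2.0
drop.train()
train_out = drop(x)

# In eval mode, dropout is a no-op: the input passes through unchanged
drop.eval()
eval_out = drop(x)

print(train_out)  # elements are either 0.0 or 2.0
print(torch.equal(eval_out, x))  # True
```

The layer itself carries the mode flag; with a purely functional implementation you would have to thread that flag through every call yourself.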
nn.Parameter
An important class in PyTorch is the `nn.Parameter` class, which, to my surprise, has gotten little coverage in PyTorch introductory texts. Consider the following case.
```python
import torch
import torch.nn as nn

class net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 5)

    def forward(self, x):
        return self.linear(x)

myNet = net()

# prints the weights and bias of the Linear layer
print(list(myNet.parameters()))
```
Each `nn.Module` has a `parameters()` function which returns, well, its trainable parameters. We have to implicitly define what these parameters are. In the definition of `nn.Linear`, the authors of PyTorch defined the weights and biases to be parameters of that layer. However, notice one thing: when we defined `net`, we didn't need to add the parameters of `nn.Linear` to the parameters of `net`. It happened implicitly by virtue of setting the `nn.Linear` object as a member of the `net` object.
This is internally facilitated by the `nn.Parameter` class, which subclasses the `Tensor` class. When we invoke the `parameters()` function of an `nn.Module` object, it returns all its members which are `nn.Parameter` objects.
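You can check this directly. The sketch below shows that the weight of a stock layer is an `nn.Parameter` (and, since `nn.Parameter` subclasses `Tensor`, also a tensor):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)

# The layer's weight is an nn.Parameter object...
print(isinstance(conv.weight, nn.Parameter))  # True
# ...and nn.Parameter subclasses torch.Tensor
print(isinstance(conv.weight, torch.Tensor))  # True

# parameters() yields exactly the registered parameters: weight and bias
print(len(list(conv.parameters())))  # 2
```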
In fact, all the training weights of `nn.Module` classes are implemented as `nn.Parameter` objects. Whenever an `nn.Module` (`nn.Linear` in our case) is assigned as a member of another `nn.Module`, the "parameters" of the assigned object (i.e. the weights of `nn.Linear`) are also added to the "parameters" of the object it is being assigned to (the parameters of the `net` object). This is called registering the "parameters" of an `nn.Module`.
If you try to assign a plain tensor to an nn.Module object, it won't show up in parameters() unless you define it as an nn.Parameter object. This has been done to facilitate scenarios where you might need to cache a non-differentiable tensor, for example caching the previous output in the case of RNNs.
```python
class net1(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Linear(10, 5)
        self.tens = torch.ones(3, 4)  # This won't show up in a parameter list

    def forward(self, x):
        return self.conv(x)

myNet = net1()
print(list(myNet.parameters()))

##########################################################

class net2(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Linear(10, 5)
        self.tens = nn.Parameter(torch.ones(3, 4))  # This will show up in a parameter list

    def forward(self, x):
        return self.conv(x)

myNet = net2()
print(list(myNet.parameters()))

##########################################################

class net3(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Linear(10, 5)
        self.net = net2()  # Parameters of net2 will show up in the parameters of net3

    def forward(self, x):
        return self.conv(x)

myNet = net3()
print(list(myNet.parameters()))
```
nn.ModuleList and nn.ParameterList
I remember I had to use an nn.ModuleList when I was implementing YOLO v3 in PyTorch. I had to create the network by parsing a text file which contained the architecture. I stored all the corresponding nn.Module objects in a Python list and then made the list a member of the nn.Module object representing the network. Simplified, it looked something like this.
```python
layer_list = [nn.Conv2d(5, 5, 3), nn.BatchNorm2d(5), nn.Linear(5, 2)]

class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = layer_list

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

net = myNet()
print(list(net.parameters()))  # Parameters of modules in layer_list don't show up.
```
As you can see, unlike registering individual modules, assigning a Python list does not register the parameters of the modules inside the list. To fix this, we wrap our list in the nn.ModuleList class and then assign it as a member of the network class.
```python
layer_list = [nn.Conv2d(5, 5, 3), nn.BatchNorm2d(5), nn.Linear(5, 2)]

class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(layer_list)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

net = myNet()
print(list(net.parameters()))  # Parameters of modules in layer_list show up.
```
Similarly, a list of tensors can be registered by wrapping the list inside a
nn.ParameterList class.
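For instance, a minimal sketch (the class and attribute names here are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

class biasNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Wrapping a plain Python list of tensors in nn.ParameterList
        # registers every tensor in it as a parameter of this module.
        self.biases = nn.ParameterList(
            [nn.Parameter(torch.zeros(5)) for _ in range(3)]
        )

    def forward(self, x):
        for b in self.biases:
            x = x + b
        return x

net = biasNet()
print(len(list(net.parameters())))  # 3
```

Had the biases been kept in a bare Python list, parameters() would have returned nothing.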
Weight Initialisation
Weight initialisation can influence the results of your training. What's more, you may require different weight initialisation schemes for different sorts of layers. This can be accomplished with the modules and apply functions.
modules is a member function of the nn.Module class which returns an iterator over all the member nn.Module objects of an nn.Module. The apply function can then be called on an nn.Module to set its initialisation.
```python
import matplotlib.pyplot as plt
%matplotlib inline

class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(10, 10, 3)
        self.bn = nn.BatchNorm2d(10)

    def weights_init(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.normal_(module.weight, mean=0, std=1)
                nn.init.constant_(module.bias, 0)

Net = myNet()
Net.weights_init()

for module in Net.modules():
    if isinstance(module, nn.Conv2d):
        weights = module.weight
        weights = weights.reshape(-1).detach().cpu().numpy()
        print(module.bias)  # Bias set to zero
        plt.hist(weights)
        plt.show()
```
There are a plethora of in-place initialisation functions to be found in the torch.nn.init module.
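As an alternative to looping over modules() by hand, the same initialisation can be done with apply, which walks the module tree and calls the given function on every submodule. A sketch (the helper name is illustrative):

```python
import torch.nn as nn

def weights_init(module):
    # apply() calls this once for every submodule in the tree.
    if isinstance(module, nn.Conv2d):
        nn.init.normal_(module.weight, mean=0, std=1)
        nn.init.constant_(module.bias, 0)

net = nn.Sequential(nn.Conv2d(10, 10, 3), nn.BatchNorm2d(10))
net.apply(weights_init)  # recursively applies weights_init to net and its children
print(net[0].bias)       # all zeros
```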
modules() vs children()
A very similar function to modules is children. The difference is slight but important. As we know, an nn.Module object can contain other nn.Module objects as its data members. children() will only return a list of the nn.Module objects which are data members of the object on which children is being called.
On the other hand, modules() goes recursively inside each nn.Module object, creating a list of every nn.Module object that comes along the way, until there are no nn.Module objects left. Note that modules() also returns the nn.Module on which it has been called as part of the list.

Note that the above statement remains true for all objects/classes that subclass the nn.Module class.
```python
class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.convBN = nn.Sequential(nn.Conv2d(10, 10, 3), nn.BatchNorm2d(10))
        self.linear = nn.Linear(10, 2)

    def forward(self, x):
        pass

Net = myNet()
print("Printing children\n------------------------------")
print(list(Net.children()))
print("\n\nPrinting Modules\n------------------------------")
print(list(Net.modules()))
```
So, when we initialise the weights, we might want to use the modules() function, since children() will not let us go inside the nn.Sequential object to initialise the weights of its members.
Printing Information About the Network
We may need to print information about the network, whether for the user or for debugging purposes. PyTorch provides a really neat way to print a lot of information about our network using its named_* functions. There are four such functions.
1. named_parameters. Returns an iterator which gives a tuple containing the name of the parameter (if a convolutional layer is assigned as self.conv1, then its parameters would be conv1.weight and conv1.bias) and the value returned by the __repr__ function of the nn.Parameter.
2. named_modules. Same as above, but the iterator returns modules, like the modules() function does.
3. named_children. Same as above, but the iterator returns modules, like children() does.
4. named_buffers. Returns buffer tensors, such as the running mean of a Batch Norm layer.
```python
for x in Net.named_modules():
    print(x[0], x[1], "\n-------------------------------")
```
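A similar loop over named_parameters() shows the dotted parameter names next to their tensors; here is a small self-contained sketch:

```python
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(10, 10, 3), nn.Linear(10, 2))
for name, param in net.named_parameters():
    print(name, param.shape)
# 0.weight torch.Size([10, 10, 3, 3])
# 0.bias   torch.Size([10])
# 1.weight torch.Size([2, 10])
# 1.bias   torch.Size([2])
```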
Different Learning Rates For Different Layers
In this section, we will learn how to use different learning rates for different layers. More generally, we will cover how to have different hyperparameters for different groups of parameters, whether that means different learning rates for different layers, or different learning rates for biases and weights.

The idea is fairly simple to implement. In our previous post, where we implemented a CIFAR classifier, we passed all the parameters of the network as a whole to the optimiser object.
```python
class myNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        return self.fc2(self.fc1(x))

Net = myNet()
optimiser = torch.optim.SGD(Net.parameters(), lr=0.5)
```
However, the torch.optim classes allow us to provide different sets of parameters, each with its own learning rate, in the form of a list of dictionaries.
```python
optimiser = torch.optim.SGD(
    [{"params": Net.fc1.parameters(), "lr": 0.001, "momentum": 0.99},
     {"params": Net.fc2.parameters()}],
    lr=0.01, momentum=0.9,
)
```
In the above scenario, the parameters of `fc1` use a learning rate of 0.001 and a momentum of 0.99. If a hyperparameter is not specified for a group of parameters (like `fc2`), that group uses the default value of the hyperparameter, given as an input argument to the optimiser function. You could create parameter lists on the basis of different layers, or on whether a parameter is a weight or a bias, using the named_parameters() function we covered above.
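For example, a sketch of splitting weights and biases into separate parameter groups with named_parameters() (the grouping rule and values are illustrative):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 5), nn.Linear(5, 2))

weights = [p for n, p in net.named_parameters() if n.endswith("weight")]
biases = [p for n, p in net.named_parameters() if n.endswith("bias")]

optimiser = torch.optim.SGD(
    [{"params": weights},               # uses the default lr given below
     {"params": biases, "lr": 0.02}],   # biases get their own learning rate
    lr=0.01, momentum=0.9,
)
print([g["lr"] for g in optimiser.param_groups])  # [0.01, 0.02]
```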
Learning Rate Scheduling
The learning rate schedule is a major hyperparameter that you will want to tune. PyTorch provides support for scheduling learning rates with its torch.optim.lr_scheduler module, which has a variety of learning rate schedules. The following demonstrates one example.
```python
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimiser, milestones=[10, 20], gamma=0.1)
```
The above scheduler multiplies the learning rate by gamma each time we reach an epoch contained in the milestones list. In our case, the learning rate is multiplied by 0.1 at the 10th and the 20th epoch. You will also have to call scheduler.step() in the loop in your code that goes over the epochs, so that the learning rate is updated.
Generally, the training loop is made of two nested loops: one goes over the epochs, and the nested one goes over the batches in that epoch. Make sure you call scheduler.step() once per epoch in the epoch loop so your learning rate is updated. Be careful not to put it in the batch loop, otherwise your learning rate may be updated at the 10th batch rather than the 10th epoch.
Also remember that scheduler.step() is no replacement for optim.step(); you'll have to call optim.step() every time you backpropagate. (This would be in the "batch" loop.)
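A runnable skeleton of the two nested loops described above, with dummy data standing in for a real dataloader. Note that recent PyTorch versions (1.1+) expect scheduler.step() to be called after the epoch's optimiser updates, so this sketch places it at the end of the epoch loop:

```python
import torch
import torch.nn as nn

net = nn.Linear(10, 2)
optimiser = torch.optim.SGD(net.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimiser, milestones=[2], gamma=0.1)
loss_fn = nn.MSELoss()

# Dummy "dataset": 4 batches of random inputs/targets per epoch.
data = [(torch.randn(8, 10), torch.randn(8, 2)) for _ in range(4)]

for epoch in range(4):
    for inputs, targets in data:   # inner loop: batches
        optimiser.zero_grad()
        loss = loss_fn(net(inputs), targets)
        loss.backward()
        optimiser.step()           # every batch, after each backward pass
    scheduler.step()               # once per epoch, never in the batch loop

print(optimiser.param_groups[0]["lr"])  # ~0.01 after the milestone at epoch 2
```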
Saving your Model
You might want to save your model for later use in inference, or you might just want to create training checkpoints. When it comes to saving models in PyTorch, you have two options.

The first is to use torch.save. This is equivalent to serialising the entire nn.Module object using pickle. This saves the entire model to disk. You can load this model back into memory later with torch.load.
```python
torch.save(Net, "net.pth")
Net = torch.load("net.pth")
print(Net)
```
The above saves the entire model, with weights and architecture. If you only need the weights, then instead of saving the entire model you can save just the state_dict of the model. The state_dict is basically a dictionary which maps the nn.Parameter objects of a network to their values.
One can load an existing state_dict into an nn.Module object, as shown in the following code. Note that this does not involve saving the entire model, only the parameters. You will have to create the network with its layers before you load the state dict. If the network architecture is not exactly the same as the one whose state_dict we saved, PyTorch will throw an error.
```python
for key in Net.state_dict():
    print(key, Net.state_dict()[key])

torch.save(Net.state_dict(), "net_state_dict.pth")
Net.load_state_dict(torch.load("net_state_dict.pth"))
```
An optimiser object from torch.optim also has a state_dict, which is used to store the hyperparameters of the optimisation algorithm. It can be saved and loaded in a similar way to the above, by calling load_state_dict on an optimiser object.
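A common checkpoint pattern combines both state_dicts in one file (the file name and the epoch value here are arbitrary):

```python
import torch
import torch.nn as nn

net = nn.Linear(10, 2)
optimiser = torch.optim.SGD(net.parameters(), lr=0.1)
epoch = 5

# Save model weights and optimiser state together in one checkpoint file.
torch.save({
    "epoch": epoch,
    "model_state_dict": net.state_dict(),
    "optimiser_state_dict": optimiser.state_dict(),
}, "checkpoint.pth")

# Later: rebuild the objects with the same architecture, then restore their states.
net2 = nn.Linear(10, 2)
optimiser2 = torch.optim.SGD(net2.parameters(), lr=0.1)
ckpt = torch.load("checkpoint.pth")
net2.load_state_dict(ckpt["model_state_dict"])
optimiser2.load_state_dict(ckpt["optimiser_state_dict"])
print(ckpt["epoch"])  # 5
```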
Conclusion
This completes our discussion of some of the more advanced features of PyTorch. I hope the things you've read in this post will help you implement the complex deep learning ideas you may have come up with. Here are links for further study, should you be interested.
- A list of learning rate scheduling options in PyTorch
- Saving and Loading Models - Official PyTorch tutorials
- What is torch.nn really?
These notes are for developing a C++ application. If you are a developing a .NET/C# application please see our .NET Wiki instead.
Checklist
Before we begin, you’ll need the following:
Install the SDK
- After downloading the SDK, extract it to a folder in your home directory.
- Open the Terminal.
- Change directories to the SDK files you just extracted.
- Run `sudo make all` to install the library to your machine.
Run the Samples
You should now be able to run the samples. Change directories to the ./bin folder and run some samples:
The Hello World example loads up Google and renders it to a JPG on disk:
./awesomium_sample_hello
The WebFlow sample depends on SDL 1.2 and OpenGL, and demonstrates a simple 3D browser:
./awesomium_sample_webflow
Using Awesomium
To use most of the Awesomium API in your C++ code, just include the following:
#include <Awesomium/WebCore.h>
To link against Awesomium 1.7.4 in your applications, just add “-lawesomium-1-7” to your link command:
g++ main.cpp -lawesomium-1-7 | http://wiki.awesomium.com/getting-started/setting-up-on-linux.html | CC-MAIN-2017-04 | refinedweb | 166 | 61.73 |
import "github.com/golang/go/src/cmd/compile/internal/types"
etype_string.go identity.go pkg.go scope.go sym.go type.go utils.go
```go
const (
	IgnoreBlankFields componentsIncludeBlankFields = false
	CountBlankFields  componentsIncludeBlankFields = true
)
```
MaxPkgHeight is a height greater than any likely package height.
```go
var (
	// Predeclared alias types. Kept separate for better error messages.
	Bytetype *Type
	Runetype *Type

	// Predeclared error interface type.
	Errortype *Type

	// Types to represent untyped string and boolean constants.
	Idealstring *Type
	Idealbool   *Type

	// Types to represent untyped numeric constants.
	Idealint     = New(TIDEAL)
	Idealrune    = New(TIDEAL)
	Idealfloat   = New(TIDEAL)
	Idealcomplex = New(TIDEAL)
)
```
```go
var (
	// TSSA types. Haspointers assumes these are pointer-free.
	TypeInvalid = newSSA("invalid")
	TypeMem     = newSSA("mem")
	TypeFlags   = newSSA("flags")
	TypeVoid    = newSSA("void")
	TypeInt128  = newSSA("int128")
)
```
```go
var (
	Widthptr int
	Dowidth  func(*Type)
	Fatalf   func(string, ...interface{})
	Sconv    func(*Sym, int, int) string  // orig: func sconv(s *Sym, flag FmtFlag, mode fmtMode) string
	Tconv    func(*Type, int, int) string // orig: func tconv(t *Type, flag FmtFlag, mode fmtMode) string
	FormatSym  func(*Sym, fmt.State, rune, int)  // orig: func symFormat(sym *Sym, s fmt.State, verb rune, mode fmtMode)
	FormatType func(*Type, fmt.State, rune, int) // orig: func typeFormat(t *Type, s fmt.State, verb rune, mode fmtMode)
	TypeLinkSym func(*Type) *obj.LSym

	Ctxt *obj.Link

	FmtLeft     int
	FmtUnsigned int
	FErr        int
)
```
The following variables must be initialized early by the frontend. They are here to break import cycles. TODO(gri) eliminate these dependencies.
List of .inittask entries in imported packages, in source code order.
NewPtrCacheEnabled controls whether *T Types are cached in T. Caching is disabled just before starting the backend. This allows the backend to run concurrently.
ParamsResults is like RecvsParamsResults, but omits receiver parameters.
RecvsParams is like RecvsParamsResults, but omits result parameters.
RecvsParamsResults stores the accessor functions for a function Type's receiver, parameters, and result parameters, in that order. It can be used to iterate over all of a function's parameter lists.
Types stores pointers to predeclared named types.
It also stores pointers to several special types:
- Types[TANY] is the placeholder "any" type recognized by substArgTypes. - Types[TBLANK] represents the blank variable's type. - Types[TNIL] represents the predeclared "nil" value's type. - Types[TUNSAFEPTR] is package unsafe's Pointer type.
CleanroomDo invokes f in an environment with no preexisting packages. For testing of import/export only.
Identical reports whether t1 and t2 are identical types, following the spec rules. Receiver parameter types are ignored.
IdenticalIgnoreTags is like Identical, but it ignores struct tags for struct identity.
IsExported reports whether name is an exported Go symbol (that is, whether it begins with an upper-case letter).
Markdcl records the start of a new block scope for declarations.
Popdcl pops the innermost block scope and restores all symbol declarations to their previous state.
Pushdcl pushes the current declaration for symbol s (if any) so that it can be shadowed by a new declaration within a nested block scope.
```go
type Array struct {
	Elem  *Type // element type
	Bound int64 // number of elements; <0 if unknown yet
}
```
Array contains Type fields specific to array types.
Chan contains Type fields specific to channel types.
ChanArgs contains Type fields specific to TCHANARGS types.
ChanDir is whether a channel can send, receive, or both.
```go
const (
	// types of channel
	// must match ../../../../reflect/type.go:/ChanDir
	Crecv ChanDir = 1 << 0
	Csend ChanDir = 1 << 1
	Cboth ChanDir = Crecv | Csend
)
```
Cmp is a comparison between values a and b:

	-1 if a < b
	 0 if a == b
	 1 if a > b
EType describes a kind of type.
```go
const (
	Txxx EType = iota

	TINT8
	TUINT8
	TINT16
	TUINT16
	TINT32
	TUINT32
	TINT64
	TUINT64
	TINT
	TUINT
	TUINTPTR

	TCOMPLEX64
	TCOMPLEX128

	TFLOAT32
	TFLOAT64

	TBOOL

	TPTR
	TFUNC
	TSLICE
	TARRAY
	TSTRUCT
	TCHAN
	TMAP
	TINTER
	TFORW
	TANY
	TSTRING
	TUNSAFEPTR

	// pseudo-types for literals
	TIDEAL // untyped numeric constants
	TNIL
	TBLANK

	// pseudo-types for frame layout
	TFUNCARGS
	TCHANARGS

	// SSA backend types
	TSSA   // internal types used by SSA backend (flags, memory, etc.)
	TTUPLE // a pair of types, used by SSA backend

	NTYPE
)
```
```go
type Field struct {
	Embedded uint8 // embedded field

	Pos  src.XPos
	Sym  *Sym
	Type *Type  // field type
	Note string // literal string annotation

	// For fields that represent function parameters, Nname points
	// to the associated ONAME Node.
	Nname *Node

	// Offset in bytes of this field or method within its enclosing struct
	// or interface Type.
	Offset int64
	// contains filtered or unexported fields
}
```
A Field represents a field in a struct or a method in an interface or associated with a named type.
End returns the offset of the first byte immediately after this field.
IsMethod reports whether f represents a method rather than a struct field.
Fields is a pointer to a slice of *Field. This saves space in Types that do not have fields or methods compared to a simple slice of *Field.
Append appends entries to f.
Index returns the i'th element of Fields. It panics if f does not have at least i+1 elements.
Len returns the number of entries in f.
Set sets f to a slice. This takes ownership of the slice.
Slice returns the entries in f as a slice. Changes to the slice entries will be reflected in f.
type Forward struct { Copyto []*Type // where to copy the eventual value to Embedlineno src.XPos // first use of this type as an embedded type }
Forward contains Type fields specific to forward types.
Funarg records the kind of function argument.
```go
const (
	FunargNone Funarg = iota
	FunargRcvr         // receiver
	FunargParams       // input parameters
	FunargResults      // output results
)
```
```go
type Func struct {
	Receiver *Type // function receiver
	Results  *Type // function results
	Params   *Type // function params

	Nname *Node

	// Argwid is the total width of the function receiver, params, and results.
	// It gets calculated via a temporary TFUNCARGS type.
	// Note that TFUNC's Width is Widthptr.
	Argwid int64

	Outnamed bool
	// contains filtered or unexported fields
}
```
Func contains Type fields specific to func types.
FuncArgs contains Type fields specific to TFUNCARGS types.
Interface contains Type fields specific to interface types.
```go
type Map struct {
	Key  *Type // Key type
	Elem *Type // Val (elem) type

	Bucket *Type // internal struct type representing a hash bucket
	Hmap   *Type // internal struct type representing the Hmap (map header object)
	Hiter  *Type // internal struct type representing hash iterator state
}
```
Map contains Type fields specific to maps.
Dummy Node so we can refer to *Node without actually having a gc.Node. Necessary to break import cycles. TODO(gri) try to eliminate soon
```go
type Pkg struct {
	Path    string // string literal used in import statement, e.g. "runtime/internal/sys"
	Name    string // package name, e.g. "sys"
	Prefix  string // escaped path for use in symbol table
	Syms    map[string]*Sym
	Pathsym *obj.LSym

	// Height is the package's height in the import graph. Leaf
	// packages (i.e., packages with no imports) have height 0,
	// and all other packages have height 1 plus the maximum
	// height of their imported packages.
	Height int

	Imported bool // export data of this package was parsed
	Direct   bool // imported directly
}
```
ImportedPkgList returns the list of directly imported packages. The list is sorted by package path.
NewPkg returns a new Pkg for the given package path and name. Unless name is the empty string, if the package exists already, the existing package name and the provided name must match.
LookupOK looks up name in pkg and reports whether it previously existed.
Ptr contains Type fields specific to pointer types.
Slice contains Type fields specific to slice types.
```go
type Struct struct {
	// Maps have three associated internal structs (see struct MapType).
	// Map links such structs back to their map type.
	Map *Type

	Funarg Funarg // type of function arguments for arg struct
	// contains filtered or unexported fields
}
```
StructType contains Type fields specific to struct types.
```go
type Sym struct {
	Importdef *Pkg   // where imported definition was found
	Linkname  string // link name

	Pkg  *Pkg
	Name string // object name

	// saved and restored by dcopy
	Def        *Node    // definition: ONAME OTYPE OPACK or OLITERAL
	Block      int32    // blocknumber to catch redeclaration
	Lastlineno src.XPos // last declaration for diagnostic

	Label   *Node // corresponding label (ephemeral)
	Origpkg *Pkg  // original package for . import
	// contains filtered or unexported fields
}
```
Sym represents an object name in a segmented (pkg, name) namespace. Most commonly, this is a Go identifier naming an object declared within a package, but Syms are also used to name internal synthesized objects.
As an exception, field and method names that are exported use the Sym associated with localpkg instead of the package that declared them. This allows using Sym pointer equality to test for Go identifier uniqueness when handling selector expressions.
Ideally, Sym should be used for representing Go language constructs, while cmd/internal/obj.LSym is used for representing emitted artifacts.
NOTE: In practice, things can be messier than the description above for various reasons (historical, convenience).
Less reports whether symbol a is ordered before symbol b.
Symbols are ordered exported before non-exported, then by name, and finally (for non-exported symbols) by package height and path.
Ordering by package height is necessary to establish a consistent ordering for non-exported names with the same spelling but from different packages. We don't necessarily know the path for the package being compiled, but by definition it will have a height greater than any other packages seen within the compilation unit. For more background, see issue #24693.
PkgDef returns the definition associated with s at package scope.
SetPkgDef sets the definition associated with s at package scope.
```go
type Type struct {
	// Extra contains extra etype-specific fields.
	// As an optimization, those etype-specific structs which contain exactly
	// one pointer-shaped field are stored as values rather than pointers when possible.
	//
	// TMAP: *Map
	// TFORW: *Forward
	// TFUNC: *Func
	// TSTRUCT: *Struct
	// TINTER: *Interface
	// TFUNCARGS: FuncArgs
	// TCHANARGS: ChanArgs
	// TCHAN: *Chan
	// TPTR: Ptr
	// TARRAY: *Array
	// TSLICE: Slice
	Extra interface{}

	// Width is the width of this Type in bytes.
	Width int64 // valid if Align > 0

	Nod  *Node // canonical OTYPE node
	Orig *Type // original type (type literal or predefined type)

	// Cache of composite types, with this type being the element type.
	Cache struct {
		// contains filtered or unexported fields
	}

	Sym    *Sym  // symbol containing name, for named types
	Vargen int32 // unique name for OTYPE/ONAME

	Etype EType // kind of type
	Align uint8 // the required alignment of this type, in bytes (0 means Width and Align have not yet been computed)
	// contains filtered or unexported fields
}
```
A Type represents a Go type.
FakeRecvType returns the singleton type used for interface method receivers.
New returns a new Type of the specified kind.
NewArray returns a new fixed-length array Type.
NewChan returns a new chan Type with direction dir.
NewChanArgs returns a new TCHANARGS type for channel type c.
NewFuncArgs returns a new TFUNCARGS type for func type f.
NewMap returns a new map Type with key type k and element (aka value) type v.
NewPtr returns the pointer type pointing to t.
NewSlice returns the slice Type with element type elem.
SubstAny walks t, replacing instances of "any" with successive elements removed from types. It returns the substituted type.
ArgWidth returns the total aligned argument size for a function. It includes the receiver, parameters, and results.
ChanArgs returns the channel type for TCHANARGS type t.
ChanDir returns the direction of a channel type t. The direction will be one of Crecv, Csend, or Cboth.
ChanType returns t's extra channel-specific fields.
Compare compares types for purposes of the SSA back end, returning a Cmp (one of CMPlt, CMPeq, CMPgt). The answers are correct for an optimizer or code generator, but not necessarily typechecking. The order chosen is arbitrary, only consistency and division into equivalence classes (Types that compare CMPeq) matters.
Elem returns the type of elements of t. Usable with pointers, channels, arrays, slices, and maps.
Field returns the i'th field/method of struct/interface type t.
FieldSlice returns a slice of containing all fields/methods of struct/interface type t.
ForwardType returns t's extra forward-type-specific fields.
FuncArgs returns the func type for TFUNCARGS type t.
FuncType returns t's extra func-specific fields.
HasHeapPointer reports whether t contains a heap pointer. This is used for write barrier insertion, so it ignores pointers to go:notinheap types.
HasNil reports whether the set of values determined by t includes nil.
IsEmptyInterface reports whether t is an empty interface type.
IsFuncArgStruct reports whether t is a struct representing function parameters.
IsKind reports whether t is a Type of the specified kind.
IsPtr reports whether t is a regular Go pointer type. This does not include unsafe.Pointer.
IsPtrElem reports whether t is the element of a pointer (to t).
IsPtrShaped reports whether t is represented by a single machine pointer. In addition to regular Go pointer types, this includes map, channel, and function types and unsafe.Pointer. It does not include array or struct types that consist of a single pointer shaped type. TODO(mdempsky): Should it? See golang.org/issue/15028.
IsUnsafePtr reports whether t is an unsafe pointer.
IsUntyped reports whether t is an untyped type.
IsVariadic reports whether function type t is variadic.
Key returns the key type of map type t.
LongString generates a complete description of t. It is useful for reflection, or when a unique fingerprint or hash of a type is required.
MapType returns t's extra map-specific fields.
Nname returns the associated function's nname.
NumComponents returns the number of primitive elements that compose t. Struct and array types are flattened for the purpose of counting. All other types (including string, slice, and interface types) count as one element. If countBlank is IgnoreBlankFields, then blank struct fields (and their comprised elements) are excluded from the count. struct { x, y [3]int } has six components; [10]struct{ x, y string } has twenty.
Pkg returns the package that t appeared in.
Pkg is only defined for function, struct, and interface types (i.e., types with named elements). This information isn't used by cmd/compile itself, but we need to track it because it's exposed by the go/types API.
Recv returns the receiver of function type t, if any.
SetFields sets struct/interface type t's fields/methods to fields.
Nname sets the associated function's nname.
SetPkg sets the package that t appeared in.
ShortString generates a short description of t. It is used in autogenerated method names, reflection, and itab names.
SoleComponent returns the only primitive component in t, if there is exactly one. Otherwise, it returns nil. Components are counted as in NumComponents, including blank fields.
StructType returns t's extra struct-specific fields.
Tie returns 'T' if t is a concrete type, 'I' if t is an interface type, and 'E' if t is an empty interface type. It is used to build calls to the conv* and assert* runtime routines.
ToUnsigned returns the unsigned equivalent of integer type t.
Package types imports 9 packages. Updated 2020-01-20.
Placing new names under standard namespaces?
Discussion in 'Ruby' started by Ammar Ali, Oct 25, 2010.
NAME
led -- API for manipulating LED's, lamps and other annunciators
SYNOPSIS
```c
#include <dev/led/led.h>

typedef void led_t(void *priv, int onoff);

struct cdev *
led_create_state(led_t *func, void *priv, char const *name, int state);

struct cdev *
led_create(led_t *func, void *priv, char const *name);

void
led_destroy(struct cdev *);
```
DESCRIPTION
The led driver provides generic support for handling LEDs, lamps and other annunciators. The hardware driver must supply a function to turn the annunciator on and off and the device name of the annunciator relative to /dev/led/. The priv argument is passed back to this on/off function and can be used however the hardware driver sees fit.

The lamp can be controlled by opening and writing ASCII strings to the /dev/led/bla device. In the following, we will use this special notation to indicate the resulting output of the annunciator:

	*	The annunciator is on for 1/10th second.
	_	The annunciator is off for 1/10th second.

State can be set directly, and since the change happens immediately, it is possible to flash the annunciator with very short periods and synchronize it with program events. It should be noted that there is a non-trivial overhead, so this may not be usable for benchmarking or measuring short intervals.

	0	Turn the annunciator off immediately.
	1	Turn the annunciator on immediately.

Flashing can be set with a given period. The pattern continues endlessly.

	f	_*
	f1	_*
	f2	__**
	f3	___***
	...
	f9	_________*********

Three high-level commands are available:

	d%d	Numbers. Each digit is blinked out at 1/10th second, zero as ten pulses. Between digits there is a one second pause, and after the last digit a two second pause, after which the sequence is repeated.

	s%s	String. This gives full control over the annunciator. Letters 'A' ... 'J' turn the annunciator on for from 1/10th to one full second. Letters 'a' ... 'j' turn the annunciator off for 1/10th to one full second. Letters 'u' and 'U' turn the annunciator off and on respectively when the next UTC second starts. Unless terminated with a '.', the sequence is immediately repeated.

	m%s	Morse.
		'.' becomes '_*'
		'-' becomes '_***'
		' ' becomes '__'
		'\n' becomes '____'
		The sequence is repeated after a one second pause.
FILES
/dev/led/*
EXAMPLES
A 'd12' flashes the lamp

	*__________*_*______________________________

A 'sAaAbBa' flashes

	*_*__**_

	/usr/games/morse -l "Soekris rocks" > /dev/led/error
SEE ALSO
morse(6)
HISTORY
The led driver first appeared in FreeBSD 5.2.
AUTHORS
This software was written by Poul-Henning Kamp <phk@FreeBSD.org>. This manual page was written by Sergey A. Osokin <osa@FreeBSD.org> and Poul-Henning Kamp <phk@FreeBSD.org>. | http://manpages.ubuntu.com/manpages/precise/man4/led.4freebsd.html | CC-MAIN-2014-52 | refinedweb | 447 | 56.76 |
TypeName: TypeNamesShouldNotMatchNamespaces
CheckId: CA1724
Category: Microsoft.Naming
Breaking Change: Breaking
A type name matches one of the following namespace names in a case-insensitive comparison:
Collections
Forms
System
UI
Type names should not match the names of namespaces defined in the .NET Framework class library. Violating this rule can reduce the usability of the library.
Select a type name that does not match the name of a .NET Framework class library namespace.
For new development, there are no known scenarios where you must exclude a warning from this rule. For shipping libraries, you might have to exclude a warning from this rule.
The four names above are not the only ones that this rule complains about. This violation was raised over my use of "Security" as an object name, saying that it conflicted "in part" with "System.Web.Security". I work in the financial trading industry where a "security" is something that can be traded, and it is something that I have to represent in my object model. There is no synonym that will be widely recognized by others who are using my code; and we don't use the Web namespace at all, so the chance for confusion is minimal. I love FxCop, but once again Microsoft shows its astounding arrogance: "For new development, there are no known scenarios where you must exclude a warning from this rule." Sorry guys, but you can't possibly predict all possible scenarios, and you should stop assuming that you can.
1) The type names under "Cause" should either list all names which the rule might complain about, or it should be changed to say "A type name matches a namespace name from the .NET Framework class library, such as..."
2) Under "When to Exclude Warnings," instead of the first sentence, it should say something like: "Duplicating namespace and object names could cause confusion and reduce the usability of your code. Before excluding a warning from this rule, carefully consider whether there is an alternate name you could use that would be less confusing."
It in fact seems that this rule is triggered whenever a class/type name is the same as the namespace it is contained in. For instance, I encountered this rule creating a type called "Documents" in a Xxx.Yyy.Documents namespace, and from a business point of view, naming the class "Documents" makes perfect sense (especially since this is a WCF service contract, where I want to make sure that the contract names make as much sense to the consumers as possible).
I would argue that the rule can be excluded in this case.
Introduction
Data integration is an essential part of getting the intelligent, data-driven insights that your organization needs to beat your competitors and serve your customers better. In the data integration process, information from multiple sources is brought together under a single “roof” for more accessible analysis and reporting.
Among the actions to perform during the data integration process, you may need to convert data between different formats, such as converting CSV to SQL. In this article, we'll discuss the purpose of doing this conversion and the best tools to convert CSV to SQL.
CSV, SQL, and ETL
The ETL (extract, transform, load) process is the most common way of performing data integration. Data is first extracted from its source, then transformed to comply with standards, and finally loaded into the target location.
During the transformation stage of ETL, you might find yourself converting CSV to SQL in preparation for the load stage.
Many tools output data as comma-separated values (CSV), a simple, standard plaintext tabular format that can be processed quickly. Each line represents a single record; a record consists of the same number of fields or columns. Usually, the delimiter between the fields is a comma (,), a semicolon (;), a space, or a tab.
Database management systems (DBMS) like MySQL, MariaDB, PostgreSQL, and SQLite differ: they store data in a non-plaintext format that is not human-readable. To extract data, statements have to be formulated in Structured Query Language (SQL) and evaluated by the DBMS.
A SQL file contains queries that can modify a relational database or table structure. SQL holds information through the medium SQL statements that direct how to alter and store records within a database.
For more information on our native SQL connectors, visit our Integrations page.
For example, a simple CSV file for storing customers’ names and ZIP codes may look like:
John,Doe,02184
Jane,Roe,49120
Meanwhile, a SQL file contains a database that has been exported as a series of SQL statements. Those statements describe how to reconstruct the database structure and the records inside it.
CSV files are straightforward and human-readable but not designed for optimal efficiency and data analytics. To get the best performance during the data integration process, it’s often necessary to convert CSV into SQL.
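To see the relationship between the two formats concretely, here is a minimal Python sketch (the table and column names are made up for illustration) that turns the CSV records above into rows of a SQL table using only the standard library:

```python
import csv
import io
import sqlite3

# The two CSV records from the example above (no header row).
CSV_DATA = "John,Doe,02184\nJane,Roe,49120\n"

def csv_to_table(csv_text, conn):
    """Create a customers table and insert every CSV record into it."""
    # ZIP codes are stored as TEXT so leading zeros like "02184" survive.
    conn.execute("CREATE TABLE customers (first TEXT, last TEXT, zip TEXT)")
    rows = csv.reader(io.StringIO(csv_text))
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)
    return conn

conn = csv_to_table(CSV_DATA, sqlite3.connect(":memory:"))
print(conn.execute("SELECT first, last FROM customers").fetchall())
```

SQLite is used here only because it ships with Python; the same pattern applies to any DBMS with a DB-API driver.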
Tools to Convert CSV to SQL
There are various tools for converting CSV to SQL, depending on your unique scenario. If you are prototyping and need something immediately, you can use several browser-based tools as a quick hack. See the Simple Solutions section for more information.
And of course, there are secure methods for long term storage, such as using Xplenty's low code solution or utilizing SQL Server Management Studio. For more information on these methods, refer to the Long-term Solution section.
When choosing your method, you need a general awareness of your data. For example, duplicate columns or headers may corrupt the generated SQL. The CSV may also lack headers or type information entirely, leading to similar problems. Some tools can handle these discrepancies and some cannot, so choose wisely.
Simple Solutions for Converting CSV to SQL
You may simply want to convert CSV to SQL by taking a CSV file and creating a database from it.
If you are an Excel wizard and need a quick solution, you can do the following:
- Open the CSV and bring up find and replace (with regular-expression support enabled).
- Find , and replace it with ','.
- Find ^ (start of line) and replace it with insert into myTable values('.
- And lastly, find $ (end of line) and replace it with ');.
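Those three substitutions are also easy to script. A quick Python sketch of the same idea (myTable and the column-free insert template are placeholders):

```python
import re

def csv_line_to_insert(line, table="myTable"):
    """Turn one CSV record into an INSERT statement using the
    three find/replace steps described above."""
    line = line.replace(",", "','")                                # field separators
    line = re.sub(r"^", "insert into %s values('" % table, line)  # line start
    line = re.sub(r"$", "');", line)                               # line end
    return line

print(csv_line_to_insert("John,Doe,02184"))
# → insert into myTable values('John','Doe','02184');
```

Note that this naive substitution breaks on fields that themselves contain commas or quotes, which is exactly why a real CSV parser is preferable for anything beyond a quick hack.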
Another quick manual trick is to load the CSV directly from the MySQL console:

load data infile 'c:/temp.csv' into table tablename fields terminated by ',';

See the official MySQL LOAD DATA documentation for more information.
However, it much wiser to use a ready-to-go tool. Several websites claim to convert CSV into SQL databases, including:
Some of these tools have more settings and customizability than others, so experiment with different ones to see which results you prefer.
The websites listed above may work for simple conversions, but they may not have the optimal performance for larger CSV files. Besides, it’s not wise to use these websites with CSV files that contain sensitive or confidential information.
For instances where performance or privacy is a concern, it's better to use software tools that run on your own computer. Csvsql is an example of a command-line tool that can generate SQL statements from a CSV file. Converting CSV to SQL is also possible in the phpMyAdmin web application. Our favorite is Data Transformer: it keeps your data on your machine and offline, unlike most other conversion programs and websites, which send your information over the public Internet. It can create SQL scripts from CSV, JSON, XML, or YAML.
There are certain situations, however, where these more straightforward tools aren't the best option. If you plan to convert from CSV to SQL repeatedly, have a lot of CSV files to convert, or have other file formats to convert as well, it's a good idea to invest in dedicated data integration software that can perform the conversion for you efficiently.
Long-term Solutions for Converting CSV to SQL
Several robust tools can perform this job as part of a larger automated data pipeline in drag-and-drop fashion, letting you compose a workflow from many small actions. Alternatively, it is possible to import a CSV directly into a SQL server using a GUI or scripting methods.
Data integration software can streamline the data integration process and handle various file formats, including SQL and CSV. Solutions such as Xplenty make data integration as quick and painless for the end-user as possible.
If you're doing something more involved with your CSV to SQL conversion, you can use Xplenty's services to ingest data in CSV format, transform it to SQL, and then store the result in a data warehouse. Take a look at this article to see how easy it is to get started with Xplenty converting source data to the desired target format, including CSV to SQL.
Import a CSV File directly into SQL Server
Before executing SQL queries on CSV files, you need to convert the CSV files into database tables. There are numerous ways to do this, e.g., creating a table by hand and copying all of the data from the CSV file into it (though this is time-consuming and does not scale to large datasets). The best way to import a CSV-formatted file into your database is to use SQL Server Management Studio.
- Step one is to create a table in your database to receive the imported CSV data. Log in to the database using SQL Server Management Studio.
- Right-click on the database and navigate to Tasks -> Import Data.
- Proceed by clicking the Next > button.
- Select Flat File Source for the data source and proceed to the Browse button to select the CSV file - configure the data import continuing with the Next > button.
- Select the correct database provider for the destination (e.g., for SQL Server, you can use the latest driver).
- Input the Server name and tick Use SQL Server Authentication
- Input the Password, Username, and Database, then click the Next > button.
- Within the Select Source Tables and Views window, it is possible to Edit Mappings before clicking the Next > button.
- Tick Run immediately and click the Next > button.
- And finally, click the Finish button to run the package.
You can now execute SQL queries on the tables generated from the original CSV file.
Data Wrangling with Python
If you are comfortable with Python scripting, pandas and SQLAlchemy will convert a CSV file to a SQL server table in a jiffy.
import pandas as pd
from sqlalchemy import create_engine

# set up the database connection (with username/password if needed);
# the pyodbc driver must be installed for the mssql+pyodbc dialect
engine = create_engine('mssql+pyodbc://username:password@mydsn')

# read the CSV data into a DataFrame with pandas;
# datatypes are inferred, but you can specify them with the `dtype` parameter
df = pd.read_csv(r'your_data.csv')

# write to a SQL table; pandas will use the DataFrame's column names and dtypes
df.to_sql('table_name', engine, index=True, index_label='id')

# add the 'dtype' parameter to specify datatypes if needed, e.g.:
# df.to_sql('table_name', engine, dtype={'column1': VARCHAR(255), 'column2': DateTime})
Conclusion
No matter which solution you choose, converting from CSV to SQL is an essential part of the data integration process. There are various options available for this conversion, depending on the exact use case.
How Xplenty Can Help
Want to learn more about the ETL process? Check out this blog post in which we go into more detail. If you’re going to invest in a robust, feature-rich solution for data integration (including capabilities for converting CSV files to SQL), Xplenty's transform stage of ETL/ELT can easily convert CSV to SQL, so you don't have to worry about the nitty-gritty. Get in touch with our team of data integration experts for a demo and risk-free trial.
Hi,
can someone show me a little program using mixed language, C and Assembly AT&T syntax.
I can't understand how it works :( .
Thanks
I suppose the first question is going to be "what's wrong with the 38,756 examples you can find on the web?"
But generally this is probably going to be rather system-specific, but if you're using gcc why not look at the HOWTO or the gcc manual?
Hey,
I will post some code I have.
Code:
// mlpc1.c
// C language source file for mixed language programming example
#include <stdio.h>
void UpperCase(char *Str);
int main() {
char UserString[20];
fputs("Enter a string: ",stdout);
fgets(UserString,19,stdin);
fputs("\nYou entered: ",stdout);
fputs(UserString,stdout);
fputs("\nAfter call to UpperCase this becomes: ",stdout);
UpperCase(UserString);
fputs(UserString,stdout);
fputs("\n",stdout);
}
I don't get how it works...
Code:
# mpla1.s
# assembly language source file for mixed language programming example
.data
.text
.global UpperCase
UpperCase:
push %ebp
mov %esp,%ebp
push %esi
push %eax
mov 8(%ebp),%esi # make esi point to the string
UCLoop:
movb (%esi),%al
cmp $0,%al
je UCExit
andb $0xdf,%al
movb %al,(%esi)
inc %esi
jmp UCLoop
UCExit:
pop %eax
pop %esi
pop %ebp
ret
.end
So what is difficult about this? This is exactly the same as you writing a C program in two .c files, except you wrote one of your .c files in assembler.
In the UpperCase function, I don't get why the registers %esi and %eax were pushed.
I have just written a simple function to add 2 numbers. And didn't have to push %eax. I could use them directly since they are "global", right?
So when do we push registers?
I know push means pushing a variable onto the stack, but I don't really get pop.
When you pop something off the stack, 4 bytes are added back to esp.
but that content you popped off, is it saved somewhere? What is the difference between it and ret?
Pop gets the value from the stack and places it back in the register.
Of course you have to be careful with this... if you push A B C you need to pop C B A to restore the registers correctly. Also pushing something you do not later pop will cause a misalignment between stack and data...
So basically, if the data we have in the register is to be used later, we push it first.
Ok, I will go try and do some experimenting on this.
Thanks.
the parameters are popped off in the order in which they are pushed?
One thing that is confusing me, when do we push to the stack?
Before the function is called, or after?
Like in UpperCase, ebp is pushed to the stack. Does that make it a local?
That page was linked to before. Have you ever tried actually reading it? (Especially the part where they tell you why you have to push ebp, and why you should push any registers you intend to use in the function.) Start at "calling a __cdecl function".
read it.
cool thanks. Hopefully, I won't be back ;) (doubt it)
em, so
andb $0xdf, %cl
ANDs 223 (0xdf) with the lowercase char.
How can we know what figure to use? Because if this was in the exam I would have failed it instantly. I just played with anding, and it does indeed convert it to upper.
if it was upper to lower, how would i do it? What would be my mask?
Is there an easy way of figuring it out?
Thanks
Code:
Uppercase E = 69 = 0b01000101
Lowercase e = 101 = 0b01100101
so i have to or it by 0x20
E | 0x20..
that wouldn't work for everything :(
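For what it's worth, the same two masks written as C helpers (ASCII assumed; as noted above, neither mask does anything sensible for non-letter characters):

```c
/* ASCII upper- and lowercase letters differ only in bit 5 (0x20):
 *   'E' = 0x45 = 0b01000101
 *   'e' = 0x65 = 0b01100101
 * AND with 0xDF clears the bit (to uppercase),
 * OR with 0x20 sets it (to lowercase). */
char to_upper_ascii(char c) { return (char)(c & 0xDF); }
char to_lower_ascii(char c) { return (char)(c | 0x20); }
```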
Timeline
07/27/11:
- 23:45 Changeset [81248] by
- python/py2[67]-asn1 - update to 0.0.13; indicate they are noarch
- 23:12 Changeset [81247] by
- glib2, glib2-devel: ensure we don't try to use dtrace on Mac OS X; see …
- 22:57 Ticket #25979 (No port of pip or virtualenv for Python 2.7) closed by
- fixed: r71269, r74540, r75136, r75141
- 22:50 Changeset [81246] by
- mpab: handle 'all' in do_status.sh
- 22:46 Changeset [81245] by
- mpab: handle 'all' in gather_archives.sh
- 22:38 Ticket #30423 (swi-prolog: build fails on OS X Lion) created by
- Hi, trying to install swi-prolog on OS X Lion fails for me: […]
- 22:02 Ticket #30422 (nurses doesn't build) closed by
- worksforme
- 20:56 Changeset [81244] by
- kdelibs4: update to 4.7.0
- 20:35 Changeset [81243] by
- texlive-1.0: remove use of Tcl 8.5 feature because it's not available on …
- 20:04 Ticket #25975 (stklos-0.98 fails to build due to _OPEN_SOURCE flag) closed by
- fixed: Looks like this should have been fixed with the update to 1.0 (r70692); …
- 19:53 Ticket #25957 (GCAM: GNU Computer Aided Manufacturing) closed by
- fixed: Committed in r81242 with these changes: * removed configure.args because …
- 19:53 Changeset [81242] by
- New port: gcam (#25957)
- 19:48 Ticket #30422 (nurses doesn't build) created by
- […]
- 19:41 Changeset [81241] by
- oxygen-icons: update to 4.7.0
- 19:37 Ticket #25956 (xfig 3.2.5 crashes when accessing the menu multiple times) closed by
- worksforme
- 19:37 Changeset [81240] by
- p5-crypt-gcrypt: update to 1.25
- 18:55 Changeset [81239] by
- ntfs-3g: don't install mkfs.ntfs symlink into /sbin; see #30410
- 18:45 Changeset [81238] by
- texlive-bin: in kpathsea/paths.h, it looks like we don't need to set …
- 18:21 Changeset [81237] by
- glib2: correct license
- 17:50 Changeset [81236] by
- mpab: better error handling in subports.tcl
- 17:43 Changeset [81235] by
- buildbot: include subports of changed ports
- 17:34 Changeset [81234] by
- accept --subports for port info
- 17:33 Changeset [81233] by
- mpab: add script to find subports
- 17:23 Ticket #30421 (py27-matplotlib-basemap @1.0.1_0, python import error: obsolete library ...) created by
- I installed basemap back in June but recently upgraded a number of …
- 16:12 Ticket #30107 (dbus: org.freedesktop.dbus-session.plist already exists and does not ...) closed by
- worksforme: No response; closing.
- 16:03 Ticket #29820 (offlineimap: allow using python27) closed by
- fixed: offlineimap now uses python27 as of r81232.
- 16:03 Ticket #30146 (offlineimap @6.2.0 Update to 6.3.3) closed by
- fixed: Updated to 6.3.3 and related updates in r81232.
- 16:03 Changeset [81232] by
- offlineimap: update to 6.3.3, fix homepage and master_sites (#30146); use …
- 15:51 Changeset [81231] by
- mpab: fix recognition of 'all'
- 15:47 Changeset [81230] by
- offlineimap: update livecheck; see #30146
- 15:43 Ticket #30418 (git-core: Unable to verify file checksums) closed by
- invalid
- 15:31 Ticket #30417 (octave-image won't build.. png...) closed by
- duplicate: Duplicate of #29084.
- 15:30 Ticket #30420 (pdfjam: update to 2.08) created by
- pdfjam should probably be updated to 2.08.
- 14:54 Ticket #30294 (building gmp-5.0.2 fails for port 2.0.0 for Mac OS X 10.6.8) closed by
- fixed: No news is good news I guess.
- 14:53 Ticket #30311 (After selfupdate to ports 2.0.0 I can no longer uninstall (unused) ports) closed by
- worksforme: OK, so since the rest of this is turning into support questions and not a …
- 14:27 Changeset [81229] by
- Use a pure ChangeFilter approach to scheduling between the two builders. …
- 14:17 Changeset [81228] by
- mpab: build all ports if portlist is 'all'
- 13:38 Changeset [81227] by
- skrooge: update to version 0.9.96
- 13:30 Ticket #30419 (Groovy fails to build 10.7 Lion Macports 2 17 missing artifacts) created by
- Might be related to #30305 - removing, modifying, recreating .m2 didn't …
- 11:23 Changeset [81226] by
- Fix repo path we get from svn server
- 11:22 Ticket #30318 (Failed to install xorg-libXext) closed by
- fixed: Ok, buggy autotools... r81225
- 11:22 Changeset [81225] by
- xorg-libXext: autoreconf to avoid bug #30318
- 10:53 Ticket #30418 (git-core: Unable to verify file checksums) created by
- […]
- 10:18 Ticket #30417 (octave-image won't build.. png...) created by
- fresh macports installation on my computer, so no stale libraries […]
- 09:53 Ticket #30416 (sox: build failure when ffmpeg-devel is installed) created by
- When trying to install sox through macports, I got an error... I don't …
- 09:49 Ticket #30415 (mplayer-devel segmentation fault when scaling) created by
- I've installed Lion a couple of days ago, then port-uninstalled …
- 08:29 Changeset [81224] by
- update PySide and related packages to latest version
- 05:41 Ticket #30414 (ppl, cloog: error: C compiler cannot create executables) created by
- […]
- 05:39 Changeset [81223] by
- FScript: add livecheck
- 05:18 Ticket #30413 (glib2 build tries to use dtrace and fails) created by
- It appears that dtrace doesn't support -G. I am running Lion 10.7. xCode …
- 04:22 Ticket #30407 (Failed to install gtk2) closed by
- worksforme: Replying to fleason@…: > There does not seem to be a way to …
- 04:19 Ticket #30408 (Error when building mono for Lion 11A511) closed by
- duplicate: Duplicate of #30388.
- 04:00 Ticket #30412 (bacula @5.0.3 is broken with macports 2.0) created by
- Staging bacula into destroot breaks with […] The same problem applies …
- 03:37 Ticket #30411 (atlas fails to configure) created by
- Atlas fails to configure (let alone compile). MacPorts 2.0, Lion, MBP …
- 03:17 Ticket #30410 (ntfs-3g violating /sbin) created by
- I just upgraded ntfs-3g to the latest and I got the following waring: …
- 03:12 Ticket #30409 (dbus build fails on Lion) created by
- Installed Lion Installed Xcode and Java Installed …
- 02:43 Ticket #30408 (Error when building mono for Lion 11A511) created by
- See log for details
- 02:05 Ticket #30389 (pkgconfig @0.26_0 pkg-config broken (needs _iconv, has _libiconv)) closed by
- invalid: Remove DYLD_LIBRARY_PATH from your environment. What it does is tells …
- 00:15 Ticket #30407 (Failed to install gtk2) created by
- Installed Lion Installed Xcode and Java Installed …
07/26/11:
- 23:18 Ticket #30406 (arb: ld: in WETC/WETC.a, section not found for address 0x000002CD) created by
- I'm not able to build arb on my system. It fails with: […] I …
- 23:13 Ticket #30342 (Update ARB for Lion (OS X 10.7) and Xcode 4) closed by
- fixed: Replying to matt.cottrell@…: > ARB continues to build for me …
- 23:11 Changeset [81222] by
- arb: fix build for Lion / clang; see #30342
- 23:00 Changeset [81221] by
- ffmpeg-devel: Bump to today's git
- 22:57 Ticket #30386 (Can not install cairo on Lion 11A511) closed by
- duplicate: We already fixed this issue in #30135. Please "sudo port selfupdate" and …
- 22:48 Changeset [81220] by
- xorg-server-devel: Move to git fetch and update to current upstream …
- 22:02 Ticket #25947 (vte @0.25.1_0 bug in cursor handling) closed by
- fixed: Upstream bug says this was fixed in 0.24.1.
- 21:56 Ticket #25916 (NEW: ttf-arphic-uming) closed by
- fixed: Committed, r81219. Similar changes to ttf-arphic-ukai, plus: * used …
- 21:55 Changeset [81219] by
- New port: ttf-arphic-uming (#25916)
- 20:38 Ticket #30256 (pwman-0.4.4 new version) closed by
- fixed: Fixed in r81218.
- 20:37 Changeset [81218] by
- security/pwman - Update to 0.4.4, add license, and fix livecheck. …
- 20:32 Changeset [81217] by
- python/py27-tempita - Update to 0.5.1 and fix livecheck.regex.
- 20:19 Changeset [81216] by
- python/py27-markupsafe - Update to 0.15 and fix livecheck.regex.
- 20:15 Changeset [81215] by
- python/py26-webtest - Fix livecheck.regex.
- 20:14 Changeset [81214] by
- python/py26-weberror - Fix livecheck.regex.
- 20:10 Changeset [81213] by
- gtk-canvas: use port:-style dependencies and use giflib instead of …
- 20:09 Changeset [81212] by
- gtk-canvas: fix patch location
- 20:05 Changeset [81211] by
- python/py26-tempita - Update to 0.5.1 and fix livecheck.regex.
- 20:02 Changeset [81210] by
- gtk-canvas: set build.dir, instead of setting worksrcdir and then undoing …
- 20:01 Changeset [81209] by
- driftnet: do not hardcode version
- 20:00 Changeset [81208] by
- python/py26-scrapy - Update to 0.12.0.2543 and fix livecheck.regex.
- 19:54 Changeset [81207] by
- gtk-canvas: disable universal variant
- 19:54 Changeset [81206] by
- python/py26-scgi - Fix livecheck.regex.
- 19:53 Ticket #25915 (NEW: ttf-arphic-ukai) closed by
- fixed: Committed in r81205 with changes: * removed revision line, default …
- 19:53 Changeset [81205] by
- New port: ttf-arphic-ukai (#25915)
- 19:52 Changeset [81204] by
- python/py26-repoze.who.plugins.sa - Update to 1.0 and fix livecheck.regex.
- 19:48 Changeset [81203] by
- driftnet: license
- 19:44 Changeset [81202] by
- python/py26-repoze.who-testutil - Fix livecheck.regex.
- 19:44 Changeset [81201] by
- driftnet: restrict to PowerPC
- 19:42 Changeset [81200] by
- python/py26-repoze.who-friendlyform - Fix livecheck.regex.
- 19:40 Changeset [81199] by
- python/py26-repoze.what.plugins.sql - Fix livecheck.regex.
- 19:39 Changeset [81198] by
- python/py26-repoze.what-quickstart - Fix livecheck.regex.
- 19:38 Changeset [81197] by
- python/py26-repoze.what-pylons - Fix livecheck.regex.
- 19:36 Changeset [81196] by
- python/py26-recaptcha-client - Fix livecheck.regex.
- 19:33 Changeset [81195] by
- python/py26-markupsafe - Update to 0.15 and fix livecheck.regex.
- 19:29 Changeset [81194] by
- python/py26-formencode - Fix livecheck.regex.
- 19:26 Changeset [81193] by
- driftnet: clean up whitespace, add modeline
- 19:26 Changeset [81192] by
- python/py26-formalchemy - Update to 1.4.1 and fix livecheck.regex.
- 19:23 Changeset [81191] by
- driftnet: clean up build and destroot
- 19:20 Changeset [81190] by
- driftnet: add mirror; support build_arch; add universal variant
- 19:10 Changeset [81189] by
- php5-mongo: update to 1.2.2
- 19:09 Changeset [81188] by
- dokuwiki: update to 2011-05-25a, indicate noarch, indicate violate_mtree, …
- 19:03 Ticket #30396 (python25, python26 fail to build on lion with Xcode 4.2) closed by
- duplicate: #29771
- 18:58 Changeset [81187] by
- Allow php5-devel to satisfy php5 dependency
- 18:43 Ticket #30404 (perl5.8: build errors on Lion) closed by
- duplicate: This is the same error as in #30032. Selfupdate and try again.
- 18:40 Ticket #30403 (openvpn2 @2.2.0 Lion build failure) closed by
- duplicate: #30253
- 18:24 Ticket #30356 (Port 2.0.0 port upgrade fails) closed by
- worksforme
- 18:00 Changeset [81186] by
- buildmaster: avoid possibility of hitting ARG_MAX when writing portlist …
- 17:37 Changeset [81185] by
- Fix bug where portlist had categories after schedulers were reworked
- 17:36 Changeset [81184] by
- Support for changes being sent from svn server plus move buildbot behind …
- 17:02 Ticket #27119 (arb: ARBHOME environment variable incorrectly set) closed by
- worksforme
- 16:52 Ticket #29014 (failure to install/build ARB software) closed by
- duplicate: Duplicate of #29223.
- 16:07 Ticket #26115 (Want support for arch=x86_64 in macfuse) closed by
- invalid: I'm closing this because it's now pretty clear there won't be any further …
- 15:38 Ticket #30405 (cairo: LLVM ERROR: Cannot yet select: 0x103896c10) created by
- In the process of installing OpenNI, I needed to install MacPorts and then …
- 15:34 Ticket #30375 (Unable to install in Lion) closed by
- invalid
- 15:14 Ticket #30369 (can't build python26 on Mac OS X Lion) closed by
- invalid
- 15:06 Changeset [81183] by
- kmymoney4-devel: update to svn revision 1243264
- 15:05 Ticket #30404 (perl5.8: build errors on Lion) created by
- Using PortAuthority to geany die results perl-5.8.9 to fail build session …
- 14:54 Ticket #30403 (openvpn2 @2.2.0 Lion build failure) created by
- Lion no longer defines SOL_IP for {get,set}sockopt(), which breaks the …
- 14:35 Ticket #30402 (qt4-mac fails to build on Lion) closed by
- duplicate: Duplicate: #29141
- 14:21 Changeset [81182] by
- ntfsprogs: Made into stub port. replaced_by ntfs-3g.
- 14:13 Ticket #30400 (qt3 @3.3.8 build failure) closed by
- wontfix: Also, this is not something I think we should fix. Upstream doesn't even …
- 14:13 Changeset [81181] by
- ntfs-3g: Update to version 2011.4.12. Adds ntfsprogs.
- 14:02 Ticket #30402 (qt4-mac fails to build on Lion) created by
- Building fails since MacOSX version isn't supported: […]
- 13:23 Ticket #30047 (sbcl build fails on 10.7 (no bootstrap defined)) closed by
- fixed: Fix committed in r81180.
- 13:20 Ticket #30399 (sbcl: fix build on Lion) closed by
- fixed: Committed my own local version in r81180. I was waiting for the +pdf …
- 13:15 Changeset [81180] by
- sbcl: Update build clause for darwin 11 Lion OS X; see #30399
- 13:14 Changeset [81179] by
- parallel: update to 20110722
- 13:09 Ticket #30393 (libiconv doesn't built) closed by
- duplicate: Duplicate of #29933, #30308, etc.
- 13:07 Ticket #30401 (parallel @20110602 build failure) created by
- I get the following error when building parallels immediately after …
- 12:59 Ticket #30400 (qt3 @3.3.8 build failure) created by
- Trying to install gnucash +gtkhtml fails when building qt3 with an error …
- 12:45 MacPortsDevelopers edited by
- Alphabetize (diff)
- 12:28 Ticket #30399 (sbcl: fix build on Lion) created by
- Could you please add a clause for Darwin 11 in the port file? I added the …
- 12:23 Changeset [81178] by
- nepomuk: remove port, stubbed 16 months ago
- 10:18 Ticket #30144 (kdepimlibs4: configure fails to find Nepomuk) closed by
- fixed: Rebuilding all the kde ports worked for me. Specifically: […] Thanks!
- 09:45 Changeset [81177] by
- contao: update my mail address
- 09:29 MacPortsDevelopers edited by
- add myself (diff)
- 09:27 Ticket #30311 (After selfupdate to ports 2.0.0 I can no longer uninstall (unused) ports) reopened by
- I have the very same problem, but not with just a port. With all the ports …
- 09:13 Ticket #30398 (Deluge v1.3.2 > v1.3.3) created by
- Some interesting bugfixes. Just change the version in portfile, here are …
- 09:12 Ticket #30390 (gnome-python-desktop build failure) closed by
- duplicate: #30391
- 09:00 Ticket #30397 (qt3 @3.3.8_11 (upgrading from @3.3.8_10) build fails on 10.5.8, ranlib ...) created by
- Upgrading qt3 from 3.3.8_10 to _11 fails with: […] with the …
- 08:20 Ticket #30396 (python25, python26 fail to build on lion with Xcode 4.2) created by
- […]
- 08:17 Ticket #30395 (libtool requires me to remove /usr/bin/gfortran) created by
- libtool does not build if I have a /usr/bin/gfortran executable. Renaming …
- 08:08 Ticket #30144 (kdepimlibs4: configure fails to find Nepomuk) reopened by
- 08:03 Ticket #30394 (libplist build error: #error "LONG_BIT definition appears wrong for ...) created by
- - Mac OS X 10.6.8 - Xcode 3.2.6 I see the following error trying to …
- 08:01 Ticket #30393 (libiconv doesn't built) created by
- […]
- 07:52 Ticket #30392 (splash: gcc44 should not be default variant if user chooses other gcc ...) created by
- I'm running the latest macports from trunk. I see the following strange …
- 07:43 Changeset [81176] by
- rev-upgrade: Fixed path to tests, added TODO and FIXME notes
- 07:43 Ticket #30391 (gnome-python-desktop build failure) created by
- - Mac OS X 10.6.8 - Xcode 3.2.6 I'm seeing the following build error: …
- 07:42 Ticket #30390 (gnome-python-desktop build failure) created by
- - Mac OS X 10.6.8 - Xcode 3.2.6 I'm seeing the following build error: …
- 06:41 Ticket #30389 (pkgconfig @0.26_0 pkg-config broken (needs _iconv, has _libiconv)) created by
- Currently my pkg-config is broken. I did the following: […] When I …
- 05:23 Ticket #30370 (libpixman erroneous execution with llvm-gcc-4.2) reopened by
- This fix breaks the universal build on Snow Leopard. (Log attached.) …
- 04:46 Changeset [81175] by
- merge r81171 from trunk: add lzma_path and xz_path
- 04:43 Ticket #30369 (can't build python26 on Mac OS X Lion) reopened by
- I installed Xcode 4.1, but still can't compile python26 I moved previous …
- 04:04 Changeset [81174] by
- py27-kombu: update to version 1.2.0
- 04:02 Changeset [81173] by
- py27-redis: update to version 2.4.9
- 04:01 Changeset [81172] by
- npm: update to version 1.0.22
- 03:51 Changeset [81171] by
- add lzma_path and xz_path to configure
- 03:15 Ticket #30388 (mono fails to build on 10.7) created by
- Upgraded to MacPorts 2.0.0 today and tried to install mono. The build …
- 03:08 Ticket #30387 (gtk2 +quartz+no_x11: Undefined symbols _objc_msgSend_fixup) created by
- port install gtk2 +quartz+no_x11 Crashs log : […]
- 02:50 Ticket #30386 (Can not install cairo on Lion 11A511) created by
- I was installing mono on the first place, but the compilation failed at …
- 02:20 Changeset [81170] by
- php5-file-iterator: fix livecheck
- 02:12 Ticket #30313 (rpm 4.4.9 shell command failed. "_elf_strptr", referenced from: ... ld: ...) closed by
- worksforme: Should work now, without libelf installed.
- 02:09 Ticket #30384 (rpm @4.4.9 build fails on Lion) closed by
- fixed: Fixed, r81169.
- 02:08 Changeset [81169] by
- fix db build error on lion (#30384), maintainer, livecheck
- 02:05 MacPortsDevelopers edited by
- (diff)
- 02:04 Ticket #30032 (perl @5.8.9 build failure on Lion - toke.c fails to compile) closed by
- fixed: Fixed, r81168.
- 02:03 Changeset [81168] by
- perl5.8: incorporate nm fix from perl5.12, see r76831
- 02:02 Ticket #30385 (new port: R-framework) created by
- The R.framework port uses the llvm-gcc42 compilers, installs itself in …
- 01:59 Changeset [81167] by
- textproc/pdfgrep: Add livecheck
- 01:56 Changeset [81166] by
- textproc/pdfgrep: New port
- 01:37 Changeset [81165] by
- fuse/gmailfs: Remove port as it no longer works with the current gmail …
07/25/11:
- 23:28 Changeset [81164] by
- php5-unit-selenium: update to 1.0.3
- 23:27 Changeset [81163] by
- php5-unit-mock-objects: update to 1.0.9
- 23:23 Changeset [81162] by
- php5-unit: update to 3.5.14
- 23:23 Changeset [81161] by
- php5-oauth: update to 1.2.2
- 23:21 Changeset [81160] by
- php5-gearman: update to 0.8.0
- 23:17 Changeset [81159] by
- ming: use unversioned docdir
- 23:00 Changeset [81158] by
- gettext: use system versions of awk, grep, and sed, even if MacPorts …
- 22:59 Changeset [81157] by
- detex: use an unversioned docdir
- 22:56 Changeset [81156] by
- libtorrent-devel: use an unversioned docdir
- 22:46 Ticket #30308 (libiconv pulls in gawk) closed by
- fixed: Fixed libiconv in r81155.
- 22:46 Changeset [81155] by
- libiconv: use system versions of awk, grep, and sed, even if MacPorts …
- 22:12 Changeset [81154] by
- mesa: Bump to 7.11-rc3
- 22:10 Ticket #30381 (Upgrade md5deep to version 3.9.2) closed by
- fixed: Thanks, committed in r81153.
- 22:10 Changeset [81153] by
- md5deep: maintainer update to v3.9.2 (#30381)
- 22:06 Ticket #30384 (rpm @4.4.9 build fails on Lion) created by
- On Mac OS 10.7 with Xcode 4.1 Build 4B110 rpm (with minimal dependencies) …
- 21:51 Changeset [81152] by
- gmailfs: remove direct dependency on macfuse since this port only uses the …
- 21:45 Changeset [81151] by
- various fuse filesystems: change macfuse dependency to one that can be …
- 21:38 Ticket #30382 (libiconv fails during configure) closed by
- duplicate: Duplicate of #30308.
- 20:56 Ticket #30383 (fuse filesystems should support fuse4x as dependency) created by
- I just added ports for Fuse4X, a fork of MacFUSE that provides 64-bit …
- 20:27 Ticket #30382 (libiconv fails during configure) created by
- libiconv @1.13.1 fails during configure because gawk requires version …
- 20:12 Ticket #30381 (Upgrade md5deep to version 3.9.2) created by
- Attached please find a patch file for md5deep to version 3.9.2.
- 19:35 Changeset [81150] by
- sshfs: fuse4x compatibility * rewrite macfuse dependency to allow fuse4x …
- 19:26 Ticket #29917 (Fuse4X: add port) closed by
- fixed: OK, committed in r81149. Still need to update the various filesystem …
- 19:20 Changeset [81149] by
- fuse4x, fuse4x-kext, fuse4x-framework: new ports Fuse4X is a fork of …
- 19:11 Changeset [81148] by
- fuse4x: miscellaneous nits * remove debug cflags * don't pass -v to …
- 19:03 Changeset [81147] by
- fuse4x-kext: whitespace; remove commented-out line
- 19:00 Changeset [81146] by
- move setting of startpwd outside the try block in …
- 18:57 Ticket #30380 (ManOpen @2.5.1 build failure on Lion) created by
- Trying to build ManOpen on Lion (10.7), using Xcode 4.1 Build 4B110 fails …
- 18:56 Changeset [81145] by
- procmail: add more checksum types
- 18:55 Changeset [81144] by
- procmail: support build_arch; fix universal variant
- 18:50 Changeset [81143] by
- procmail: add modeline; whitespace and formatting changes
- 18:47 Changeset [81142] by
- fuse4x*: bump version to 0.8.8 and use the release tarball instead of an …
- 18:37 Changeset [81141] by
- Share dist_subdir with non-devel ports
- 18:36 Changeset [81140] by
- Indicate conflicts between -devel and non-devel ports
- 18:35 Ticket #30177 (libtorrent and rtorrent update) closed by
- fixed: Updated libtorrent and rtorrent to the newer versions in r81139.
- 18:35 Changeset [81139] by
- rtorrent: update to 0.8.9 libtorrent: update to 0.12.9 and use an …
- 18:32 Changeset [81138] by
- rtorrent: no longer disable universal in xmlrpc variant because xmlrpc-c …
- 18:05 Changeset [81137] by
- Add modeline, adjust whitespace / formatting; see #30177
- 18:01 Changeset [81136] by
- xmlrpc-c: update to 1.16.36; support build_arch; add universal variant
- 17:29 Ticket #25910 (docbook-xsl-ns) closed by
- fixed: r81135
- 17:29 Changeset [81135] by
- New port: docbook-xsl-ns, namespaced DocBook XSL stylesheets (#25910)
- 17:20 Ticket #30379 (Error while updating in OSX Lion) closed by
- invalid: Looks like you didn't follow the Migration procedure.
- 17:18 Changeset [81134] by
- py26-jinja2, py27-jinja2: update to 2.6
- 17:13 Changeset [81133] by
- py25-macholib, py26-macholib, py27-macholib: update to 1.4.2
- 17:09 Changeset [81132] by
- xmlrpc-c: fix livecheck
- 17:08 Changeset [81131] by
- py25-modulegraph, py25-modulegraph, py27-modulegraph: update to 0.9.1
- 17:07 Changeset [81130] by
- xmlrpc-c: update master_sites to avoid redirects
- 17:06 Changeset [81129] by
- xmlrpc-c: whitespace changes / reformatting only
- 16:53 Changeset [81128] by
- Add license (#30177)
- 16:49 Ticket #30379 (Error while updating in OSX Lion) created by
- I updated the MacPort 2.0.0 after updating XCode, as suggested. While …
- 16:43 Changeset [81127] by
- phpmyadmin: update to 3.4.3.2
- 16:42 Ticket #30378 (Fix for cvs to build on Lion) created by
- This patch renames CVS' getline function to _getline to avoid conflict.
- 16:42 Changeset [81126] by
- faust: update to 0.9.43; support build_arch; add universal variant
- 16:41 Changeset [81125] by
- sleepwatcher: update to 2.1.2 (adds Lion support)
- 16:37 Ticket #30377 (boost-1.47 fails to build when go is installed) created by
- Macports was selfupdate to 2.0.0. Before selfupdate, the port-tree was …
- 16:32 Changeset [81124] by
- www: update Xcode/X11 info
- 16:24 Changeset [81123] by
- guide: update xcode info
- 16:11 Ticket #30363 (Soprano-2.6.52 fails to find correct raptor header) closed by
- fixed
- 15:55 Changeset [81122] by
- texlive-*: split distfile archives into three files. $distfile-run …
- 15:54 Changeset [81121] by
- www/soprano: Assumed maintainership (open). Add raptor2.h patch. Revbump. …
- 15:49 Ticket #30376 (chapel: compiler, archs) created by
- chapel does not ensure it's UsingTheRightCompiler nor does it respect …
- 15:38 Ticket #30275 (ruby: ruby gem installation causes segfaults) closed by
- fixed: this problem was fixed at r81110. thanks!
- 15:37 Ticket #30191 (kmymoney4 @4.5.96 (kde, finance) Build failed due to namespace and other ...) closed by
- fixed: Heya, herewith I verify that kmymoney4(-devel) again do build with Brad's …
- 15:30 Changeset [81120] by
- lang/ruby19: ruby-1.9.x built with clang or llvm-gcc does not work.
- 15:27 Changeset [81119] by
- update supported OS and Xcode versions
- 15:20 Ticket #30311 (After selfupdate to ports 2.0.0 I can no longer uninstall (unused) ports) closed by
- worksforme: The list of port locations comes straight from the registry. If your …
- 15:17 Ticket #30374 (Error updating gmp) closed by
- duplicate: Duplicate of #30294; please search before filing new tickets.
- 15:04 Ticket #30373 (archive installation fails on case-insensitive filesystems) closed by
- wontfix: Replying to ryan@…: > I'm trying out unprivileged port …
- 15:04 Changeset [81118] by
- py26-pysparse: Fix license.
- 15:02 Ticket #30375 (Unable to install in Lion) created by
- […]
- 15:00 Changeset [81117] by
- gmp: probably fix build with Xcode 4.0.x (#30294)
- 14:59 Ticket #30341 (gmp build fails: Assertion failed) closed by
- duplicate: #30294
- 14:51 Ticket #30374 (Error updating gmp) created by
- I updated to macports 2.0.0. At that very same momento gmp was to be …
- 14:47 Ticket #30373 (archive installation fails on case-insensitive filesystems) created by
- I'm trying out unprivileged port installation onto an NFS volume. Some of …
- 14:27 Ticket #30311 (After selfupdate to ports 2.0.0 I can no longer uninstall (unused) ports) reopened by
- I tried again, this time adding a dummy file in the folder …
- 14:24 Ticket #30368 (miriad 4.1.7.20110426 - update to version 4.2.2.20110722) closed by
- fixed: r81116
- 14:24 Changeset [81116] by
- miriad: maintainer update to 4.2.2.20110722; see #30368
- 14:22 Ticket #30369 (can't build python26 on Mac OS X Lion) closed by
- invalid: Xcode 3.2.x is for Snow Leopard. Xcode 4.1+ is for Lion.
- 13:33 Ticket #30372 (port load dbus looking for the wrong file) closed by
- invalid
- 13:33 Ticket #30372 (port load dbus looking for the wrong file) reopened by
-
- 13:32 Ticket #30372 (port load dbus looking for the wrong file) closed by
- fixed
- 13:21 Ticket #30372 (port load dbus looking for the wrong file) created by
- Trying to run 'port load dbus' gives the following error: daedelus:~ …
- 13:04 Ticket #30167 (kdeutils4: kpimutils/email.h: No such file or directory) closed by
- fixed: Fixed r81115
- 13:03 Changeset [81115] by
- kde/kdeutils4: Commit with maintainers permission. Add port:kdepimlibs4 to …
- 12:58 Ticket #30371 (phantomjs @1.0.0 0 Update to 1.2.0) created by
- Submitting a patch to update to version 1.2.0. Adding modeline as …
- 12:36 Ticket #30370 (libpixman erroneous execution with llvm-gcc-4.2) closed by
- fixed: Replying to ejtttje@…: > Please set libpixman's …
- 12:35 Changeset [81114] by
- libpixman, libpixman-devel: use clang instead of llvm-gcc-4.2; apparently …
- 12:35 Ticket #27478 (cairo 1.10.0 make failure on os x 10.4.11) closed by
- worksforme
- 12:32 Ticket #29842 (cairo: static libs fail to build correctly with llvm) closed by
- fixed: Replying to jeremyhu@…: > Leaving open for ryan to revert the …
- 12:31 Changeset [81113] by
- cairo, cairo-devel: no longer remove -flto since changes to muniversal …
- 12:24 Ticket #30139 (redland cannot be built while raptor is active) closed by
- fixed
- 12:21 Ticket #30370 (libpixman erroneous execution with llvm-gcc-4.2) created by
- When executing Cairo with PDF output, the resulting PDFs are always 414 …
- 12:09 Changeset [81112] by
- nodejs: fix typo accidentally introduced in r81111
- 11:52 Changeset [81111] by
- nodejs, nodejs-devel: add 'supported_archs'
- 11:23 Ticket #30369 (can't build python26 on Mac OS X Lion) created by
- […] […]
- 11:19 Changeset [81110] by
- lang/ruby: fix #30275, ruby built with clang or llvm-gcc does not work.
- 11:08 Changeset [81109] by
- sysutils/clamav-server: Add openmaintainer.
- 11:07 Changeset [81108] by
- sysutils/clamav-server: Upgrade to 0.97.2.
- 10:35 Changeset [81107] by
- texlive-bin: update distversion to 20110705; looks like this is the final …
- 10:28 Changeset [81106] by
- py26-pysparse: license.
- 10:25 Changeset [81105] by
- clamav: version bump to 0.97.2
- 10:13 Ticket #30368 (miriad 4.1.7.20110426 - update to version 4.2.2.20110722) created by
- Attached is a patch to update Miriad to upstream 4.2.2.20110722.
- 10:11 Ticket #30339 (tokyocabinet @1.4.47 configuration errors - build failure) closed by
- fixed: Fixed in r81104. PS. There's two bugs: 1. Hardcoded paths in the …
- 10:01 Changeset [81104] by
- tokyocabinet: fix for #30339
- 09:56 Changeset [81103] by
- npm: various improvements and fixes * add 'supported_archs' * add …
- 09:35 Changeset [81102] by
- www/raptor: Assumed maintainership (open).
- 09:34 Changeset [81101] by
- www/raptor2: Assumed maintainership (open).
- 09:33 Changeset [81100] by
- www/rasqal: Assumed maintainership (open).
- 09:32 Changeset [81099] by
- www/redland: Assumed maintainership (open). Remove test for raptor1 …
- 09:30 Ticket #30058 (mesa: extract failed: Cannot hard link) closed by
- fixed: This is a buggy tarball: […] 7.11-rc2 is out and shouldn't have this …
- 09:14 Changeset [81098] by
- py27-pyqwt: license.
- 09:12 Changeset [81097] by
- py26-scikits-umfpack: fix home page.
- 09:09 Changeset [81096] by
- py26-scikits-umfpack: license.
- 09:08 Ticket #30058 (mesa: extract failed: Cannot hard link) reopened by
-
- 09:02 Changeset [81095] by
- py26-pyqwt: license.
- 09:00 Changeset [81094] by
- py27-pmw: license.
- 08:59 Changeset [81093] by
- license.
- 08:59 Changeset [81092] by
- license.
- 08:52 Changeset [81091] by
- muniversal: Try using libtool if lipo fails, lipo can't sew together LLVM …
- 08:12 Ticket #30366 (p5-vcp-autrijus-snapshot is not found.) closed by
- fixed: I had thought that they weren't used any more. I've reverted my previous …
- 08:10 Changeset [81090] by
- revert r80724 (add back the vcp perl modules), fixes #30366, give up …
- 08:01 Ticket #14085 (opendx compilation problem on Leopard and newer) reopened by
- A similar change is needed for Lion. Duplicating the darwin 9 block as …
- 07:51 Changeset [81089] by
- py*-atpy: updated to 0.9.5.3
- 07:50 Changeset [81088] by
- py*-asciitable: updated to 0.7.0.2
- 07:23 Ticket #30367 (boost: checksum mismatch.) created by
- This is what happened, when I tried, as mentioned in the summary, upgrade …
- 06:14 Ticket #30366 (p5-vcp-autrijus-snapshot is not found.) created by
- I want to install svk by using Mac Ports version 2.0.0 on Lion. svk …
- 06:04 Ticket #30365 (Additional option to selfupdate required if major changes take place) created by
- My confusion in #30302 came from the fact that a) I wasn't aware that …
- 05:30 Ticket #30364 (ghc does not compile under lion) created by
- Hi there, I got the following error message when trying to install ghc …
- 05:03 Ticket #30363 (Soprano-2.6.52 fails to find correct raptor header) created by
- Build log attached. OS is latest Snow + all updates.
- 03:31 Ticket #30355 (dnsmasq @2.57 Lion build failure) closed by
- fixed: Patched in r81087.
- 03:30 Changeset [81087] by
- dnsmasq: * fix build for Lion, 30355 * remove nawk dependency (no longer …
- 03:20 Ticket #30362 (texlive-latex-extra @19548: xcomment.sty broken) created by
- […] xcomment.sty is very short: […] as you can see, it really …
- 02:33 Ticket #30361 (smlnj @110.72 Needs Lion update) created by
- From the build log: […] The script only parses uname -r of old OSX …
- 02:01 Ticket #30360 (py26-hgsubversion @1.2.1.e30ff6d5feff install failure) created by
- Hi all, The install script of py26-hgsubversion @1.2.1.e30ff6d5feff …
- 00:54 Ticket #30089 (couchdb: configure fails claiming erlang is missing openssl support) reopened by
- 10.7 is now in public, so what should i do to fix this error?
- 00:46 Ticket #30358 (perl5.8 build fails on Mac OS X Lion) closed by
- duplicate: Duplicate of #30032; please search before filing new tickets.
- 00:42 Changeset [81086] by
- graphviz-devel, graphviz-gui-devel, gvedit-devel: update to …
- 00:24 Changeset [81085] by
- hs-language-c: update to 0.3.2.1; drop maintainership
- 00:10 Ticket #30351 (Cairo+quartz fonts broken under 32bit Lion (no more ATSUI)) closed by
- fixed: Committed for cairo in r81083 and cairo-devel in r81084.
- 00:08 Changeset [81084] by
- cairo-devel: merge r81083 from cairo: allegedly fix build on Lion with …
- 00:05 Ticket #30359 (Lion mp2.0.0 cyrus-sasl2 fails) created by
- With a almost fresh install of the new macports on Lion, cyrus-sasl2 build …
- 00:04 Changeset [81083] by
- cairo: allegedly fix build on Lion with llvm-gcc-4.2 / clang (#29842) and ….
Note: See TracTimeline for information about the timeline view.
This is part of the Ext JS to React blog series. You can review the code from this article on the Ext JS to React Git repo.
The grid is the defining component of the Ext JS framework. When we think of Ext JS, the first image that comes to mind is its grid and for good reason. Ext JS’s grid is unmatched in features and quality. However, depending on the needs of your application, you may consider writing your own grid. You may be surprised at how easy it is with React.
Sample data generator
Ext JS relied heavily on its data package, with its views consuming data through it. While a mature data module is great for complex modeling, for most views we can just furnish the bits that are needed. For this and subsequent grid posts, we will use the following data module:
const companies = [
  'Airconix', 'Qualcore', 'Hivemind', 'Thermolock', 'Sunopia'
];
const firstNames = [
  'Raymond', 'Vernon', 'Dori', 'Jason', 'Rico'
];
const lastNames = [
  'Neal', 'Dunham', 'Seabury', 'Pettey', 'Muldoon'
];

const random = (array) => array[ Math.floor(Math.random() * array.length) ];

const dataSync = ({ num = 50, startRow = 0, total = 50000 } = {}) => {
  const data = [];

  for (let i = 0; i < num; i++) {
    const company = random(companies);
    const first = random(firstNames);
    const last = random(lastNames);

    data.push({
      id: i + startRow,
      name: `${first} ${last}`,
      company,
      email: `${first.toLowerCase()}.${last.toLowerCase()}@${company.toLowerCase()}.com`
    });
  }

  return { data, total };
};

const dataAsync = async ({ delay = 500, num, startRow, total } = {}) => {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve(dataSync({ num, startRow, total }));
    }, delay);
  });
};

export default dataSync;
export { dataAsync };
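Since the generator accepts `num` and `startRow`, it can also serve paged requests. A minimal sketch of that usage, re-declaring a stripped-down `dataSync` so it runs standalone (the real module above also fills in name, company, and email):

```javascript
// Stripped-down re-declaration of dataSync from the module above:
const dataSync = ({ num = 50, startRow = 0, total = 50000 } = {}) => {
  const data = [];
  for (let i = 0; i < num; i++) {
    data.push({ id: i + startRow });
  }
  return { data, total };
};

// Fetch the second "page" of 25 rows:
const { data, total } = dataSync({ num: 25, startRow: 25 });
console.log(data[0].id, data.length, total); // 25 25 50000
```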
Note: In the data module above, you’ll see that there are two export statements. The export default dataSync statement indicates that any module importing from this file will receive the dataSync function as its default import; the imported name can be whatever suits the importing module, as we see in the import getData from './data' example statement below. The export { dataAsync } statement enables another module to request the dataAsync function by name. For more information on the ECMAScript 2015 (ES6) export conventions, check out the named export and default export sections of the MDN export guide.
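A hedged illustration of the difference — since both module files can't live in one snippet, the module's export table is simulated here with a plain object, and the stub values are assumptions:

```javascript
// Simulated export table for './data' (shapes assumed, values stubbed):
const moduleExports = {
  default: function dataSync() { return { data: [], total: 0 }; },
  dataAsync: async function dataAsync() { return { data: [], total: 0 }; }
};

// `import getData from './data'` binds the default export,
// under whatever local name the importer chooses:
const getData = moduleExports.default;

// `import { dataAsync } from './data'` binds the named export by its name:
const { dataAsync } = moduleExports;

console.log(typeof getData, typeof dataAsync); // function function
```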
Lightweight React grid class
We’ve long heard that people love Ext JS’s grid, but sometimes they need a lightweight version. With Ext JS, there simply is no lightweight grid: the grid has to support so many features that there is no agreement on the minimal functionality people would expect. If you want to display data in a tabular format but don’t need all the functionality under the sun, you can easily write a simple React component to be just that:
import React, { Component } from 'react';
import getData from './data';

const { data } = getData();

class Grid extends Component {
  render () {
    const { className } = this.props;

    return (
      <table className={`grid ${className ? className : ''}`}>
        <thead>
          <tr>
            <th>Name</th>
            <th>Company</th>
            <th>Email</th>
          </tr>
        </thead>
        <tbody>
          {
            data.map(item => (
              <tr key={item.id}>
                <td>{item.name}</td>
                <td>{item.company}</td>
                <td>{item.email}</td>
              </tr>
            ))
          }
        </tbody>
      </table>
    );
  }
}

export default Grid;
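The component sets a `grid` class on the table, so a handful of CSS rules is all the styling this grid needs. A minimal sketch — everything here beyond the `.grid` selector is an assumption, so adjust to taste:

```css
.grid {
  border-collapse: collapse; /* merge borders into single row/column lines */
}

.grid th,
.grid td {
  border: 1px solid #d0d0d0; /* row and column lines */
  padding: 4px 8px;
  text-align: left;
}

.grid thead th {
  background-color: #f0f0f0; /* header background color */
}
```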
This component shows our data in three columns, in a table that is as lightweight as you can get. The headers are bolded automatically by the default styling of the <th> nodes. With a few additional simple styles, you can set a background color on the header along with row and column lines. By using a small React component class and a few style rules, you now have a lightweight grid!
Sorting the React grid
Currently, we have a lightweight grid that simply displays our data. If the data arrives unsorted, or you want to allow users to sort a column, we need to add sorting functionality to the grid. To enable sorting, we’ll make a few changes:
onClicklistener to the headers in order to set the sort state when a user clicks on the header. This will also toggle the existing sort state between ascending and descending.
- Display sort status in the headers
- Sort the data before creating the grid rows
The updated grid would then look like:
import React, { Component } from 'react';

class Grid extends Component {
  state = {}

  render () {
    const { className, data } = this.props;
    const { sort } = this.state;

    return (
      <table className={`grid ${className ? className : ''}`}>
        <thead>
          <tr>
            <th onClick={this.handleHeaderClick.bind(this, 'name')}>
              Name{this.getSort('name', sort)}
            </th>
            <th onClick={this.handleHeaderClick.bind(this, 'company')}>
              Company{this.getSort('company', sort)}
            </th>
            <th onClick={this.handleHeaderClick.bind(this, 'email')}>
              Email{this.getSort('email', sort)}
            </th>
          </tr>
        </thead>
        <tbody>
          {
            this.sortData(data, sort).map(item => (
              <tr key={item.id}>
                <td>{item.name}</td>
                <td>{item.company}</td>
                <td>{item.email}</td>
              </tr>
            ))
          }
        </tbody>
      </table>
    );
  }

  getSort (dataIndex, sort) {
    return sort && sort.dataIndex === dataIndex ? ` (${sort.direction})` : null;
  }

  handleHeaderClick = (dataIndex) => {
    const { sort } = this.state;
    const direction = sort && sort.dataIndex === dataIndex
      ? (sort.direction === 'ASC' ? 'DESC' : 'ASC')
      : 'ASC';

    this.setState({
      sort: { dataIndex, direction }
    });
  }

  sortData (data, sort) {
    if (sort) {
      const { dataIndex, direction } = sort;
      const dir = direction === 'ASC' ? 1 : -1;

      return data.slice().sort((A, B) => {
        const a = A[ dataIndex ];
        const b = B[ dataIndex ];

        if (a > b) {
          return 1 * dir;
        }

        if (a < b) {
          return -1 * dir;
        }

        return 0;
      });
    }

    return data;
  }
}

export default Grid;
React grid class instantiated
To instantiate the Grid, we pass the
data prop with the data set from our example above:
import React, { Component } from 'react';
import Grid from './Grid';
import getData from './data';

const { data } = getData();

class App extends Component {
  render() {
    return (
      <Grid data={data} />
    );
  }
}

export default App;
Now we have a very lightweight, sortable grid, all in less than 90 lines of code!
Conclusion
You may have never thought about writing your own grid when coming from Ext JS. If all you need is a lightweight grid with optional sorting, try writing your own with React. While you will likely require a more feature-rich grid selectively throughout your application, it’s a great idea to start with vanilla React. This will help your understanding of how React works versus Ext JS. In the next blog, we will look at a third-party grid component and how to use selection models like you would with Ext JS.
Mitchell Simoens
The new collections
Posted on March 1st, 2001
To me, collection classes are one of the most powerful tools for raw programming. You might have gathered that I’m somewhat disappointed in the collections provided in Java through version 1.1. As a result, it’s a tremendous pleasure to see that collections were given proper attention in Java 1.2, and thoroughly redesigned (by Joshua Bloch at Sun). I consider the new collections to be one of the two major features in Java 1.2 (the other is the Swing library, covered in Chapter 13) because they significantly increase your programming muscle and help bring Java in line with more mature programming systems.
Some of the redesign makes things tighter and more sensible. For example, many names are shorter, cleaner, and easier to understand, as well as to type. Some names are changed to conform to accepted terminology: a particular favorite of mine is “iterator” instead of “enumeration.”
The redesign also fills out the functionality of the collections library. You can now have the behavior of linked lists, queues, and deques (double-ended queues, pronounced “decks”).
The design of a collections library is difficult (true of most library design problems). In C++, the STL covered the bases with many different classes. This was better than what was available prior to the STL (nothing), but it didn’t translate well into Java. The result was a rather confusing morass of classes. On the other extreme, I’ve seen a collections library that consists of a single class, “collection,” which acts like a Vector and a Hashtable at the same time. The designers of the new collections library wanted to strike a balance: the full functionality that you expect from a mature collections library, but easier to learn and use than the STL and other similar collections libraries. The result can seem a bit odd in places. Unlike some of the decisions made in the early Java libraries, these oddities were not accidents, but carefully considered decisions based on tradeoffs in complexity. It might take you a little while to get comfortable with some aspects of the library, but I think you’ll find yourself rapidly acquiring and using these new tools.
The new collections library takes the issue of “holding your objects” and divides it into two distinct concepts:
- Collection: a group of individual elements, often with some rule applied to them. A List must hold the elements in a particular sequence, and a Set cannot have any duplicate elements. (A bag, which is not implemented in the new collections library since Lists provide you with that functionality, has no such rules.)
- Map: a group of key-value object pairs (what you’ve seen up until now as a Hashtable). At first glance, this might seem like it ought to be a Collection of pairs, but when you try to implement it that way the design gets awkward, so it’s clearer to make it a separate concept. On the other hand, it’s convenient to look at portions of a Map by creating a Collection to represent that portion. Thus, a Map can return a Set of its keys, a List of its values, or a List of its pairs. Maps, like arrays, can easily be expanded to multiple dimensions without adding new concepts: you simply make a Map whose values are Maps (and the values of those Maps can be Maps, etc.).
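A minimal sketch of that last point — a Map whose values are Maps acting as a two-dimensional lookup. The class name and data are invented for illustration, and raw types are used to match the Java 1.2 style of the chapter:

```java
import java.util.*;

public class MapOfMaps {
  // country -> (city -> population); a "two-dimensional" Map
  static Map build() {
    Map cities = new HashMap();
    cities.put("Boston", "589141");
    Map countries = new HashMap();
    countries.put("USA", cities);
    return countries;
  }
  public static void main(String[] args) {
    Map countries = build();
    // Pre-generics, values come back as Object and need a cast:
    Map usa = (Map)countries.get("USA");
    System.out.println("Boston: " + usa.get("Boston"));
  }
}
```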
Collections and Maps may be implemented in many different ways, according to your programming needs. It’s helpful to look at a diagram of the new collections:
This diagram can be a bit overwhelming at first, but throughout the rest of this chapter you’ll see that there are really only three collection components: Map, List, and Set, and only two or three implementations of each one [37] (with, typically, a preferred version). When you see this, the new collections should not seem so daunting.
The dashed boxes represent interfaces, the dotted boxes represent abstract classes, and the solid boxes are regular (concrete) classes. The dashed arrows indicate that a particular class is implementing an interface (or in the case of an abstract class, partially implementing that interface). The double-line arrows show that a class can produce objects of the class the arrow is pointing to. For example, any Collection can produce an Iterator, while a List can produce a ListIterator (as well as an ordinary Iterator, since List is inherited from Collection).
The interfaces that are concerned with holding objects are Collection, List, Set, and Map. Typically, you’ll write the bulk of your code to talk to these interfaces, and the only place where you’ll specify the precise type you’re using is at the point of creation. So you can create a List like this: List x = new LinkedList();
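A minimal sketch of why this pays off (class and method names are invented for illustration): the concrete type appears in exactly one place, so swapping implementations touches a single line:

```java
import java.util.*;

public class CreationPoint {
  static List makeList() {
    // The only place the implementation is named; swapping in
    // new ArrayList() changes nothing else in the program:
    return new LinkedList();
  }
  public static void main(String[] args) {
    List x = makeList(); // the rest of the code sees only List
    x.add("b");
    x.add(0, "a"); // positional insert, part of the List interface
    System.out.println(x); // [a, b]
  }
}
```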
In the class hierarchy, you can see a number of classes whose names begin with “ Abstract,” and these can seem a bit confusing at first. They are simply tools that partially implement a particular interface. If you were making your own Set, for example, you wouldn’t start with the Set interface and implement all the methods, instead you’d inherit from AbstractSet and do the minimal necessary work to make your new class. However, the new collections library contains enough functionality to satisfy your needs virtually all the time. So for our purposes, you can ignore any class that begins with “ Abstract.”
Therefore, when you look at the diagram, you’re really concerned with only those interfaces at the top of the diagram and the concrete classes (those with solid boxes around them). You’ll typically make an object of a concrete class, upcast it to the corresponding interface, and then use the interface throughout the rest of your code. Here’s a simple example, which fills a Collection with String objects and then prints each element in the Collection:
//: SimpleCollection.java
// A simple example using the new Collections
package c08.newcollections;
import java.util.*;

public class SimpleCollection {
  public static void main(String[] args) {
    Collection c = new ArrayList();
    for(int i = 0; i < 10; i++)
      c.add(Integer.toString(i));
    Iterator it = c.iterator();
    while(it.hasNext())
      System.out.println(it.next());
  }
} ///:~
All the code examples for the new collections libraries will be placed in the subdirectory newcollections, so you’ll be reminded that these work only with Java 1.2. As a result, you must invoke the program by saying:
java c08.newcollections.SimpleCollection
with a similar syntax for the rest of the programs in the package.
You can see that the new collections are part of the java.util library, so you don’t need to add any extra import statements to use them.
The first line in main( ) creates an ArrayList object and then upcasts it to a Collection. Since this example uses only the Collection methods, any object of a class inherited from Collection would work, but ArrayList is the typical workhorse Collection and takes the place of Vector.
The add( ) method, as its name suggests, puts a new element in the Collection. However, the documentation carefully states that add( ) “ensures that this Collection contains the specified element.” This is to allow for the meaning of Set, which adds the element only if it isn’t already there. With an ArrayList, or any sort of List, add( ) always means “put it in.”
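A minimal sketch of that distinction (the class name is invented): add( ) also returns a boolean telling you whether the collection changed, which a Set uses to signal a rejected duplicate:

```java
import java.util.*;

public class AddSemantics {
  public static void main(String[] args) {
    Collection list = new ArrayList();
    Collection set = new HashSet();
    System.out.println(list.add("x")); // true
    System.out.println(list.add("x")); // true -- a List always puts it in
    System.out.println(set.add("x"));  // true
    System.out.println(set.add("x"));  // false -- already there, ignored
    System.out.println("list: " + list.size() + ", set: " + set.size());
  }
}
```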
All Collections can produce an Iterator via their iterator( ) method. An Iterator is just like an Enumeration, which it replaces, except:
- It uses a name (iterator) that is historically understood and accepted in the OOP community.
- It uses shorter method names than Enumeration: hasNext( ) instead of hasMoreElements( ), and next( ) instead of nextElement( ).
- It adds a new method, remove( ), which removes the last element produced by the Iterator. So you can call remove( ) only once for every time you call next( ).
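A minimal sketch of remove( ) in action (class and method names are invented); note the one-remove( )-per-next( ) rule from the last bullet:

```java
import java.util.*;

public class IterRemove {
  // Strip every "x" out of any Collection, using only its Iterator:
  static void removeX(Collection c) {
    Iterator it = c.iterator();
    while(it.hasNext())
      if(it.next().equals("x"))
        it.remove(); // removes the element just produced by next()
  }
  public static void main(String[] args) {
    Collection c = new ArrayList();
    c.add("a"); c.add("x"); c.add("b"); c.add("x");
    removeX(c);
    System.out.println(c); // [a, b]
  }
}
```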
In SimpleCollection.java, you can see that an Iterator is created and used to traverse the Collection, printing each element.
Using Collections
The Collection interface gives you everything you can do with a Collection, and thus everything you can do with a Set or a List. (List also has additional functionality.) The methods are: add( ), addAll( ), clear( ), contains( ), containsAll( ), isEmpty( ), iterator( ), remove( ), removeAll( ), retainAll( ), size( ), and toArray( ). Maps are not inherited from Collection, and will be treated separately.
The following example demonstrates all of these methods. Again, these work with anything that inherits from Collection; an ArrayList is used as a kind of “least-common denominator”:
//: Collection1.java
// Things you can do with all Collections
package c08.newcollections;
import java.util.*;

public class Collection1 {
  // Fill with 'size' elements, start
  // counting at 'start':
  public static Collection fill(Collection c, int start, int size) {
    for(int i = start; i < start + size; i++)
      c.add(Integer.toString(i));
    return c;
  }
  // Default to a "start" of 0:
  public static Collection fill(Collection c, int size) {
    return fill(c, 0, size);
  }
  // Default to 10 elements:
  public static Collection fill(Collection c) {
    return fill(c, 0, 10);
  }
  // Create & upcast to Collection:
  public static Collection newCollection() {
    return fill(new ArrayList());
    // ArrayList is used for simplicity, but it's
    // only seen as a generic Collection
    // everywhere else in the program.
  }
  // Fill a Collection with a range of values:
  public static Collection newCollection(int start, int size) {
    return fill(new ArrayList(), start, size);
  }
  // Moving through a List with an iterator:
  public static void print(Collection c) {
    for(Iterator x = c.iterator(); x.hasNext();)
      System.out.print(x.next() + " ");
    System.out.println();
  }
  public static void main(String[] args) {
    Collection c = newCollection();
    c.add("ten");
    c.add("eleven");
    print(c);
    // Find max and min elements; this means
    // different things depending on the way
    // the Comparable interface is implemented:
    System.out.println("Collections.max(c) = " + Collections.max(c));
    System.out.println("Collections.min(c) = " + Collections.min(c));
    // Add a Collection to another Collection
    c.addAll(newCollection());
    print(c);
    c.remove("3"); // Removes the first one
    print(c);
    c.remove("3"); // Removes the second one
    print(c);
    // Remove all components that are in the
    // argument collection:
    c.removeAll(newCollection());
    print(c);
    c.addAll(newCollection());
    print(c);
    // Is an element in this Collection?
    System.out.println("c.contains(\"4\") = " + c.contains("4"));
    // Is a Collection in this Collection?
    System.out.println("c.containsAll(newCollection()) = " +
      c.containsAll(newCollection()));
    Collection c2 = newCollection(5, 3);
    // Keep all the elements that are in both
    // c and c2 (an intersection of sets):
    c.retainAll(c2);
    print(c);
    // Throw away all the elements in c that
    // also appear in c2:
    c.removeAll(c2);
    System.out.println("c.isEmpty() = " + c.isEmpty());
    c = newCollection();
    print(c);
    c.clear(); // Remove all elements
    System.out.println("after c.clear():");
    print(c);
  }
} ///:~
The fill( ) methods provide a way to fill any Collection with test data, in this case just ints converted to Strings. These methods will be used frequently throughout the rest of this chapter.
The two versions of newCollection( ) create ArrayLists containing different sets of data and return them as Collection objects, so it’s clear that nothing other than the Collection interface is being used.
The print( ) method will also be used throughout the rest of this section. Since it moves through a Collection using an Iterator, which any Collection can produce, it will work with Lists and Sets and any Collection that a Map produces.
main( ) uses simple exercises to show all of the methods in Collection.
The following sections compare the various implementations of List, Set, and Map and indicate in each case (with an asterisk) which one should be your default choice. You’ll notice that the legacy classes Vector, Stack, and Hashtable are not included because in all cases there are preferred classes within the new collections.
Using Lists
//: List1.java
// Things you can do with Lists
package c08.newcollections;
import java.util.*;

public class List1 {
  // Wrap Collection1.fill() for convenience:
  public static List fill(List a) {
    return (List)Collection1.fill(a);
  }
  // You can use an Iterator, just as with a
  // Collection, but you can also use random
  // access with get():
  public static void print(List a) {
    for(int i = 0; i < a.size(); i++)
      System.out.print(a.get(i) + " ");
    System.out.println();
  }
  public static void basicTest(List a) {
    a.add(1, "x"); // Add at location 1
    a.add("x"); // Add at end
    // Add a collection:
    a.addAll(fill(new ArrayList()));
    boolean b = a.contains("1"); // Is it in there?
    // Is the entire collection in there?
    b = a.containsAll(fill(new ArrayList()));
    Object o = a.get(1); // Get object at location 1
    int i = a.indexOf("1"); // Tell index of object
    // indexOf, starting search at location 2:
    i = a.indexOf("1", 2);
    b = a.isEmpty(); // Any elements inside?
    Iterator it = a.iterator(); // Ordinary Iterator
    ListIterator lit = a.listIterator(); // ListIterator
    lit = a.listIterator(3); // Start at loc 3
    i = a.lastIndexOf("1"); // Last match
    i = a.lastIndexOf("1", 2); // ...after loc 2
    a.remove(1); // Remove location 1
    a.remove("3"); // Remove this object
    a.set(1, "y"); // Set location 1 to "y"
    // Make an array from the List:
    Object[] array = a.toArray();
    // Keep everything that's in the argument
    // (the intersection of the two sets):
    a.retainAll(fill(new ArrayList()));
    // Remove elements in this range:
    a.removeRange(0, 2);
    a.clear(); // Remove all elements
  }
  public static void testVisual(List a) {
    print(a);
    List b = new ArrayList();
    fill(b);
    System.out.print("b = ");
    print(b);
    a.addAll(b);
    a.addAll(fill(new ArrayList()));
    print(a);
    // Shrink the list by removing all the
    // elements beyond the first 1/2 of the list
    System.out.println(a.size());
    System.out.println(a.size()/2);
    a.removeRange(a.size()/2, a.size()/2 + 2);
    print(a);
    // Insert, remove, and replace elements
    // using a ListIterator:
    ListIterator x = a.listIterator(a.size()/2);
    x.add("one");
    print(a);
    System.out.println(x.next());
    x.remove();
    System.out.println(x.next());
    x.set("47");
    print(a);
  }
  // There are some things that only
  // LinkedLists can do:
  public static void testLinkedList() {
    LinkedList ll = new LinkedList();
    Collection1.fill(ll, 5);
    print(ll);
    // Treat it like a stack, pushing:
    ll.addFirst("one");
    ll.addFirst("two");
    print(ll);
    // Like "peeking" at the top of a stack:
    System.out.println(ll.getFirst());
    // Like popping a stack:
    System.out.println(ll.removeFirst());
    // Treat it like a queue, pulling elements
    // off the tail end:
    System.out.println(ll.removeLast());
    print(ll);
  }
  public static void main(String[] args) {
    basicTest(fill(new LinkedList()));
    basicTest(fill(new ArrayList()));
    testVisual(fill(new ArrayList()));
    testLinkedList();
  }
} ///:~
In basicTest( ), the calls are simply made to show the proper syntax; the return value is captured, but not used. In some cases, the return value isn't captured at all, since it isn't typically used. You should look up the full usage of each of these methods in your online documentation before you use them.
Using Sets
Set has exactly the same interface as Collection, so there isn't any extra functionality as there is with the two different Lists. Instead, the Set is exactly a Collection; it just has different behavior. (This is the ideal use of inheritance and polymorphism: to express different behavior.) A Set allows only one instance of each object value to exist:
//: Set1.java
// Things you can do with Sets
package c08.newcollections;
import java.util.*;

public class Set1 {
  public static void testVisual(Set a) {
    Collection1.fill(a);
    Collection1.fill(a);
    Collection1.fill(a);
    Collection1.print(a); // No duplicates!
    // Add another set to this one:
    a.addAll(a);
    a.add("one");
    a.add("one");
    a.add("one");
    Collection1.print(a);
    // Look something up:
    System.out.println("a.contains(\"one\"): " + a.contains("one"));
  }
  public static void main(String[] args) {
    testVisual(new HashSet());
    testVisual(new ArraySet());
  }
} ///:~
Duplicate values are added to the Set, but when it is printed you’ll see the Set has accepted only one instance of each value.
When you run this program you’ll notice that the order maintained by the HashSet is different from ArraySet, since each has a different way of storing elements so they can be located later. ( ArraySet keeps them sorted, while HashSet uses a hashing function, which is designed specifically for rapid lookups.) When creating your own types, be aware that a Set needs a way to maintain a storage order, just as with the “groundhog” examples shown earlier in this chapter. Here’s an example:
//: Set2.java
// Putting your own type in a Set
package c08.newcollections;
import java.util.*;

class MyType {
  private int i;
  public MyType(int n) { i = n; }
  public boolean equals(Object o) {
    if((o != null) && (o instanceof MyType))
      return i == ((MyType)o).i;
    else return false;
  }
  // Required for HashSet, not for ArraySet:
  public int hashCode() { return i; }
  public String toString() { return i + " "; }
}

public class Set2 {
  public static Set fill(Set a, int size) {
    for(int i = 0; i < size; i++)
      a.add(new MyType(i));
    return a;
  }
  public static Set fill(Set a) {
    return fill(a, 10);
  }
  public static void test(Set a) {
    fill(a);
    fill(a); // Try to add duplicates
    fill(a);
    a.addAll(fill(new ArraySet()));
    Collection1.print(a);
  }
  public static void main(String[] args) {
    test(new HashSet());
    test(new ArraySet());
  }
} ///:~
The definitions for equals( ) and hashCode( ) follow the form given in the “groundhog” examples. You must define an equals( ) in both cases, but the hashCode( ) is necessary only if the class will be placed in a HashSet (which is likely, since that should generally be your first choice as a Set implementation).
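To see why hashCode( ) matters for a HashSet, consider a sketch (the class names here are ours, not the book's). With only the default, identity-based hashCode( ), two "equal" objects usually land in different hash buckets, so the set never even calls equals( ) on them and keeps both:

```java
import java.util.*;

// Hypothetical key type WITHOUT hashCode(): duplicates sneak into a HashSet.
class NoHash {
  final int i;
  NoHash(int n) { i = n; }
  public boolean equals(Object o) {
    return (o instanceof NoHash) && ((NoHash)o).i == i;
  }
  // hashCode() deliberately NOT overridden
}

// Same type WITH a hashCode() consistent with equals():
class WithHash {
  final int i;
  WithHash(int n) { i = n; }
  public boolean equals(Object o) {
    return (o instanceof WithHash) && ((WithHash)o).i == i;
  }
  public int hashCode() { return i; }
}

public class HashCodeDemo {
  public static int sizeNoHash() {
    Set s = new HashSet();
    s.add(new NoHash(1));
    s.add(new NoHash(1)); // "duplicate" is almost certainly kept
    return s.size();
  }
  public static int sizeWithHash() {
    Set s = new HashSet();
    s.add(new WithHash(1));
    s.add(new WithHash(1)); // rejected as a duplicate
    return s.size();
  }
  public static void main(String[] args) {
    System.out.println(sizeNoHash() + " " + sizeWithHash());
  }
}
```

The lesson is the rule stated above: any class placed in a HashSet needs a hashCode( ) that agrees with its equals( ).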
Using Maps
The following example contains two sets of test data and a fill( ) method that allows you to fill any map with any two-dimensional array of Objects. These tools will be used in other Map examples, as well.
//: Map1.java
// Things you can do with Maps
package c08.newcollections;
import java.util.*;

public class Map1 {
  public final static String[][] testData1 = {
    { "Happy", "Cheerful disposition" },
    { "Sleepy", "Prefers dark, quiet places" },
    { "Grumpy", "Needs to work on attitude" },
    { "Doc", "Fantasizes about advanced degree" },
    { "Dopey", "'A' for effort" },
    { "Sneezy", "Struggles with allergies" },
    { "Bashful", "Needs self-esteem workshop" },
  };
  public final static String[][] testData2 = {
    { "Belligerent", "Disruptive influence" },
    { "Lazy", "Motivational problems" },
    { "Comatose", "Excellent behavior" }
  };
  public static Map fill(Map m, Object[][] o) {
    for(int i = 0; i < o.length; i++)
      m.put(o[i][0], o[i][1]);
    return m;
  }
  // Producing a Set of the keys:
  public static void printKeys(Map m) {
    System.out.print("Size = " + m.size() + ", ");
    System.out.print("Keys: ");
    Collection1.print(m.keySet());
  }
  // Producing a Collection of the values:
  public static void printValues(Map m) {
    System.out.print("Values: ");
    Collection1.print(m.values());
  }
  // Iterating through Map.Entry objects (pairs):
  public static void print(Map m) {
    Collection entries = m.entries();
    Iterator it = entries.iterator();
    while(it.hasNext()) {
      Map.Entry e = (Map.Entry)it.next();
      System.out.println("Key = " + e.getKey() +
        ", Value = " + e.getValue());
    }
  }
  public static void test(Map m) {
    fill(m, testData1);
    // Map has 'Set' behavior for keys:
    fill(m, testData1);
    printKeys(m);
    printValues(m);
    print(m);
    String key = testData1[4][0];
    String value = testData1[4][1];
    System.out.println("m.containsKey(\"" + key +
      "\"): " + m.containsKey(key));
    System.out.println("m.get(\"" + key + "\"): " + m.get(key));
    System.out.println("m.containsValue(\"" + value +
      "\"): " + m.containsValue(value));
    Map m2 = fill(new ArrayMap(), testData2);
    m.putAll(m2);
    printKeys(m);
    m.remove(testData2[0][0]);
    printKeys(m);
    m.clear();
    System.out.println("m.isEmpty(): " + m.isEmpty());
    fill(m, testData1);
    // Operations on the Set change the Map:
    m.keySet().removeAll(m.keySet());
    System.out.println("m.isEmpty(): " + m.isEmpty());
  }
  public static void main(String args[]) {
    System.out.println("Testing ArrayMap");
    test(new ArrayMap());
    System.out.println("Testing HashMap");
    test(new HashMap());
    System.out.println("Testing TreeMap");
    test(new TreeMap());
  }
} ///:~
The printKeys( ), printValues( ), and print( ) methods are not only useful utilities, they also demonstrate the production of Collection views of a Map. The keySet( ) method produces a Set backed by the keys in the Map; here, it is treated as only a Collection. Similar treatment is given to values( ), which produces a List containing all the values in the Map. (Note that keys must be unique, while values can contain duplicates.) Since these Collections are backed by the Map, any changes in a Collection will be reflected in the associated Map.
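A minimal sketch of that "backed by" behavior (class and method names here are ours): the Set returned by keySet( ) is a live view of the Map, so removing a key through the view also removes the entry from the Map itself.

```java
import java.util.*;

public class ViewDemo {
  public static Map sample() {
    Map m = new HashMap();
    m.put("Happy", "Cheerful disposition");
    m.put("Grumpy", "Needs to work on attitude");
    return m;
  }
  // Returns whether the key is still in the Map after
  // it has been removed through the keySet() view:
  public static boolean removeViaView() {
    Map m = sample();
    m.keySet().remove("Grumpy"); // change the view...
    return m.containsKey("Grumpy"); // ...the Map changed too
  }
  public static void main(String[] args) {
    System.out.println(removeViaView()); // prints false
  }
}
```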
The print( ) method grabs the Iterator produced by entries( ) and uses it to print both the key and value for each pair. The rest of the program provides simple examples of each Map operation, and tests each type of Map.
When creating your own class to use as a key in a Map, you must deal with the same issues discussed previously for Sets.
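Those issues are the same equals( ) and hashCode( ) pair shown for Set2's MyType. As a sketch (the class name here is ours, echoing the "groundhog" examples), a custom key type works in a HashMap only when a different-but-equal object can find the original entry:

```java
import java.util.*;

// A hypothetical key class, following the MyType pattern:
class GroundhogKey {
  final int number;
  GroundhogKey(int n) { number = n; }
  public boolean equals(Object o) {
    return (o instanceof GroundhogKey) &&
      ((GroundhogKey)o).number == number;
  }
  public int hashCode() { return number; }
}

public class KeyDemo {
  public static String lookup() {
    Map m = new HashMap();
    m.put(new GroundhogKey(3), "early spring");
    // A *different* object with the same value still
    // finds the entry, because equals()/hashCode() match:
    return (String)m.get(new GroundhogKey(3));
  }
  public static void main(String[] args) {
    System.out.println(lookup());
  }
}
```

Without the hashCode( ) override, the second GroundhogKey(3) would hash to a different bucket and the lookup would return null.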
Choosing an implementation
From the diagram on page 363 you can see that there are really only three collection components: Map, List, and Set, and only two or three implementations of each interface. If you need to use the functionality offered by a particular interface, how do you decide which particular implementation to use?
To understand the answer, you must be aware that each different implementation has its own features, strengths, and weaknesses. For example, you can see in the diagram that the “feature” of Hashtable, Vector, and Stack is that they are legacy classes, so that existing code doesn’t break. On the other hand, it’s best if you don’t use those for new (Java 1.2) code.
The distinction between the other collections often comes down to what they are "backed by;" that is, the data structures that physically implement your desired interface. This means that, for example, ArrayList, LinkedList, and Vector (which is roughly equivalent to ArrayList) all implement the List interface, so your program will produce the same results regardless of the one you use. However, ArrayList (and Vector) is backed by an array, while LinkedList is implemented in the usual way for a doubly linked list, as individual objects each containing data along with handles to the previous and next elements in the list. If you're doing many insertions and removals in the middle of a list, a LinkedList is the appropriate choice; otherwise an ArrayList is probably faster.
As another example, a Set can be implemented as either an ArraySet or a HashSet. An ArraySet is backed by an ArrayList and is designed to support only small numbers of elements, especially in situations in which you're creating and destroying a lot of Set objects. However, if you're going to have larger quantities in your Set, the performance of ArraySet will get very bad, very quickly. When you're writing a program that needs a Set, you should choose HashSet by default, and change to ArraySet only in special cases where performance improvements are indicated and necessary.

Choosing between Lists
The most convincing way to see the differences between the implementations of List is with a performance test. The following code establishes an inner base class to use as a test framework, then creates an anonymous inner class for each different test. Each of these inner classes is called by the test( ) method. This approach allows you to easily add and remove new kinds of tests.
//: ListPerformance.java
// Demonstrates performance differences in Lists
package c08.newcollections;
import java.util.*;

public class ListPerformance {
  private static final int REPS = 100;
  private abstract static class Tester {
    String name;
    int size; // Test quantity
    Tester(String name, int size) {
      this.name = name;
      this.size = size;
    }
    abstract void test(List a);
  }
  private static Tester[] tests = {
    new Tester("get", 300) {
      void test(List a) {
        for(int i = 0; i < REPS; i++) {
          for(int j = 0; j < a.size(); j++)
            a.get(j);
        }
      }
    },
    new Tester("iteration", 300) {
      void test(List a) {
        for(int i = 0; i < REPS; i++) {
          Iterator it = a.iterator();
          while(it.hasNext())
            it.next();
        }
      }
    },
    new Tester("insert", 1000) {
      void test(List a) {
        int half = a.size()/2;
        String s = "test";
        ListIterator it = a.listIterator(half);
        for(int i = 0; i < size * 10; i++)
          it.add(s);
      }
    },
    new Tester("remove", 5000) {
      void test(List a) {
        ListIterator it = a.listIterator(3);
        while(it.hasNext()) {
          it.next();
          it.remove();
        }
      }
    },
  };
  public static void test(List a) {
    // A trick to print out the class name:
    System.out.println("Testing " + a.getClass().getName());
    for(int i = 0; i < tests.length; i++) {
      Collection1.fill(a, tests[i].size);
      System.out.print(tests[i].name);
      long t1 = System.currentTimeMillis();
      tests[i].test(a);
      long t2 = System.currentTimeMillis();
      System.out.println(": " + (t2 - t1));
    }
  }
  public static void main(String[] args) {
    test(new ArrayList());
    test(new LinkedList());
  }
} ///:~
The inner class Tester is abstract, to provide a base class for the specific tests. It contains a String to be printed when the test starts, a size parameter to be used by the test for quantity of elements or repetitions of tests, a constructor to initialize the fields, and an abstract test( ) method that performs the work of the test.
The List that’s handed to test( ) is first filled with elements, then each test in the tests array is timed. The results will vary from machine to machine; they are intended to give only an order of magnitude comparison between the performance of the different collections. Here is a summary of one run:
You can see that random accesses ( get( )) and iterations are cheap for ArrayLists and expensive for LinkedLists. On the other hand, insertions and removals from the middle of a list are significantly cheaper for a LinkedList than for an ArrayList. The best approach is probably to choose an ArrayList as your default and to change to a LinkedList if you discover performance problems because of many insertions and removals from the middle of the list.

Choosing between Sets
You can choose between an ArraySet and a HashSet, depending on the size of the Set (if you need to produce an ordered sequence from a Set, use TreeSet[39]). The following test program gives an indication of this tradeoff:
//: SetPerformance.java
// Demonstrates performance differences in Sets
package c08.newcollections;
import java.util.*;

public class SetPerformance {
  private static final int REPS = 100;
  private abstract static class Tester {
    String name;
    Tester(String name) { this.name = name; }
    abstract void test(Set s, int size);
  }
  private static Tester[] tests = {
    new Tester("add") {
      void test(Set s, int size) {
        for(int i = 0; i < REPS; i++) {
          s.clear();
          Collection1.fill(s, size);
        }
      }
    },
    new Tester("contains") {
      void test(Set s, int size) {
        for(int i = 0; i < REPS; i++)
          for(int j = 0; j < size; j++)
            s.contains(Integer.toString(j));
      }
    },
  };
  public static void test(Set s, int size) {
    // A trick to print out the class name:
    System.out.println("Testing " +
      s.getClass().getName() + " size " + size);
    Collection1.fill(s, size);
    for(int i = 0; i < tests.length; i++) {
      System.out.print(tests[i].name);
      long t1 = System.currentTimeMillis();
      tests[i].test(s, size);
      long t2 = System.currentTimeMillis();
      System.out.println(": " + (t2 - t1));
    }
  }
  public static void main(String[] args) {
    // Small:
    test(new ArraySet(), 10);
    test(new HashSet(), 10);
    // Medium:
    test(new ArraySet(), 100);
    test(new HashSet(), 100);
    // Large:
    test(new HashSet(), 1000);
    test(new ArraySet(), 500);
  }
} ///:~
The last test of ArraySet is only 500 elements instead of 1000 because it is so slow.
HashSet is clearly superior to ArraySet for add( ) and contains( ), and the performance is effectively independent of size. You'll virtually never want to use an ArraySet for regular programming.

Choosing between Maps
When choosing between implementations of Map, the size of the Map is what most strongly affects performance, and the following test program gives an indication of this tradeoff:
//: MapPerformance.java
// Demonstrates performance differences in Maps
package c08.newcollections;
import java.util.*;

public class MapPerformance {
  private static final int REPS = 100;
  public static Map fill(Map m, int size) {
    for(int i = 0; i < size; i++) {
      String x = Integer.toString(i);
      m.put(x, x);
    }
    return m;
  }
  private abstract static class Tester {
    String name;
    Tester(String name) { this.name = name; }
    abstract void test(Map m, int size);
  }
  private static Tester[] tests = {
    new Tester("put") {
      void test(Map m, int size) {
        for(int i = 0; i < REPS; i++) {
          m.clear();
          fill(m, size);
        }
      }
    },
    new Tester("get") {
      void test(Map m, int size) {
        for(int i = 0; i < REPS; i++)
          for(int j = 0; j < size; j++)
            m.get(Integer.toString(j));
      }
    },
    new Tester("iteration") {
      void test(Map m, int size) {
        for(int i = 0; i < REPS; i++) {
          Iterator it = m.entries().iterator();
          while(it.hasNext())
            it.next();
        }
      }
    },
  };
  public static void test(Map m, int size) {
    // A trick to print out the class name:
    System.out.println("Testing " +
      m.getClass().getName() + " size " + size);
    fill(m, size);
    for(int i = 0; i < tests.length; i++) {
      System.out.print(tests[i].name);
      long t1 = System.currentTimeMillis();
      tests[i].test(m, size);
      long t2 = System.currentTimeMillis();
      // Divide by size to normalize the measurement:
      System.out.println(": " + ((double)(t2 - t1)/size));
    }
  }
  public static void main(String[] args) {
    // Small:
    test(new ArrayMap(), 10);
    test(new HashMap(), 10);
    test(new TreeMap(), 10);
    // Medium:
    test(new ArrayMap(), 100);
    test(new HashMap(), 100);
    test(new TreeMap(), 100);
    // Large:
    test(new HashMap(), 1000);
    // You might want to comment these out since
    // they can take a while to run:
    test(new ArrayMap(), 500);
    test(new TreeMap(), 500);
  }
} ///:~
Because the size of the map is the issue, you’ll see that the timing tests divide the time by the size to normalize each measurement. Here is one set of results. (Yours will probably be different.)
Even for size 10, the ArrayMap performance is worse than HashMap – except for iteration, which is not usually what you're concerned about when using a Map. ( get( ) is generally the place where you'll spend most of your time.) The TreeMap has respectable put( ) and iteration times, but the get( ) is not so good. Why would you use a TreeMap, then? So you could use it not as a Map, but as a way to create an ordered list. The behavior of a tree is such that it's always in order and doesn't have to be specially sorted. (The way it is ordered will be discussed later.) Once you fill a TreeMap, you can call keySet( ) to get a Set view of the keys, then toArray( ) to produce an array of those keys. You can then use the static method Arrays.binarySearch( ) (discussed later) to rapidly find objects in your sorted array. Of course, you would probably only do this if, for some reason, the behavior of a HashMap was unacceptable, since HashMap is designed to rapidly find things. In the end, when you're using a Map your first choice should be HashMap, and only rarely will you need to investigate the alternatives.
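The TreeMap-as-ordered-list strategy can be sketched as follows (class and method names here are ours): fill a TreeMap, pull out the already-sorted keys as an array, and binary-search that array.

```java
import java.util.*;

public class TreeMapKeys {
  public static int find(String key) {
    Map m = new TreeMap();
    for(int i = 0; i < 10; i++) {
      String s = Integer.toString(i);
      m.put(s, s);
    }
    // keySet() of a TreeMap iterates in sorted key order,
    // so the resulting array is already fit for binarySearch():
    Object[] keys = m.keySet().toArray();
    return Arrays.binarySearch(keys, key);
  }
  public static void main(String[] args) {
    System.out.println(find("4")); // prints 4
  }
}
```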
There is another performance issue that the above table does not address, and that is speed of creation. The following program tests creation speed for different types of Map:
//: MapCreation.java // Demonstrates time differences in Map creation package c08.newcollections; import java.util.*; public class MapCreation { public static void main(String[] args) { final long REPS = 100000; long t1 = System.currentTimeMillis(); System.out.print("ArrayMap"); for(long i = 0; i < REPS; i++) new ArrayMap(); long t2 = System.currentTimeMillis(); System.out.println(": " + (t2 - t1)); t1 = System.currentTimeMillis(); System.out.print("TreeMap"); for(long i = 0; i < REPS; i++) new TreeMap(); t2 = System.currentTimeMillis(); System.out.println(": " + (t2 - t1)); t1 = System.currentTimeMillis(); System.out.print("HashMap"); for(long i = 0; i < REPS; i++) new HashMap(); t2 = System.currentTimeMillis(); System.out.println(": " + (t2 - t1)); } } ///:~
At the time this program was written, the creation speed of TreeMap was dramatically faster than the other two types. (Although you should try it, since there was talk of performance improvements to ArrayMap.) This, along with the acceptable and consistent put( ) performance of TreeMap, suggests a possible strategy if you’re creating many Maps, and only later in your program doing many lookups: Create and fill TreeMaps, and when you start looking things up, convert the important TreeMaps into HashMaps using the HashMap(Map) constructor. Again, you should only worry about this sort of thing after it’s been proven that you have a performance bottleneck. (“First make it work, then make it fast – if you must.”)
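The conversion mentioned above can be sketched like this (class and method names here are ours): build the data in a TreeMap, then copy it into a HashMap for fast lookups using the HashMap(Map) constructor.

```java
import java.util.*;

public class MapConvert {
  public static Map build() {
    Map tm = new TreeMap();
    for(int i = 0; i < 100; i++) {
      String s = Integer.toString(i);
      tm.put(s, s);
    }
    // One-shot copy; subsequent lookups are hash-based:
    return new HashMap(tm);
  }
  public static void main(String[] args) {
    Map m = build();
    System.out.println(m.get("42")); // prints 42
  }
}
```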
Unsupported operations
It’s possible to turn an array into a List with the static Arrays.toList( ) method:
//: Unsupported.java
// Sometimes methods defined in the Collection
// interfaces don't work!
package c08.newcollections;
import java.util.*;

public class Unsupported {
  private static String[] s = {
    "one", "two", "three", "four", "five",
    "six", "seven", "eight", "nine", "ten",
  };
  static List a = Arrays.toList(s);
  static List a2 = Arrays.toList(
    new String[] { s[3], s[4], s[5] });
  public static void main(String[] args) {
    Collection1.print(a); // Iteration
    System.out.println(
      "a.contains(" + s[0] + ") = " + a.contains(s[0]));
    System.out.println(
      "a.containsAll(a2) = " + a.containsAll(a2));
    System.out.println("a.isEmpty() = " + a.isEmpty());
    System.out.println(
      "a.indexOf(" + s[5] + ") = " + a.indexOf(s[5]));
    // Traverse backwards:
    ListIterator lit = a.listIterator(a.size());
    while(lit.hasPrevious())
      System.out.print(lit.previous());
    System.out.println();
    // Set the elements to different values:
    for(int i = 0; i < a.size(); i++)
      a.set(i, "47");
    Collection1.print(a);
    // Compiles, but won't run:
    lit.add("X"); // Unsupported operation
    a.clear(); // Unsupported
    a.add("eleven"); // Unsupported
    a.addAll(a2); // Unsupported
    a.retainAll(a2); // Unsupported
    a.remove(s[0]); // Unsupported
    a.removeAll(a2); // Unsupported
  }
} ///:~
You’ll discover that only a portion of the Collection and List interfaces are actually implemented. The rest of the methods cause the unwelcome appearance of something called an UnsupportedOperationException. You’ll learn all about exceptions in the next chapter, but the short story is that the Collection interface, as well as some of the other interfaces in the new collections library, contain "optional" methods: an implementation is not required to provide meaningful definitions for them, and calling an unsupported one throws an UnsupportedOperationException at run-time. You might object that an interface is supposed to be a contract, and that if some of its methods can simply stop the program, type safety was just thrown out the window! It’s not quite that bad. With a Collection, List, Set, or Map, the compiler still restricts you to calling only the methods in that interface, so it’s not like Smalltalk (in which you can call any method for any object, and find out only when you run the program whether your call does anything). In addition, most methods that take a Collection as an argument only read from that Collection, and all the "read" methods of Collection are not optional.
This approach prevents an explosion of interfaces in the design. Other designs for collection libraries always seem to end up with a confusing plethora of interfaces to describe each of the variations on the main theme and are thus difficult to learn. It’s not even possible to capture all of the special cases in interfaces, because someone can always invent a new interface. The “unsupported operation” approach achieves an important goal of the new collections library: it is simple to learn and use. For this approach to work, however:
- The UnsupportedOperationException must be a rare event. That is, for most classes all operations should work, and only in special cases should an operation be unsupported. This is true in the new collections library, since the classes you’ll use 99 percent of the time – ArrayList, LinkedList, HashSet, and HashMap, as well as the other concrete implementations – support all of the operations. The design does provide a “back door” if you want to create a new Collection without providing meaningful definitions for all the methods in the Collection interface, and yet still fit it into the existing library.
- When an operation is unsupported, there should be reasonable likelihood that an UnsupportedOperationException will appear at implementation time, rather than after you’ve shipped the product to the customer. After all, it indicates a programming error: you’ve used a class incorrectly. This point is less certain, and is where the experimental nature of this design comes into play. Only over time will we find out how well it works.
In the example above, Arrays.toList( ) produces a List that is backed by a fixed-size array. Therefore it makes sense that the only supported operations are the ones that don’t change the size of the array. If, on the other hand, a new interface were required to express this different kind of behavior (called, perhaps, “ FixedSizeList”), it would throw open the door to complexity and soon you wouldn’t know where to start when trying to use the library.
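A runnable sketch of that fixed-size behavior (class and method names here are ours): in released Java the method described above as Arrays.toList( ) shipped as Arrays.asList( ), but the semantics are as the text describes: set( ) works because it doesn't change the array's size, while add( ) throws.

```java
import java.util.*;

public class FixedSize {
  public static boolean addFails() {
    List a = Arrays.asList(new String[] { "one", "two", "three" });
    a.set(0, "ONE"); // allowed: same size
    try {
      a.add("four"); // would change the size: not allowed
      return false;
    } catch(UnsupportedOperationException e) {
      return true;
    }
  }
  public static void main(String[] args) {
    System.out.println(addFails()); // prints true
  }
}
```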
The documentation for a method that takes a Collection, List, Set, or Map as an argument should specify which of the optional methods must be implemented. For example, sorting requires the set( ) and ListIterator.set( ) methods but not add( ) and remove( ).
Sorting and searching
Java 1.2 adds utilities to perform sorting and searching for arrays or Lists. These utilities are static methods of two new classes: Arrays for sorting and searching arrays, and Collections for sorting and searching Lists.

Arrays
The Arrays class has an overloaded sort( ) and binarySearch( ) for arrays of all the primitive types, as well as for String and Object. Here’s an example that shows sorting and searching an array of byte (all the other primitives look the same) and an array of String:
//: Array1.java
// Testing the sorting & searching in Arrays
package c08.newcollections;
import java.util.*;

public class Array1 {
  static Random r = new Random();
  static String ssource =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
    "abcdefghijklmnopqrstuvwxyz";
  static char[] src = ssource.toCharArray();
  // Create a random String
  public static String randString(int length) {
    char[] buf = new char[length];
    int rnd;
    for(int i = 0; i < length; i++) {
      rnd = Math.abs(r.nextInt()) % src.length;
      buf[i] = src[rnd];
    }
    return new String(buf);
  }
  // Create a random array of Strings:
  public static String[] randStrings(int length, int size) {
    String[] s = new String[size];
    for(int i = 0; i < size; i++)
      s[i] = randString(length);
    return s;
  }
  public static void print(byte[] b) {
    for(int i = 0; i < b.length; i++)
      System.out.print(b[i] + " ");
    System.out.println();
  }
  public static void print(String[] s) {
    for(int i = 0; i < s.length; i++)
      System.out.print(s[i] + " ");
    System.out.println();
  }
  public static void main(String[] args) {
    byte[] b = new byte[15];
    r.nextBytes(b); // Fill with random bytes
    print(b);
    Arrays.sort(b);
    print(b);
    int loc = Arrays.binarySearch(b, b[10]);
    System.out.println("Location of " + b[10] + " = " + loc);
    // Test String sort & search:
    String[] s = randStrings(4, 10);
    print(s);
    Arrays.sort(s);
    print(s);
    loc = Arrays.binarySearch(s, s[4]);
    System.out.println("Location of " + s[4] + " = " + loc);
  }
} ///:~
The first part of the class contains utilities to generate random String objects using an array of characters from which random letters can be selected. randString( ) returns a string of any length, and randStrings( ) creates an array of random Strings, given the length of each String and the desired size of the array. The two print( ) methods simplify the display of the sample arrays. In main( ), Random.nextBytes( ) fills the array argument with randomly-selected bytes. (There are no corresponding Random methods to create arrays of the other primitive data types.) Once you have an array, you can see that it’s only a single method call to perform a sort( ) or binarySearch( ). There’s an important warning concerning binarySearch( ): If you do not call sort( ) before you perform a binarySearch( ), unpredictable behavior can occur, including infinite loops.
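That warning can be demonstrated in a small sketch (class and method names here are ours): sort first, then search with the resulting order.

```java
import java.util.*;

public class SortThenSearch {
  public static int locate(byte target) {
    byte[] b = { 42, -7, 19, 3, 88, -120, 0 };
    Arrays.sort(b); // required before binarySearch()
    // Sorted order: -120 -7 0 3 19 42 88
    return Arrays.binarySearch(b, target);
  }
  public static void main(String[] args) {
    System.out.println(locate((byte)19)); // prints 4
  }
}
```

Skipping the sort( ) call makes the return value of binarySearch( ) meaningless, since the algorithm assumes an ordered array.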
Sorting and searching with Strings looks the same, but when you run the program you'll notice something interesting: the sorting is lexicographic, so uppercase letters precede lowercase letters in the character set. Thus, all the capital letters are at the beginning of the list, followed by the lowercase letters, so 'Z' precedes 'a'. It turns out that even telephone books are typically sorted this way.

Comparable and Comparator
What if this isn’t what you want? For example, the index in this book would not be too useful if you had to look in two places for everything that begins with ‘A’ or ‘a’.
When you want to sort an array of Object, there’s a problem. What determines the ordering of two Objects? Unfortunately, the original Java designers didn’t consider this an important problem, or it would have been defined in the root class Object. As a result, ordering must be imposed on Objects from the outside, and the new collections library provides a standard way to do this (which is almost as good as defining it in Object).
There is a sort( ) for arrays of Object (and String, of course, is an Object) that takes a second argument: an object that implements the Comparator interface (part of the new collections library) and performs comparisons with its single compare( ) method. This method takes the two objects to be compared as its arguments and returns a negative integer if the first argument is less than the second, zero if they’re equal, and a positive integer if the first argument is greater than the second. With this knowledge, the String portion of the example above can be re-implemented to perform an alphabetic sort:
//: AlphaComp.java
// Using Comparator to perform an alphabetic sort
package c08.newcollections;
import java.util.*;

public class AlphaComp implements Comparator {
  public int compare(Object o1, Object o2) {
    // Assume it's used only for Strings...
    String s1 = ((String)o1).toLowerCase();
    String s2 = ((String)o2).toLowerCase();
    return s1.compareTo(s2);
  }
  public static void main(String[] args) {
    String[] s = Array1.randStrings(4, 10);
    Array1.print(s);
    AlphaComp ac = new AlphaComp();
    Arrays.sort(s, ac);
    Array1.print(s);
    // Must use the Comparator to search, also:
    int loc = Arrays.binarySearch(s, s[3], ac);
    System.out.println("Location of " + s[3] + " = " + loc);
  }
} ///:~
By casting to String, the compare( ) method implicitly tests to ensure that it is used only with String objects – the run-time system will catch any discrepancies. After forcing both Strings to lower case, the String.compareTo( ) method produces the desired results.
When you use your own Comparator to perform a sort( ), you must use that same Comparator when using binarySearch( ).
The Arrays class has another sort( ) method that takes a single argument: an array of Object, but with no Comparator. This sort( ) method must also have some way to compare two Objects. It uses the natural comparison method that is imparted to a class by implementing the Comparable interface. This interface has a single method, compareTo( ), which compares the object to its argument and returns negative, zero, or positive depending on whether it is less than, equal to, or greater than the argument. A simple example demonstrates this:
//: CompClass.java
// A class that implements Comparable
package c08.newcollections;
import java.util.*;

public class CompClass implements Comparable {
  private int i;
  public CompClass(int ii) { i = ii; }
  public int compareTo(Object o) {
    // Implicitly tests for correct type:
    int argi = ((CompClass)o).i;
    if(i == argi) return 0;
    if(i < argi) return -1;
    return 1;
  }
  public static void print(Object[] a) {
    for(int i = 0; i < a.length; i++)
      System.out.print(a[i] + " ");
    System.out.println();
  }
  public String toString() { return i + ""; }
  public static void main(String[] args) {
    CompClass[] a = new CompClass[20];
    for(int i = 0; i < a.length; i++)
      a[i] = new CompClass((int)(Math.random() * 100));
    print(a);
    Arrays.sort(a);
    print(a);
    int loc = Arrays.binarySearch(a, a[3]);
    System.out.println("Location of " + a[3] + " = " + loc);
  }
} ///:~
Of course, your compareTo( ) method can be as complex as necessary.

Lists
A List can be sorted and searched in the same fashion as an array. The static methods to sort and search a List are contained in the class Collections, but they have similar signatures as the ones in Arrays: sort(List) to sort a List of objects that implement Comparable, binarySearch(List, Object) to find an object in the list, sort(List, Comparator) to sort a List using a Comparator, and binarySearch(List, Object, Comparator) to find an object in that list. [40] This example uses the previously-defined CompClass and AlphaComp to demonstrate the sorting tools in Collections:
//: ListSort.java
// Sorting and searching Lists with 'Collections'
package c08.newcollections;
import java.util.*;

public class ListSort {
  public static void main(String[] args) {
    final int SZ = 20;
    // Using "natural comparison method":
    List a = new ArrayList();
    for(int i = 0; i < SZ; i++)
      a.add(new CompClass(
        (int)(Math.random() * 100)));
    Collection1.print(a);
    Collections.sort(a);
    Collection1.print(a);
    Object find = a.get(SZ/2);
    int loc = Collections.binarySearch(a, find);
    System.out.println("Location of " + find + " = " + loc);
    // Using a Comparator:
    List b = new ArrayList();
    for(int i = 0; i < SZ; i++)
      b.add(Array1.randString(4));
    Collection1.print(b);
    AlphaComp ac = new AlphaComp();
    Collections.sort(b, ac);
    Collection1.print(b);
    find = b.get(SZ/2);
    // Must use the Comparator to search, also:
    loc = Collections.binarySearch(b, find, ac);
    System.out.println("Location of " + find + " = " + loc);
  }
} ///:~
The use of these methods is identical to the ones in Arrays, but you’re using a List instead of an array.
Utilities
There are a number of other useful utilities in the Collections class, among them min( ) and max( ).
Note that min( ) and max( ) work with Collection objects, not with Lists, so you don’t need to worry about whether the Collection should be sorted or not. (As mentioned earlier, you do need to sort( ) a List or an array before performing a binarySearch( ).)

Making a Collection or Map unmodifiable

Often it is convenient to create a read-only version of a Collection or Map. The Collections class allows you to do this by passing the original container into a method that hands back a read-only version. There are four variations on this method, one each for Collection (if you don’t want to treat a Collection as a more specific type), List, Set, and Map. This example shows the proper way to build read-only versions of each:
//: ReadOnly.java
// Using the Collections.unmodifiable methods
package c08.newcollections;
import java.util.*;

public class ReadOnly {
  public static void main(String[] args) {
    Collection c = new ArrayList();
    Collection1.fill(c); // Insert useful data
    c = Collections.unmodifiableCollection(c);
    Collection1.print(c); // Reading is OK
    //! c.add("one"); // Can't change it
    List a = new ArrayList();
    Collection1.fill(a);
    a = Collections.unmodifiableList(a);
    ListIterator lit = a.listIterator();
    System.out.println(lit.next()); // Reading OK
    //! lit.add("one"); // Can't change it
    Set s = new HashSet();
    Collection1.fill(s);
    s = Collections.unmodifiableSet(s);
    Collection1.print(s); // Reading OK
    //! s.add("one"); // Can't change it
    Map m = new HashMap();
    Map1.fill(m, Map1.testData1);
    m = Collections.unmodifiableMap(m);
    Map1.print(m); // Reading OK
    //! m.put("Ralph", "Howdy!");
  }
} ///:~
In each case, you must fill the container with meaningful data before you make it read-only. Once it is loaded, the best approach is to replace the existing handle with the handle that is produced by the “unmodifiable” call. That way, you don’t run the risk of accidentally changing the contents once you’ve made it unmodifiable. On the other hand, this tool also allows you to keep a modifiable container as private within a class and to return a read-only handle to that container from a method call. So you can change it from within the class but everyone else can only read it.
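The last idiom (a private modifiable container with a read-only view handed out) can be sketched like this; the class and member names are mine, and I use the raw container types of the book's era:

```java
import java.util.*;

public class Roster {
    // The class keeps a private, modifiable list...
    private final List names = new ArrayList();

    public void add(String name) {   // ...which it can change freely,
        names.add(name);
    }

    public List view() {             // ...while callers get a read-only handle.
        return Collections.unmodifiableList(names);
    }

    public static void main(String[] args) {
        Roster r = new Roster();
        r.add("Alice");
        System.out.println(r.view()); // Reading is OK.
        try {
            r.view().add("Bob");      // Modifying through the view fails.
        } catch (UnsupportedOperationException e) {
            System.out.println("view is unmodifiable");
        }
    }
}
```

Note that the view wraps the live list, so later add( ) calls made by the class remain visible through handles returned earlier.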
Calling the “unmodifiable” method for a particular type does not cause compile-time checking, but once the transformation has occurred, any calls to methods that modify the contents of a particular container will produce an UnsupportedOperationException.

Synchronizing a Collection or Map
The synchronized keyword is an important part of the subject of multithreading, a more complicated topic that will not be introduced until Chapter 14. Here, I shall note only that the Collections class contains a way to automatically synchronize an entire container. The syntax is similar to the “unmodifiable” methods:
//: Synchronization.java
// Using the Collections.synchronized methods
package c08.newcollections;
import java.util.*;

public class Synchronization {
  public static void main(String[] args) {
    Collection c =
      Collections.synchronizedCollection(
        new ArrayList());
    List list = Collections.synchronizedList(
      new ArrayList());
    Set s = Collections.synchronizedSet(
      new HashSet());
    Map m = Collections.synchronizedMap(
      new HashMap());
  }
} ///:~

In this case, you immediately pass the new container through the appropriate “synchronized” method; that way, there’s no chance of accidentally exposing the unsynchronized version.
The new collections also have a mechanism to prevent more than one process from modifying the contents of a container. The problem occurs if you’re iterating through a container and some other process steps in and inserts, removes, or changes an object in that container. Maybe you’ve already passed that object, maybe it’s ahead of you, maybe the size of the container shrinks after you call size( ) – there are many scenarios for disaster. The new collections library incorporates a fail fast mechanism that looks for any changes to the container other than the ones your process is personally responsible for. If it detects that someone else is modifying the container, it immediately produces a ConcurrentModificationException. This is the “fail-fast” aspect – it doesn’t try to detect a problem later on using a more complex algorithm.
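Here is a minimal demonstration of that fail-fast behavior (my own example, not from the book): removing elements from a list while iterating over it makes the iterator throw on its next step.

```java
import java.util.*;

public class FailFastDemo {
    // Returns true when the iterator detects the structural change.
    static boolean triggersFailFast() {
        List list = new ArrayList(Arrays.asList("a", "b", "c"));
        try {
            Iterator it = list.iterator();
            while (it.hasNext()) {
                Object o = it.next();
                list.remove(o);   // modification behind the iterator's back
            }
        } catch (ConcurrentModificationException e) {
            return true;          // the fail-fast check fired immediately
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("fail-fast triggered: " + triggersFailFast());
    }
}
```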
[37] This chapter was written while Java 1.2 was still in beta, so the diagram does not show the TreeSet class that was added later.
[38] At the time of this writing, TreeSet had only been announced and was not yet implemented, so there are no examples here that use TreeSet.
[39] TreeSet was not available at the time of this writing, but you can easily add a test for it into this example.
[40] At the time of this writing, a Collections.stableSort( ) had been announced, to perform a merge sort, but it was unavailable for testing.
#include <wx/print.h>
This class represents the Windows or PostScript printer, and is the vehicle through which printing may be launched by an application.
Printing can also be achieved through lower-level functions and classes, but this class and its associated classes provide a more convenient and general method of printing.
Constructor.
Pass an optional pointer to a block of print dialog data, which will be copied to the printer object's local data.
Creates the default printing abort window, with a cancel button.
Returns true if the user has aborted the print job.
Return last error.
Valid after calling Print(), PrintDialog() or wxPrintPreview::Print().
These functions set the last error to wxPRINTER_NO_ERROR if no error happened. The returned value is one of wxPRINTER_NO_ERROR, wxPRINTER_CANCELLED, or wxPRINTER_ERROR.
Returns the print data associated with the printer object. If an error occurred, call GetLastError() to get detailed information about the kind of the error.
Invokes the print dialog.
If successful (the user did not press Cancel and no error occurred), a suitable device context will be returned; otherwise NULL is returned; call GetLastError() to get detailed information about the kind of the error.
Default error-reporting function.
Invokes the print setup dialog. | http://docs.wxwidgets.org/3.0/classwx_printer.html | CC-MAIN-2018-34 | refinedweb | 190 | 57.06 |
In this article, we will learn how to parse a JSON response using the requests library. For example, we are using a requests library to send a RESTful GET call to a server, and in return, we are getting a response in the JSON format. Now, let’s see how to parse this JSON data in Python.
We will parse the JSON response into a Python dictionary so you can access JSON data using key-value pairs. You can also pretty-print the JSON in a readable format.
The response of a GET request contains information known as the payload, which we can find in the message body. Use the attributes and methods of Response to view the payload in different formats.
We can access payload data using the following three methods of a requests module.
response.content: used to access payload data in raw bytes format.
response.text: used to access payload data in string format.
response.json(): used to access payload data in JSON-serialized format.
The JSON Response Content
The requests module provides a built-in JSON decoder that we can use when dealing with JSON data. Just execute response.json(), and that's it.
response.json() returns a JSON response in Python dictionary format so we can access JSON using key-value pairs.
Note that JSON decoding can fail, for example on a 204 (No Content) response. response.json() raises an exception in the following scenarios:
- The response doesn’t contain any data.
- The response contains invalid JSON
You must check response.raise_for_status() or response.status_code before parsing JSON because a successful call to response.json() does not indicate the success of the request.

In the case of an HTTP 500 error, some servers may return a JSON object in the failed response (e.g., error details with HTTP 500). So you should execute response.json() only after checking response.raise_for_status() or response.status_code.
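The combined check can be wrapped in a small helper. This is my own sketch, written against the plain json module rather than a live Response object, so it can be read in isolation:

```python
import json

def parse_payload(status_code, body):
    """Decode body as JSON, mirroring the checks recommended above:
    verify the status first, then guard against an invalid payload."""
    if not 200 <= status_code < 300:
        return None                 # error response: do not trust the body
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        return None                 # empty or malformed payload

print(parse_payload(200, '{"origin": "49.35.214.177"}'))  # {'origin': '49.35.214.177'}
print(parse_payload(200, ""))                             # None (empty body)
print(parse_payload(500, "{}"))                           # None (error status)
```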
Let’s see an example of how to use response.json() to parse JSON content.
In this example, I am using httpbin.org to execute a GET call. httpbin.org is a web service that allows test requests and responds with data about the request. You can use this service to test your code.
import requests
from requests.exceptions import HTTPError

try:
    response = requests.get('')
    response.raise_for_status()
    # access JSON content
    jsonResponse = response.json()
    print("Entire JSON response")
    print(jsonResponse)
except HTTPError as http_err:
    print(f'HTTP error occurred: {http_err}')
except Exception as err:
    print(f'Other error occurred: {err}')
Output:
Entire JSON response {'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.21.0'}, 'origin': '49.35.214.177, 49.35.214.177', 'url': ''}
Iterate JSON Response
Let’s see how to iterate all JSON key-value pairs one-by-one.
print("Print each key-value pair from JSON response") for key, value in jsonResponse.items(): print(key, ":", value)
Output:

Print each key-value pair from JSON response
args : {}
headers : {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.21.0'}
origin : 49.35.214.177, 49.35.214.177
url :
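Since the decoded payload is just nested dictionaries, you can also walk it recursively. Here is a small helper of my own (not from the article) that flattens the key paths:

```python
import json

sample = json.loads(
    '{"headers": {"Host": "httpbin.org", "Accept": "*/*"}, "origin": "49.35.214.177"}'
)

def walk(obj, prefix=""):
    """Yield (dotted.path, leaf value) pairs from nested dictionaries."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from walk(value, prefix + key + ".")
    else:
        yield prefix.rstrip("."), obj

for path, value in walk(sample):
    print(path, ":", value)
# headers.Host : httpbin.org
# headers.Accept : */*
# origin : 49.35.214.177
```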
Access JSON key directly from the response using the key name
print("Access directly using a JSON key name") print("URL is ") print(jsonResponse["url"])
Output
URL is
Access Nested JSON key directly from response
print("Access nested JSON keys") print("Host is is ") print(jsonResponse["headers"]["Host"])
Output:
Access nested JSON keys
Host is 
httpbin.org
Greetings,
I’ve been hitting my head against the wall over this for quite a while now
and don’t seem to be able to wrap my aforementioned head around it.
Basically I have a list of users in a selection box and I have two radio
buttons: Sort by first name and sort by last name. This is what I have
in my view:
<%= radio_button_tag 'sort_by_last_name', 'true', :checked => true,
      :onchange => '#{remote_function(:url => {:action => "update_users"},
                                      :with => "order=1")}' %> Sort by Last name

<%= radio_button_tag 'sort_by_last_name', 'false',
      :onchange => '#{remote_function(:url => {:action => "update_users"},
                                      :with => "order=0")}' %> Sort by First name
This is my partial view:
<%= collection_select(nil, :user_id, users, :id, :first_name,
{:prompt => “Select user”}) %>
And this is my controller function for update_users:
def update_users
  if params[:order] == 1
    users = User.all
  else
    users = User.first
  end
  render :update do |page|
    page.replace_html 'users', :partial => 'users', :object => users
  end
end
Now I’ve found a few sources describing on how this should be done, but
I haven’t been able to get them to work for some unknown reason. | https://www.ruby-forum.com/t/problems-with-onchange-for-radio-button-tag/182166 | CC-MAIN-2022-40 | refinedweb | 175 | 60.99 |
Switching to the i3 window manager
Vincent Bernat
I have been using the awesome window manager for 10 years. It is a tiling window manager, configurable and extendable with the Lua language. Using a general-purpose programming language to configure every aspect is a double-edged sword. Due to laziness and the apparent difficulty of adapting my configuration—about 3000 lines—to newer releases, I was stuck with the 3.4 version, whose last release is from 2013.
It was time for a rewrite. Instead, I have switched to the i3 window manager, lured by the possibility to migrate to Wayland and Sway later with minimal pain. Using an embedded interpreter for configuration is not as important to me as it was in the past: it brings both complexity and brittleness.
The window manager is only one part of a desktop environment. There are several options for the other components. I am also introducing them in this post.
i3: the window manager#
i3 aims to be a minimal tiling window manager. Its documentation can be read from top to bottom in less than an hour. i3 organize windows in a tree. Each non-leaf node contains one or several windows and has an orientation and a layout. This information arbitrates the window positions. i3 features three layouts: split, stacking, and tabbed. They are demonstrated in the below screenshot:
Most of the other tiling window managers, including the awesome window manager, use predefined layouts. They usually feature a large area for the main window and another area divided among the remaining windows. These layouts can be tuned a bit, but you mostly stick to a couple of them. When a new window is added, the behavior is quite predictable. Moreover, you can cycle through the various windows without thinking too much as they are ordered.
While i3 is more flexible with its ability to build any layout on the fly, it can feel quite overwhelming, as you need to visualize the tree in your head. At first, it is not unusual to find yourself with a complex tree with many useless nested containers. Moreover, you have to navigate windows using directions. It takes some time to get used to.
I set up a split layout for Emacs and a few terminals, but most of the other workspaces are using a tabbed layout. I don’t use the stacking layout. You can find many scripts trying to emulate other tiling window managers, but I tried to keep my setup free of such attempts and give myself a chance to become familiar with i3 itself. i3 can also save and restore layouts, which is quite a powerful feature.
My configuration is quite similar to the default one and has less than 200 lines.
i3 companion: the missing bits
i3's philosophy is to keep a minimal core and let the user implement missing features using the IPC protocol. I have written a companion script using asyncio and the i3ipc-python library. Each feature is self-contained in a function. It implements the following components:
- make a workspace exclusive to an application
- When a workspace contains Emacs or Firefox, I would like other applications to move to another workspace, except for the terminal which is allowed to “intrude” into any workspace. The
workspace_exclusive() function monitors new windows and moves them if needed to an empty workspace or to one with the same application already running.
- implement a Quake console
- The
quake_console() function implements a drop-down console available from any workspace. It can be toggled with Mod+`. This is implemented as a scratchpad window.
- back and forth workspace switching on the same output
With the workspace back_and_forth command, i3 can switch to the previously focused workspace, but it is not restricted to the current output. The previous_workspace() function restricts the behavior to workspaces on the same output.
- create a new empty workspace or move a window to an empty workspace
To create a new empty workspace or move a window to an empty workspace, you have to locate a free slot and use workspace number 4 or move container to workspace number 4. The new_workspace() function finds a free number and uses it as the target workspace.
- restart some services on output change
- When adding or removing an output, some actions need to be executed: refresh the wallpaper, restart some components unable to adapt their configuration on their own, etc. i3 triggers an event for this purpose. The
output_update() function also takes an extra step to coalesce multiple consecutive events and to check if there is a real change with the low-level library xcffib.
I will detail the other features as this post goes on. On the technical side, each function is decorated with the events it should react to:
@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS) async def previous_workspace(i3, event): """Go to previous workspace on the same output."""
The CommandEvent() event class is my way to send a command to the companion, using either i3-msg -t send_tick or binding a key to a nop command. The latter is used to avoid spawning a shell and an i3-msg process just to send a message. The companion listens to binding events and checks if this is a nop command.
bindsym $mod+Tab nop "previous-workspace"
There are other decorators to avoid code duplication: @debounce() to coalesce multiple consecutive calls, @static() to define a static variable, and @retry() to retry a function on failure. The whole script is a bit more than 1000 lines. I think this is worth a read as I am quite happy with the result. 🦚
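The post does not show how @debounce() is implemented. A possible asyncio version (simplified, with names of my own choosing) cancels the pending call whenever a new one supersedes it:

```python
import asyncio
import functools

def debounce(delay=0.2):
    """Coalesce bursts of calls: only the last call within delay seconds runs."""
    def decorator(fn):
        pending = None
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            nonlocal pending
            if pending is not None:
                pending.cancel()           # drop the superseded call
            async def fire():
                await asyncio.sleep(delay)
                await fn(*args, **kwargs)
            pending = asyncio.ensure_future(fire())
        return wrapper
    return decorator

# Three rapid "events" result in a single handler invocation.
calls = []

@debounce(0.05)
async def on_event(n):
    calls.append(n)

async def main():
    for n in (1, 2, 3):
        await on_event(n)
    await asyncio.sleep(0.2)   # let the debounced call fire

asyncio.run(main())
print(calls)   # [3]
```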
dunst: the notification daemon
Unlike the awesome window manager, i3 does not come with a built-in
notification system. Dunst is a lightweight notification daemon. I
am running a modified version with HiDPI support for X11 and
recursive icon lookup. The i3 companion has a helper function,
notify(), to send notifications using DBus.
container_info() and workspace_info() use it to display information about the container or the tree for a workspace.
Notification showing the i3 tree for a workspace.
polybar: the status bar
i3 bundles i3bar, a versatile status bar, but I have opted for Polybar. A wrapper script runs one instance for each monitor.
The first module is the built-in support for i3 workspaces. To not
have to remember which application is running in a workspace, the i3
companion renames workspaces to include an icon for each application.
This is done in the
workspace_rename() function. The icons are from
the Font Awesome project. I maintain a mapping between applications
and icons. This is a bit cumbersome but it looks great.
For CPU, memory, brightness, battery, disk, and audio volume, I am relying on the built-in modules. Polybar’s wrapper script generates the list of filesystems to monitor and they get only displayed when available space is low. The battery widget turns red and blinks slowly when running out of power. Check my Polybar configuration for more details.
For Bluetooth, network, and notification statuses, I am using Polybar’s ipc module. It can be updated on demand with polybar-msg.
The middle of the bar is occupied by the date and a weather forecast. The latest also uses the IPC mechanism, but the source is a Python script triggered by a timer.
I don’t use the system tray integrated with Polybar. The embedded icons usually look horrible and they all behave differently. A few years back, Gnome has removed the system tray. Most of the problems are fixed by the DBus-based Status Notifier Item protocol—also known as Application Indicators or Ayatana Indicators for GNOME. However, Polybar does not support this protocol. In the i3 companion, the implementation of Bluetooth and network icons, including displaying notifications on change, takes about 200 lines. I got to learn a bit about how DBus works and I get exactly the info I want.
picom: the compositor
I like having slightly transparent backgrounds for terminals and to reduce the opacity of unfocused windows. This requires a compositor.1 picom is a lightweight compositor. It works well for me, but it may need some tweaking depending on your graphic card.2 Unlike the awesome window manager, i3 does not handle transparency, so the compositor needs to decide by itself the opacity of each window. Check my configuration for details.
systemd: the service manager
I use systemd to start i3 and the various services around it. My
xsession script only sets some environment variables and lets
systemd handles everything else. Have a look at this article from
Michał Góral for the rationale. Notably, each component can be
easily restarted and their logs are not mangled inside the
~/.xsession-errors file.3
I am using a two-stage setup: i3.service depends on xsession.target to start the services required before i3 itself.
Then, i3 executes the second stage by invoking the
i3-session.target:
[Unit]
Description=i3 session
BindsTo=graphical-session.target
Wants=wallpaper.service
Wants=wallpaper.timer
Wants=polybar-weather.service
Wants=polybar-weather.timer
Wants=polybar.service
Wants=i3-companion.service
Wants=misc-x.service
Have a look at my configuration files for more details.
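For comparison, a unit for one of the wanted services could look like this (a hypothetical sketch, not the author's actual file):

```ini
[Unit]
Description=Polybar status bar
PartOf=graphical-session.target
After=i3-session.target

[Service]
ExecStart=%h/.config/polybar/launch.sh
Restart=on-failure

[Install]
WantedBy=i3-session.target
```

Because the unit is wanted by i3-session.target, it starts with the session, and PartOf= ensures it is stopped when the graphical session goes away.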
rofi: the application launcher
Rofi is an application launcher. Its appearance can be customized through a CSS-like language and it comes with several themes. Have a look at my configuration for mine.
It can also act as a generic menu application. I have a script to control a media player and another one to select the wifi network. It is quite a flexible application.
xss-lock and i3lock: the screen locker
i3lock is a simple screen locker. xss-lock invokes it reliably
on inactivity or before a system suspend. For inactivity, it uses the
the XScreenSaver events. The delay is configured using the xset s command.
The remaining components
autorandr is a tool to detect the connected display, match them against a set of profiles, and configure them with
xrandr.
inputplug executes a script for each new mouse and keyboard plugged. This is quite useful to load the appropriate keyboard map. See my configuration.
xsettingsd provides settings to X11 applications, not unlike xrdb but it notifies applications for changes. The main use is to configure the Gtk and DPI settings. See my article on HiDPI support on Linux with X11.
Redshift adjusts the color temperature of the screen according to the time of day.
maim is a utility to take screenshots. I use Prt Scn to trigger a screenshot of a window or a specific area and Mod+Prt Scn to capture the whole desktop to a file. Check the helper script for details.
I have a collection of wallpapers I rotate every hour. A script selects them using advanced machine learning algorithms and stitches them together on multi-screen setups. The selected wallpaper is reused by i3lock.
Apart from the eye candy, a compositor also helps to get tear-free video playbacks. ↩︎
My configuration works with both Haswell (2014) and Whiskey Lake (2018) Intel GPUs. It also works with AMD GPU based on the Polaris chipset (2017). ↩︎
You cannot manage two different displays this way—e.g. :0 and :1. In the first implementation, I did try to parametrize each service with the associated display, but this is useless: there is only one DBus user session and many services rely on it. For example, you cannot run two notification daemons. ↩︎
I have only discovered later that XSecureLock ships such a dimmer with a similar implementation. But mine has a cool countdown! ↩︎ | https://vincent.bernat.ch/en/blog/2021-i3-window-manager | CC-MAIN-2021-39 | refinedweb | 1,866 | 57.16 |
Sound and ui modules not playing well together
I am trying to trigger a custom view to redraw, then to play a series of tones (a chord), then retrigger the display, rinse and repeat.
The chords play in sequence, but the display is locked until the last of the tones of the last chord has finished playing. Then it updates to the last of the displays that should have shown. Seems very asynchronous. Is there any simple way (Objective-C is considered semi-simple) to tell when a sound is finished playing?
Reading the iOS bibles, it seems I need access to the
audioPlayerDidFinishPlaying:successfully: method in the delegate. Any chance the delegate is a hidden "feature" of sound, or one I can "hack" into?
What does your code for playing the series of tones look like?
def playProgression(button):
    if os.path.exists('waves'):
        if not model._InstrumentOctave:
            return
        else:
            baseOctave = model._InstrumentOctave
        strings = model._InstrumentTuning
        for chordNumber in range(len(model._ProgFingerings)):
            # here is where I inserted code to trigger a redraw of a custom view.
            # the redraw happens when this loop finished
            thisFingering = model._ProgFingeringsPointers[chordNumber]
            cc = model._ProgFingerings[chordNumber][thisFingering]
            frets = cc[2]
            dead_notes = [item[3] == 'X' for item in cc[0]]
            tones = []
            for fret, string, dead_note in zip(frets, strings, dead_notes):
                if dead_note:
                    continue
                octave, tone = divmod(string + fret, 12)
                tones.append((tone, octave + baseOctave))
            for tone, octave in tones:
                sound.play_effect(getWaveName(tone, octave))
                time.sleep(model.play_arpSpeed * 0.25)
            time.sleep(3 * model.play_arpSpeed)  # rest between chords
            # the "chords" play just fine as well as the final sleeps between chords.
have you tried using the sound.Player.finished_handler?
This seems to be an undocumented feature of the Player class
import sound, ui

v = ui.View()
v.bg_color = 'red'
v.present()
p = sound.Player('piano:A3')

def g():
    v.bg_color = 'green'
    p.finished_handler = None
    p.play()

def f():
    v.bg_color = 'blue'
    p.finished_handler = g
    p.play()

p.finished_handler = f
p.play()
@JonB. YES!!!!! That's what I hoped was out there. Will try it tomorrow when I'm actually awake and functional!!! Where do you find these things?
As an alternative to using Player.finished_handler (which would probably require pretty significant changes in your code), you could also decorate your playProgression function with ui.in_background, like this:
@ui.in_background
def playProgression(button):
    # ...
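For readers outside Pythonista: ui.in_background essentially pushes the decorated function off the UI thread. A rough stdlib approximation (my own sketch, not the real implementation) looks like this:

```python
import threading
import functools

def in_background(fn):
    """Run fn on a daemon thread so the calling (UI) thread never blocks."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t = threading.Thread(target=fn, args=args, kwargs=kwargs, daemon=True)
        t.start()
        return t               # caller may join() if it needs to wait
    return wrapper

# The decorated function returns to the caller immediately.
done = threading.Event()

@in_background
def slow_job():
    done.set()

slow_job()
done.wait(timeout=1)
print("finished:", done.is_set())   # finished: True
```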
@polymerchm I was checking if Player had an _obj_ptr to see if it could be bridged, and noticed finished_handler in the autocomplete list...
@omz Worked as advertised. I always forget that trick. @JonB: Will try that in other situations. I guess I need to pay careful attention to the autocomplete for other tidbits. Thanks. | https://forum.omz-software.com/topic/3739/sound-and-ui-modules-not-playing-well-together/2 | CC-MAIN-2021-25 | refinedweb | 440 | 52.56 |
Solution for Programming Exercise 6.5
THIS PAGE DISCUSSES ONE POSSIBLE SOLUTION to the following exercise from this on-line Java textbook:
Discussion
To write this applet, you need to understand dragging, as discussed in Section 6.4. To support dragging, you have to implement both the MouseListener and MouseMotionListener interfaces and register some object to listen for both mouse and mouse motion events. The code for dragging a square is spread out over three methods, mousePressed, mouseReleased, and mouseDragged. Several instance variables are needed to keep track of what is going on while a dragging operation is being executed. A general framework for dragging is given in Section 6.4. This example is simplified a bit because while dragging the square, we only need to know the current position of the mouse so that we can move the square to that position. We don't need to keep track of the previous position of the mouse.
As always for any implementation of dragging, I use a boolean variable, dragging, to keep track of whether or not a drag operation is in progress. Not every mouse press starts a drag operation. If the use clicks the applet outside of the squares, there is nothing to drag. Since there are two squares to be dragged, we have to keep track of which is being dragged. I use a boolean variable, dragRedSquare, which is true if the red square is being dragged and is false if the blue square is being dragged. (A boolean variable is actually not the best choice in this case. It would be a problem if we wanted to add another square. A boolean variable only has two possible values, so an integer variable would probably be a better choice.) I keep track of the locations of the squares with integer instance variables x1 and y1 for the upper left corner of the red square and x2 and y2 for the upper left corner of the blue square.
There is one little problem. The mouse location is a single (x,y) point. A square occupies a whole bunch of points. When we move the square to follow the mouse, where exactly should we put the square? One possibility is to put the upper left corner of the square at the mouse location. If we did this, the mouseDragged routine would look like:

public void mouseDragged(MouseEvent evt) {
   if (dragging == false)
      return;
   int x = evt.getX();   // Get mouse position.
   int y = evt.getY();
   if (dragRedSquare) {  // Move the red square.
      x1 = x;   // Put top-left corner at mouse position.
      y1 = y;
   }
   else {   // Move the blue square.
      x2 = x;   // Put top-left corner at mouse position.
      y2 = y;
   }
   repaint();
}
This works, but it is not very aesthetic. When the user starts dragging a square, no matter where in the square the user clicks, the square will jump so that its top-left corner is at the mouse position. This is not what a user typically expects. If I grab a square by clicking its center, then I want the center to stay under the mouse cursor as I move it. If I grab the lower right corner, I want the lower right corner to follow the mouse, not the upper left corner. There is a solution to this, and it's one that is often needed for dragging operations. We need to record the original position of the mouse relative to the upper left corner of the square. This tells us where in the square the user clicked. This is done in the mousePressed routine by assigning appropriate values to the instance variables offsetX and offsetY.
In mouseDragged, when the mouse moves to a new (x,y) point, we move the square so that the vertical and horizontal distances between the mouse location and the top left corner of the square remain the same:

if (dragRedSquare) {  // Move the red square.
   x1 = x - offsetX;  // Offset corner from mouse location.
   y1 = y - offsetY;
}
else {   // Move the blue square.
   x2 = x - offsetX;  // Offset corner from mouse location.
   y2 = y - offsetY;
}
There is, as usual, the question of how to divide the responsibilities of the program between the main applet class and the nested class that represents the drawing surface. In this case, I used a very simple anonymous nested class for the drawing surface. You will find this class in the applet's init() method. The only method in the anonymous class is the paintComponent() method that does the drawing.
All this leads to the complete source code, shown below.
By the way, if you wanted to stop the user from dragging the square outside the applet, you would just have to add code to the mouseDragged routine to "clamp" the variables x1, y1, x2, and y2 so that they lie in the acceptable range. Here is a modified routine that keeps the square entirely within the applet:

public void mouseDragged(MouseEvent evt) {
   if (dragging == false)
      return;
   int x = evt.getX();
   int y = evt.getY();
   if (dragRedSquare) {  // Move the red square.
      x1 = x - offsetX;
      y1 = y - offsetY;
      if (x1 < 0)  // Clamp (x1,y1) so the square lies in the applet.
         x1 = 0;
      else if (x1 >= getSize().width - 30)
         x1 = getSize().width - 30;
      if (y1 < 0)
         y1 = 0;
      else if (y1 >= getSize().height - 30)
         y1 = getSize().height - 30;
   }
   else {   // Move the blue square.
      x2 = x - offsetX;
      y2 = y - offsetY;
      if (x2 < 0)  // Clamp (x2,y2) so the square lies in the applet.
         x2 = 0;
      else if (x2 >= getSize().width - 30)
         x2 = getSize().width - 30;
      if (y2 < 0)
         y2 = 0;
      else if (y2 >= getSize().height - 30)
         y2 = getSize().height - 30;
   }
   repaint();
}
The Solution
/*
   An applet showing a red square and a blue square that the user
   can drag with the mouse.  The user can drag the squares off the
   applet and drop them.  There is no way of getting them back.
*/

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class DragTwoSquares extends JApplet
                  implements MouseListener, MouseMotionListener {

   int x1, y1;   // Coords of top-left corner of the red square.
   int x2, y2;   // Coords of top-left corner of the blue square.

   /* Some variables used during dragging */

   boolean dragging;        // Set to true when a drag is in progress.

   boolean dragRedSquare;   // True if red square is being dragged, false
                            //    if blue square is being dragged.

   int offsetX, offsetY;    // Offset of mouse-click coordinates from
                            //    top-left corner of the square that was
                            //    clicked.

   JPanel drawSurface;  // This is the panel on which the actual
                        // drawing is done.  It is used as the
                        // content pane of the applet.  It actually
                        // belongs to an anonymous class which is
                        // defined in place in the init() method.

   public void init() {
         // Initialize the applet by putting the squares in a
         // starting position and creating the drawing surface
         // and installing it as the content pane of the applet.

      x1 = 10;  // Set up initial positions of the squares.
      y1 = 10;
      x2 = 50;
      y2 = 10;

      drawSurface = new JPanel() {
             // This anonymous inner class defines the drawing
             // surface for the applet.
         public void paintComponent(Graphics g) {
               // Draw the two squares and a black frame
               // around the panel.
            super.paintComponent(g);  // Fill with background color.
            g.setColor(Color.red);
            g.fillRect(x1, y1, 30, 30);
            g.setColor(Color.blue);
            g.fillRect(x2, y2, 30, 30);
            g.setColor(Color.black);
            g.drawRect(0, 0, getSize().width - 1, getSize().height - 1);
         }
      };

      drawSurface.setBackground(Color.lightGray);
      drawSurface.addMouseListener(this);
      drawSurface.addMouseMotionListener(this);
      setContentPane(drawSurface);

   } // end init();

   public void mousePressed(MouseEvent evt) {
          // Respond when the user presses the mouse on the panel.
          // Check which square the user clicked, if any, and start
          // dragging that square.

      if (dragging)  // Exit if a drag is already in progress.
         return;

      int x = evt.getX();  // Location where user clicked.
      int y = evt.getY();

      if (x >= x2 && x < x2+30 && y >= y2 && y < y2+30) {
             // It's the blue square (which is drawn on top of
             // the red square, so it is tested first).
         dragging = true;
         dragRedSquare = false;
         offsetX = x - x2;  // Distance from corner of square to click point.
         offsetY = y - y2;
      }
      else if (x >= x1 && x < x1+30 && y >= y1 && y < y1+30) {
             // It's the red square.
         dragging = true;
         dragRedSquare = true;
         offsetX = x - x1;
         offsetY = y - y1;
      }

   }

   public void mouseReleased(MouseEvent evt) {
          // Dragging stops when user releases the mouse button.
      dragging = false;
   }

   public void mouseDragged(MouseEvent evt) {
           // Respond when the user drags the mouse.  If a square is
           // not being dragged, then exit.  Otherwise, change the position
           // of the square that is being dragged to match the position
           // of the mouse.  Note that the corner of the square is placed
           // in the same position with respect to the mouse that it had
           // when the user started dragging it.

      if (dragging == false)
         return;

      int x = evt.getX();
      int y = evt.getY();

      if (dragRedSquare) {  // Move the red square.
         x1 = x - offsetX;
         y1 = y - offsetY;
      }
      else {    // Move the blue square.
         x2 = x - offsetX;
         y2 = y - offsetY;
      }

      drawSurface.repaint();

   }

   public void mouseMoved(MouseEvent evt) { }
   public void mouseClicked(MouseEvent evt) { }
   public void mouseEntered(MouseEvent evt) { }
   public void mouseExited(MouseEvent evt) { }

} // end class DragTwoSquares
Command Line Arguments in Java
Once upon a time, most Java programmers used a text-based development interface. They typed a command in a plain-looking window, usually with white text on a black background.
The plain-looking window goes by various names, depending on the kind of operating system that you use.
In the image above, the programmer types java MakeRandomNumsFile to run the MakeRandomNumsFile program. But the programmer follows java MakeRandomNumsFile with two extra pieces of information: MyNumberedFile.txt and 5. When the MakeRandomNumsFile program runs, the program sucks up two extra pieces of information and uses them to do whatever the program has to do. The program sucks up MyNumberedFile.txt 5, but on another occasion the programmer might type SomeStuff 28 or BunchONumbers 2000. The extra information can be different each time you run the program.
The next question is, “How does a Java program know that it’s supposed to snarf up extra information each time it runs?” Since you first started working with Java, you’ve been seeing this String args[] business in the header of every main method. Well, it’s high time you found out what that’s all about. The parameter args[] is an array of String values. These String values are called command line arguments.

Some programmers write

public static void main(String args[])

and other programmers write

public static void main(String[] args)

Either way, args is an array of String values.
Using command line arguments in a Java program
This bit of code shows you how to use command line arguments.
This is how you generate a file of numbers
import java.util.Random;
import java.io.PrintStream;
import java.io.IOException;
public class MakeRandomNumsFile {
public static void main(String args[]) throws IOException {
Random generator = new Random();
if (args.length < 2) {
System.out.println("Usage: MakeRandomNumsFile filename number");
System.exit(1);
}
PrintStream printOut = new PrintStream(args[0]);
int numLines = Integer.parseInt(args[1]);
for (int count = 1; count <= numLines; count++) {
printOut.println(generator.nextInt(10) + 1);
}
printOut.close();
}
}
If a particular program expects some command line arguments, you can’t start the program running the same way you’d start most of the other normal programs. The way you feed command line arguments to a program depends on the IDE that you’re using — Eclipse, NetBeans, or whatever. Allmycode.com has instructions for feeding arguments to programs using various IDEs.
When the code begins running, the args array gets its values. With the run shown in the image above, the array component args[0] automatically takes on the value "MyNumberedFile.txt", and args[1] automatically becomes "5". So the program’s assignment statements end up having the following meaning:

PrintStream printOut = new PrintStream("MyNumberedFile.txt");
int numLines = Integer.parseInt("5");
The program creates a file named MyNumberedFile.txt and sets numLines to 5. So later in the code, the program randomly generates five values and puts those values into MyNumberedFile.txt. One run of the program gives you a file containing five random values between 1 and 10.
After running the code, where can you find the new file (MyNumberedFile.txt) on your hard drive? The answer depends on a lot of different things. If you use an IDE with programs divided into projects, then the new file is somewhere in the project’s folder. One way or another, you can change Listing 11-7 to specify a full path name — a name like "c:\\Users\\MyName\\Documents\\MyNumberedFile.txt" or "/Users/MyName/Documents/MyNumberedFile.txt".
In Windows, file path names contain backslash characters. And in Java, when you want to indicate a backslash inside a double-quoted String literal, you use a double backslash instead. That’s why “c:\\Users\\MyName\\Documents\\MyNumberedFile.txt” contains pairs of backslashes. In contrast, file paths in the Linux and Macintosh operating systems contain forward slashes. To indicate a forward slash in a Java String, use only one forward slash.
Notice how each command line argument is a String value. When you look at args[1], you don’t see the number 5 — you see the string "5" with a digit character in it. Unfortunately, you can’t use that "5" to do any counting. To get an int value from "5", you have to apply the parseInt method.

The parseInt method lives inside a class named Integer. So, to call parseInt, you preface the name parseInt with the word Integer. The Integer class has all kinds of handy methods for doing things with int values.
In Java, Integer is the name of a class, and int is the name of a primitive (simple) type. The two things are related, but they’re not the same. The Integer class has methods and other tools for dealing with int values.
Checking for the right number of command line arguments
What happens if the user makes a mistake? What if the user forgets to type the number 5 on the first line when you launch MakeRandomNumsFile? Then the computer assigns "MyNumberedFile.txt" to args[0], but it doesn’t assign anything to args[1]. This is bad. If the computer ever reaches the statement

int numLines = Integer.parseInt(args[1]);

the program crashes with an unfriendly ArrayIndexOutOfBoundsException.

What do you do about this? You check the length of the args array. You compare args.length with 2. If the args array has fewer than two components, you display a message on the screen and exit from the program.
Despite the checking of args.length, the code still isn’t crash-proof. If the user types five instead of 5, the program takes a nosedive with a NumberFormatException. The second command line argument can’t be a word. The argument has to be a number (and a whole number, at that). You can add statements to make the code more bulletproof.
When you’re working with command line arguments, you can enter a String value with a blank space in it. Just enclose the value in double quote marks. For instance, you can run the code above with arguments "My Big Fat File.txt" 7.
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On 17 Feb 2016 19:44, Carlos O'Donell wrote:
> On 02/17/2016 05:20 PM, Mike Frysinger wrote:
> > On 17 Feb 2016 16:43, Carlos O'Donell wrote:
> >> It's a very good idea. I think we should stack protect libresolv, libdl,
> >> nscd, etc, and we do already. Extending that is only going to be a good
> >> thing.
> >
> > on a related note, seems like nscd should take advantage of seccomp &
> > namespaces when available. that would also significantly mitigate on
> > systems. any reason to not ?
>
> I see no reason why not. We would have to test for the availability of
> that functionality in as old a kernel as we support running on, but
> as newer kernels are booted the features should just turn on automatically.

we'd always need to do runtime testing for features since people can
disable both in their configs. doing the actual testing is pretty easy
as they will return an error (EINVAL) if it's old/disabled. i've created
some bugs and linked to them in the wiki's TODO at least.

> For now we've just been using SELinux in nscd to restrict the damage the
> daemon could do, but it could potentially be restricted even further.

unfortunately SELinux is not as wide spread/adopted as one might hope.
-mike
NAME
aio_suspend - wait for asynchronous I/O operation or timeout
SYNOPSIS
#include <aio.h>

int aio_suspend(const struct aiocb * const aiocb_list[], int nitems,
                const struct timespec *timeout);

Link with -lrt.
DESCRIPTION

The aio_suspend() function suspends the calling thread until at least one of the asynchronous I/O requests in the list aiocb_list has completed, a signal is delivered, or timeout is not NULL and the specified time interval has passed. The nitems argument specifies the number of items in aiocb_list. (See aio(7) for an overview of POSIX asynchronous I/O.)
VERSIONS
The aio_suspend() function is available since glibc 2.1.
CONFORMING TO
POSIX.1-2001, POSIX.1-2008.
NOTES

One can achieve polling by specifying a non-NULL timeout that specifies a zero time interval.
SEE ALSO
aio_cancel(3), aio_error(3), aio_fsync(3), aio_read(3), aio_return(3), aio_write(3), lio_listio(3), aio(7), time(7)
COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. 2010-10-02 AIO_SUSPEND(3)
quoter 1.0.3
A simple way to quote and wrap text, especially when you're constructing multi-level quoted strings, such as Unix command line arguments, SQL commands, or HTML attributes.
So this module provides an clean, consistent, higher-level alternative. Beyond just a better API, it also provides a mechanism to pre-define quoting styles that can then be later easily reused.
Usage
from quoter import * print single('this') # 'this' print double('that') # "that" print backticks('ls -l') # `ls -l` print braces('curlycue') # {curlycue}
It pre-defines callable Quoters for a handful of the most common quoting styles:
- single, double, backticks, braces, and brackets, among others.

But there are many more quoting constructs seen in markup, programming, and templating languages. So quoter couldn't possibly provide an option for every possible quoting style. Instead, it provides a general-purpose mechanism for defining your own:
from quoter import Quoter bars = Quoter('|') print bars('x') # |x| plus = Quoter('+','') print plus('x') # +x para = Quoter('<p>', '</p>') print para('this is a paragraph') # <p>this is a paragraph</p> variable = Quoter('${', '}') print variable('x') # ${x}
Note that bars is specified with just one symbol. If only one is given, the prefix and suffix are considered to be identical. If you really only want a prefix or a suffix, and not both, then instantiate the Quoter with two arguments, one of which is an empty string, as in plus above. For symmetrical quotes, where the length of the prefix and the suffix are the same, you can specify the prefix and suffix all in one go: the prefix will be the first half of the string, and the suffix the second half.
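The one-symbol behavior described above is easy to picture with a tiny from-scratch sketch. This is an illustration of the idea only, not quoter's actual implementation:

```python
def make_quoter(prefix, suffix=None):
    """Return a callable that wraps its argument in prefix/suffix.
    When only one symbol is given, reuse it on both sides."""
    if suffix is None:
        suffix = prefix
    def quoter(value):
        return '%s%s%s' % (prefix, value, suffix)
    return quoter

bars = make_quoter('|')        # same symbol on both sides
plus = make_quoter('+', '')    # prefix only, empty suffix
assert bars('x') == '|x|'
assert plus('x') == '+x'
```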
In most cases, it's cleaner and more efficient to define a style, but there's nothing preventing you from an on-the-fly usage:
print Quoter('+[ ', ' ]+')('something')  # +[ something ]+
Quoter.options.encoding = 'utf-8' print curlydouble('something something')
Now curlydouble will output UTF-8 bytes. But in general, you should work in Unicode strings in Python, encoding or decoding only at the time of input and output, not as each piece of content is constructed.
Clean Imports
As an organizational assist, quoters are available as named attributes of a pre-defined quote object. For those who like strict, minialist imports, this permits from quoter import quote without loss of generality. For example:
from quoter import quote quote.double('test') # "test" quote.braces('test') # {test} # ...and so on...
Each of these can also serve like an instance of an enumerated type, specifying for a later time what kind of quoting you'd like. Then, at the time that quoter is needed, it can simply be called. E.g.:
preferred_quoting = quote.brackets ... print preferred_quoting(data)
HTML
There is an extended quoting mode designed for XML and HTML construction.
Instead of prefix and suffix strings, they use tag names. Or more accurately, tag specifications. Like jQuery, HTMLQuoter supports id and class attributes in a style similar to that of CSS selectors. This is a considerable help in Python, which defines and/or reserves some of the attribute names most used in HTML (to wit, class and id). Using the CSS selector style neatly gets around this annoyance--and is more compact to boot.
HTML quoting also understands that some elements are 'void', meaning they do not want or need closing tags.
So for example:
from quoter import * print html.p('this is great!', {'class':'emphatic'}) print html.p('this is great!', '.emphatic') print html.p('First para!', '#first')
You can also define your own customized quoters which can be called functionally or, if you name them, via the html. front-end:

para_e = HTMLQuoter('p.emphatic', name='para_e')
print para_e('this is great!')
print html.para_e('this is great?', '.question')
XML
There is also an XMLQuoter with an xml front-end. It offers one additional attribute beyond HTMLQuoter: ns for namespaces. Thus:
item = XMLQuoter(tag='item', ns='inv', name='item inv_item') print item('an item') print xml.item('another') print xml.inv_item('yet another') print xml.thing('something')
yields:
<inv:item>an item</inv:item> <inv:item>another</inv:item> <inv:item>yet another</inv:item> <thing>something</thing>
Note that xml.tagname auto-generates quoters just like html.tagname does on first use. There are also pre-defined utility methods such as html.comment() and xml.comment() for commenting purposes.
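An attribute front-end that auto-generates quoters on first use can be mimicked with __getattr__. A minimal sketch, not how the quoter package actually implements it:

```python
class TagNamespace(object):
    """Auto-create a simple tag quoter the first time an attribute is used."""
    def __getattr__(self, tag):
        # Only called when the attribute does not exist yet.
        def quoter(text):
            return '<%s>%s</%s>' % (tag, text, tag)
        setattr(self, tag, quoter)   # cache so later lookups reuse it
        return quoter

xml = TagNamespace()
assert xml.thing('something') == '<thing>something</thing>'
```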
Named Styles
Quoting via the functional API or the attribute-accessed front-ends (quote, html, and xml) is probably the easiest way to go. But there's one more way. If you provide the name of a defined style via the style attribute, that's the style you get. So while quote('something') gives you single quotes by default ('something'), if you invoke it as quote('something', style='double'), you get double quoting as though you had used quote.double(...), double(...), or qd(...). This even works through named front.ends; quote.braces('something', style='double') still gets you "something". If you don't want to be confused by such double-bucky forms, don't use them. The best use-case for named styles is probably when you don't know how something will be quoted (or what tag it will use, in the HTML or XML case), but that decision is made dynamically. Then style=desired_style makes good sense.
Style names are stored in the class of the quoter. So all Quoter instances share the same named styles, as do HTMLQuoter, XMLQuoter, and LambdaQuoter.

warning = LambdaQuoter(lambda v: ('**', v, '**') if v < 0 else ('', v, ''),
                       name='warning')
print warning(12)   # 12
print warning(-99)  # **-99**
The trick is instantiating LambdaQuoter with a callable (e.g. lambda expression or function) that accepts one value and returns a tuple of three values: the quote prefix, the value (possibly rewritten), and the suffix.
You can access LambdaQuoter named instances through lambdaq (because lambda is a reserved word). Given the code above, lambdaq.warning is active, for example.
LambdaQuoter is an edge case, arcing over towards being a general formatting function. That has the virtue of providing a consistent mechanism for tactical output transformation with built-in margin and padding support. But, one could argue that such full transformations are "a bridge too far" for a quoting module. So use the dynamic component of quoter, or not, as you see fit.
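The three-tuple protocol described above is simple enough to sketch from scratch. This illustrates the protocol only, not the real LambdaQuoter class:

```python
def lambda_quoter(decide):
    """decide(value) must return (prefix, value, suffix); wrap accordingly."""
    def quoter(value):
        prefix, val, suffix = decide(value)
        return '%s%s%s' % (prefix, val, suffix)
    return quoter

# Flag negative values, pass everything else through unchanged.
warning = lambda_quoter(lambda v: ('**', v, '**') if v < 0 else ('', v, ''))
assert warning(12) == '12'
assert warning(-99) == '**-99**'
```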
Notes
- quoter provides simple transformations that could be alternatively implemented as a series of small functions. The problem is that such "little functions" tend to be constantly re-implemented, in different ways, and spread through many programs. That need to constantly re-implement such common and straightforward text formatting has led me to re-think how software should format text. quoter is one facet of a project to systematize higher-level formatting operations. See say and show for the larger effort.
- quoter is also a test case for options, a module that supports flexible option handling. In fact, it is one of options' most extensive test cases, in terms of subclassing and dealing with named styles.
- In the future, additional quoting styles such as ones for Markdown or RST format styles might appear. It's not hard to subclass Quoter for new languages.
- Automated multi-version testing is managed with the magnificent pytest and tox. Now successfully packaged for, and tested against, Python 2.6, 2.7, 3.2, and 3.3, as well as PyPy 2.1 (based on 2.7.3).
- The author, Jonathan Eunice or @jeunice on Twitter welcomes your comments and suggestions.
Installation
pip install -U quoter
To easy_install under a specific Python version (3.3 in this example):
python3.3 -m easy_install --upgrade quoter
(You may need to prefix these with "sudo " to authorize installation.)
- Author: Jonathan Eunice
- Keywords: quote wrap prefix suffix endcap
- Categories
- Development Status :: 4 - Beta
- Intended Audience :: Developers
- License :: OSI Approved :: BSD License
- Operating System :: OS Independent
- Programming Language :: Python
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.2
- Programming Language :: Python :: 3.3
- Programming Language :: Python :: Implementation :: CPython
- Programming Language :: Python :: Implementation :: PyPy
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: Jonathan.Eunice
- DOAP record: quoter-1.0.3.xml
Hello
Is there a way to find the list of all the commands that are not available in the toolbars?
since
commandlist does not allow me to sort the list by existence in the toolbar
thank you
fares
You can find a list of all commands here on the Rhino online help website. You can check any command to see whether it is available on any toolbar or not. Here is a small script that prints these commands:
import bs4
import requests

url = "..."  # fill in the URL of the Rhino command-list help page
html_content = requests.get(url).text
soup = bs4.BeautifulSoup(html_content, "html.parser")
for heading in soup.find_all("h5"):
    if heading.find("img", attrs={"title": "Not on toolbars."}):
        for content in heading.find("a").contents:
            if isinstance(content, bs4.element.NavigableString):
                print(content)
NotOnToolbars.txt (2.8 KB)
thank you mehdiar
this listing is exactly what i need
I’m wondering why the list is useful to you? Thanks.
yes indeed, it is very useful for me.
being that I am more visual when working with rhino, I tend to easily remember icons and their locations, I work 98% with icons .
icons are useful for discovering rhino, you just need to see or hover over an icon to be curious.
but if the commands are hidden there is not much chance to discover the potential and capabilities that these commands offer to the user.
the goal for me. is to sort and choose the commands that integrate well with my workflow, to then create custom icons for those commands …
believe me, i didn’t know that there were commands available only in command prompt .
I believed that only the test commands have this property.
Thanks for the details.
Up to [DragonFly] / src
Request diff between arbitrary revisions
Keyword substitution: kv
Default branch: MAIN
Add note on move of kernel and modules to boot directory.
Remove some lines warning about 'make upgrade' before 'make installworld'. The upgrade mechanism now ensures that installworld has run.
Merge: Remove some more leftovers from _ntp and add _sdpd where necessary.
Remove some more leftovers from _ntp and add _sdpd where necessary. Reviewed-by: swildner@
Quickly update UPDATING with 1.8 -> 1.9+ documentation.
Fix file-/pathnames which have changed now.
* s/FreeBSD/DragonFly/ * Fix spelling & grammar mistakes. Submitted-by: Trevor Kendall <trevorjkendall@gmail.com>
Remove ports related documentation and adjust disk usage.
Fix spelling mistake. Noticed-by: Trevor Kendall <trevorjk@gmail.com>
* Remove example supfile for dfports. * Add a sentence of documentation about the release example supfiles. * Replace ports/dfports section in the upgrading notes with some information about pkgsrc.
Add -P to the cvs example commands.
Enumerate extra steps needed for someone upgrading from 1.2 to 1.4. Approved-by: "Matthew Dillon" <dillon@crater.dragonflybsd.org>
- Note that 'make upgrade' not only upgrades /etc but the whole system. - Add some words of warning about running 'make upgrade' before 'make installworld'.
Add some words about PAM. Reminder: corecode
Explicitly note that updating from pre-1.2 to PREVIEW or HEAD is not supported, the intermediate step to 1.2 is required.
Fix CVS root directory name. Noticed by: Thomas E. Spanjaard <tgen@netphreax.net>
Fix supfile name. Noticed by: Thomas E. Spanjaard <tgen@netphreax.net>
Reduce foot-shooting potential by adding -dP for cvs update and -P for checkout. Remove the paragraph about /usr/dfports/distfiles, there's no need for such a thing.
Correct spelling.
Switch to OpenNTPD by default. For the moment, the documentation is not in-sync, -S [Do not set the time immediately], because when a default route exists, but DNS traffic is lost, the startup time is *big*. You can change this by overriding ntpd_flags in /etc/rc.conf.
Add a section to UPDATING describing the users and groups that might have to be added. Remove the authpf user requirement. Only an authpf group is required. Reported-by: esmith <esmith@patmedia.net>
Revamp UPDATING with separate instructions for upgrading from sources on a FreeBSD platform, installing fresh from the CD, and installing/upgrading from sources on an existing DragonFly platform.
clarify the solution for typical build snafus in UPDATING.
rename /usr/dports to /usr/dfports. Includes some CVS surgery.
Linux emulation has been working well for a while now, remove notice.
Add UPDATING note on /usr/dports
First stab at our own UPDATING file. If you make any change worth mentioning to people using the source tree, feel free to add to the text. Do not worry about spelling, layout or the like, it will get fixed up shortly afterwards.
import from FreeBSD RELENG_4 1.73.2.81
>> from PA import A, ..... <whatever you need>
I am only using dtml and not writing directly python code !
So I do not believe this is possible.
>> Almost unbelievable.
But true.
But I will have again to do some test.
(I do not have the time right now, and I did most of my
test with the previous version of Zope)
What I will have to do:
- create a Package (Zope product).
- create a class in this package, and a method (my_method)
- create another package.
- create a class in this package that inherits the class of the other package.
- check that the method my_method is visible from within the second package.
Thierry Nabeth
Research Fellow
INSEAD CALT (the Centre for Advanced Learning Technologies)
-----Original Message-----
From: Dieter Maurer [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 18, 2000 9:49 PM
To: NABETH Thierry
Cc: '[EMAIL PROTECTED]'
Subject: RE: [Zope] Product inhetitance question (similar question)
NABETH Thierry writes:
> And what happen if the Class A is in a package PA.
> Clabb B is in a package PB.
>
> How do you access the namespace of PA from PB ?
from PA import A, ..... <whatever you need>
class B(A): ....
> When I have tried, the inherited methods from A where not visible
> from B, and I and to recreate a copie of the method.
> (which is ugly !!!).
Almost unbelievable.
Dieter
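Dieter's point is easy to verify with a minimal two-class sketch. It is written as a single file here for brevity; splitting A and B into separate packages PA and PB, as in the thread, changes only the import line, not method visibility:

```python
class A(object):
    """In the thread's layout this class would live in package PA."""
    def my_method(self):
        return "defined in A"

# In package PB this module would begin with: from PA import A
class B(A):
    """Inherits everything public from A; no copies of methods needed."""
    pass

assert B().my_method() == "defined in A"   # inherited method is visible
```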
With eCommerce becoming more mainstream, companies are shipping goods directly to their consumers now more than ever. Once consumers buy something online, they want to know where their order is in the fulfillment process and when it should arrive, and that’s where Twilio and EasyPost come in handy.
In this tutorial, you’ll see how easy it is to track the movement of shipments with the EasyPost API for Tracking, and programmatically notify individuals via the Twilio SMS API and the Flask framework for Python.
Below is an example of the type of notifications that you’ll be sending automatically with this app:
Tutorial requirements
To follow this tutorial you need the following components:
- Python 3.6 or newer. If your operating system does not provide a Python interpreter, you can go to python.org to download an installer.
- Flask. We will create a web application that responds to incoming webhooks from EasyPost with it.
- ngrok. We will use this handy utility to connect the Flask application running on your system to a public URL that EasyPost can connect to. This is necessary for the development version of the notification app because your computer is likely behind a router or firewall, so it isn’t directly reachable on the Internet. If you don’t have ngrok installed, you can download a copy for Windows, MacOS or Linux.
- A Twilio account. If you are new to Twilio create a free account now. Use this link to sign up and you’ll receive a $10 credit when you upgrade to a paid account.
- An EasyPost account. If you are new to EasyPost create a free account now. EasyPost offers a free test environment that will be sufficient to complete this tutorial.
Create a Python virtual environment
Following Python best practices, we are going to make a separate directory for our shipment notification shipment-notifications $ cd shipment-notifications $ python3 -m venv shipment-notifications-venv $ source shipment-notifications-venv/bin/activate (shipment-notifications-venv) $ pip install twilio easypost flask
For those of you following the tutorial on Windows, enter the following commands in a command prompt window:
$ md shipment-notifications $ cd shipment-notifications $ python -m venv shipment-notifications-venv $ shipment-notifications-venv\Scripts\activate (shipment-notifications-venv) $ pip install twilio easypost flask EasyPost Python Client library, to work with the EasyPost APIs
For your reference, at the time this tutorial was released these were the versions of the above packages and their dependencies tested:
certifi==2019.11.28 chardet==3.0.4 Click==7.0 easypost==4.0.0 Flask==1.1.1 idna==2.9 itsdangerous==1.1.0 Jinja2==2.11.1 MarkupSafe==1.1.1 PyJWT==1.7.1 python-dotenv==0.12.0 pytz==2019.3 requests==2.23.0 six==1.14.0 twilio==6.35.5 urllib3==1.25.8 Werkzeug==1.0.0
Create a Flask shipment notification service
Time to start writing some code, so we can delight our customers!
In this tutorial we’re going to build a very basic service to initiate EasyPost tracking on our shipments, receive webhook events, and trigger Twilio notifications. This is fairly straightforward functionality, but along the way I’ll point out some ways that you could expand the functionality to support more sophisticated use cases.
Setting up a webhook
We’ll need to define an endpoint in our application that we share with EasyPost so it knows where to send us shipment notifications.
This is as simple as adding a route to a basic Flask App. Below is an example of how easy it is to create a webhook in Flask:
from flask import Flask app = Flask(__name__) @app.route('/events', methods=['POST']) def events(): # put webhook logic here print("Webhook Received") return '', 204 if __name__ == "__main__": app.run()
If you save this code in a app.py file, you should be able to run the following command in your terminal to get your app running.
(shipment-notifications-venv) $ python app.py * Serving Flask app "app" .
Congrats! In just a few lines of code, you already have a basic Flask app and webhook running locally!
At this point, all your app does is print “Webhook Received” in the terminal when it gets an HTTP POST, but before long it’ll do a lot more.
To make this service reachable from the Internet we need to use ngrok.
Open a second terminal window and run
ngrok http 5000 to allocate a temporary public domain that redirects HTTP requests to our local port 5000. On a Unix or Mac OS computer you may need to use
./ngrok http 5000 if you have the ngrok executable in your current directory. The output of ngrok should be something like this:
Note the lines beginning with “Forwarding”. These show the public URL that ngrok uses to redirect requests into our service. What we need to do now is tell EasyPost to use this URL to send us shipment event notifications.
Getting started with EasyPost
Now that you have a very basic webhook running, let’s set up your EasyPost account.
Sharing your Flask webhook with EasyPost
Registering your webhook with EasyPost is incredibly simple. Once you’ve created and verified your account, go to the “Webhooks & Events” section of your dashboard. Click “Add Webhook” and make sure you select “Test” as the environment. If you accidentally create a production webhook, none of your test events will be received.
You’ve created a “/events” route in our app so you’ll append that to the end of the ngrok URL generated previously. Your webhook URL should look something like “” and once you click “Create Webhook”, it should show up among your “Test Webhooks” as in the image below.
Retrieving your EasyPost API key
To get your API Key from EasyPost, log into your dashboard and click on your email address which will prompt the drop down menu shown below. Click “API Keys” and you should see a screen that has a “Production API Keys” and “Test API Keys” section. Copy your test API key from the next screen. If you use your production key by mistake, you will run into issues using the test webhook events we’ll trigger later, and you might incur charges from EasyPost unintentionally.
Now that you have your test key copied to your clipboard. Let’s save it as an environment variable.
(shipment-notifications-venv) $ export EASYPOST_API_KEY=<YOUR TEST API KEY>
Getting started with Twilio SMS
In order to use Twilio SMS, you’ll need a working phone number from which you can send outbound messages.
If you do not already have one, you can get it from the Twilio console. Clicking the red “Get a Trial Number” button will give you a number you can use with your trial credit, but all messages will be prefixed with text indicating you are using a trial number. To send messages without this trial text, you’ll need to upgrade to a paid account.
If you want more flexibility in choosing a phone number for your preferred country and area, use the Phone Numbers page to search and browse available numbers.
Setting and accessing environment variables
Now that you have your account with both EasyPost and Twilio set up, we need to set a few more environment variables so that our app knows what credentials to use to talk to these services, as well as what phone numbers we want to send our SMS notifications to go to and from for testing purposes.
Set the following environment variables, replacing the angle brackets and inner text with the appropriate values for your accounts:
(shipment-notification-venv) $ export TWILIO_ACCOUNT_SID=<YOUR TWILIO SECRET> (shipment-notification-venv) $ export TWILIO_AUTH_TOKEN=<YOUR TWILIO AUTH TOKEN> (shipment-notification-venv) $ export NOTIFICATION_PHONE=<YOUR CELL PHONE NUMBER> (shipment-notification-venv) $ export TWILIO_PHONE=<YOUR TWILIO PHONE NUMBER>
Note that if you are following this tutorial on a Windows computer you have to use
set instead of
export.
Great, now to access these values, we need to change a few things in our code.
First, we need to import some dependencies. Then we need to create the Twilio client, so that we can send a message once we receive the webhook event. Your app.py file should now look like the following:
from flask import Flask, request import os from twilio.rest import Client import json app = Flask(__name__) account_sid = os.environ.get('TWILIO_ACCOUNT_SID') auth_token = os.environ.get('TWILIO_AUTH_TOKEN') client = Client(account_sid, auth_token) @app.route('/events', methods=['POST']) def events(): # put webhook logic here print("Webhook Received") return '', 204 if __name__ == "__main__": app.run()
Creating trackers in EasyPost
In order to tell EasyPost which shipments we want to track, we need to create a “tracker”. This can happen in one of two ways:
- Any shipping label purchased through EasyPost automatically creates a corresponding tracker.
- Creating a tracker object independently of label creation.
For the purpose of this tutorial we’re going to use the later, and EasyPost provides a very handy test environment in which we can trigger simulated test shipments events, without having to worry about waiting hours or days for a carrier to transport a physical package.
To simulate our shipments in this tutorial we’ll need to make a simple script that we can run apart from our app that will send tracker creation requests to EasyPost.
Save a new file called test_tracker.py and input the following code:
import os import easypost easypost.api_key = os.environ.get('EASYPOST_API_KEY') tracker = easypost.Tracker.create( tracking_code="EZ4000000004", carrier="USPS" ) print(tracker)
I’ve set my test tracker to leverage one of the test trackers provided in EasyPost’s documentation, which you can also see below:
The code I used will simulate a delivered shipment, but you can use any of the available test codes. These are the only codes that can be used with your EasyPost test API key, but conveniently you can expect the simulated events to arrive at your webhook within a few minutes.
In test mode, you’ll get a few consecutive events of the same status, this will give you the ability to test deduplication logic if you like, as it’s common for some carriers to have different events with the same status, particularly ‘in_transit’ events. For the purpose of this tutorial we are not going to worry about managing duplicate events, but I’ll go over this problem a bit more later.
When we are ready to test our app, you’ll just run the app, and then open a separate terminal window to run the test_tracker.py script.
Handling webhook events
The events we receive from EasyPost will be JSON strings so we’ll use the standard json package to help us parse the events. When we receive an event, we’ll use the Flask request object to retrieve the data and load it into JSON format.
@app.route('/events', methods=['POST']) def events(): data = json.loads(request.data) ...
Adding notification rules
Every time there are new details for a given shipment, EasyPost will make a POST request to your webhook to make you aware, but sending an SMS message for every single event will probably get a bit noisy for our hypothetical customers.
Instead let’s add some business logic to make sure that we only send messages when a shipment is in the “out_for_delivery” or “delivered” status. Now my code looks like this, but feel free to add your own business logic if you like.
... @app.route('/events', methods=['POST']) def events(): data = json.loads(request.data) result = data['result'] print(result['carrier'] + " - " + result['tracking_code'] + ": " + result['status']) if result['status'] in ["out_for_delivery","delivered"]: #send notification via Twilio message = client.messages.create( body=result['status'], from_=os.environ.get('TWILIO_PHONE'), to=os.environ.get('NOTIFICATION_PHONE') ) print(message.sid) return '', 204
Assuming you followed my logic, when using the EasyPost testing API you should only receive SMS messages if you used test tracking code EZ3000000003 or EZ4000000004, as those are the ones that trigger the appropriate tracker statuses.
Sending messages for humans
This is a great start, but we have a bit of a problem. If we send a message with just the status of the shipment without any context, it won’t be very useful. Additionally, it’s not a great experience if the message looks like it was meant for a machine. It should be human readable to make the person receiving it feel like they are receiving a real message.
To achieve this outcome, we can create a simple dictionary with static data that maps the shipment statuses we expect to see from EasyPost to human readable text we want to send to our customers. I’ll make a few small changes to app.py, and I should be good to go.
... STATUSES = { "pre_transit":"is ready to go, but hasn't been shipped yet.", "in_transit":"is on it's way!", "out_for_delivery":"is out for delivery! It should be there soon!", "delivered":"has been delivered! Enjoy!" } @app.route('/events', methods=['POST']) def events(): data = json.loads(request.data) result = data['result'] print(result['carrier'] + " - " + result['tracking_code'] + ": " + result['status']) if result['status'] in ["out_for_delivery","delivered"]: #send notification via Twilio human_readable = STATUSES[result['status']] message_body = "Your {0} package with tracking number, {1}, {2}".format(result['carrier'], result['tracking_code'], human_readable) message = client.messages.create( body=message_body, from_=os.environ.get('TWILIO_PHONE'), to=os.environ.get('NOTIFICATION_PHONE') ) print(message.sid) return '', 204 ...
This is much better, now we have some text that references the carrier and tracking code and looks like it was meant for a human.
Testing the service
Now that we have some handling for events, let’s make sure that our app and ngrok are running. Keep in mind that if you stop and restart ngrok you will be assigned a different public URL, so you will need to go back to the EasyPost configuration and update the webhook URL. While this is tedious to do, it is only necessary in a testing environment, since ngrok will not be used in production.
Once you’ve confirmed that things are running, let’s run our script to create test trackers in another terminal window. Assuming all goes well, we should be able to watch our SMS notifications come through from Twilio.
(shipment-notification-venv) $ python test_tracker.py
A minute or two after you run the script you should receive an SMS message from the Twilio number you created previously, like the one below.
Congrats, you’re now processing webhooks from EasyPost and using them to trigger SMS messages!
Notes on production deployment
There are several things that you’ll want to consider when deploying this app in production. For starters, you should not use the Flask development server. As the warning that presents itself states when you run your app, it is not made for production traffic.
The two most common production ready web servers for Python web applications are gunicorn and uWSGI, both installable on your virtual environment with
pip. For example, here is how to run the application with gunicorn:
(shipment-notification-venv) $ gunicorn -b :5000 app:app
Also keep in mind that for a production deployment you will be running the service on a cloud server and not out of your own computer, so there is no need to use ngrok.
Handling high volume traffic
When planning your deployment, you’ll also want to think carefully about the amount of traffic you expect to be flowing to your webhook. If you’re shipping and tracking a lot of packages, you’ll want to make sure that your webhook can handle surges in event traffic which can happen unpredictably.
While you can scale up the number of instances of your app, this is not very efficient as you might have periods of low activity, which you’ll be wasting production resources hosting. Instead, I’d recommend keeping your ingestion of webhooks as lightweight as possible, and processing them asynchronously.
In this tutorial we checked the body of the webhook and triggered an SMS message with Twilio whenever we received the key events we were looking for from EasyPost. This is relatively lightweight, but in a production scenario we would likely need to take on a bit more technical overhead to process the event. Specifically, we’d need to check the shipment associated with the event, correlate it with an order and corresponding customer in our hypothetical order management system, find the customer contact information that we should use to send our notification, and send the notification through Twilio. Depending on the size of your customer base and dependent application architecture, this could result in multiple database queries.
Instead of doing this synchronously upon receiving the POST from EasyPost, I’d highly recommend just adding the request bodies to a queue for asynchronous processing. Redis is a powerful in-memory datastore that is commonly used to manage queues like this in conjunction with RQ. You can check out Sam Agnew’s blog post about this technique here.
If for some reason you’re getting so many events that you can’t process them at the time they are posted to your webhook, don’t worry, EasyPost has built in redundancy with progressive backoff retry logic to help ensure your webhook is able to receive the event later should your webhook be temporarily unavailable.
Securing the webhook
Adding security is also an important consideration. In our tutorial example, any POST data sent to our webhook will be received and processed, but this presents a pretty glaring security vulnerability. If this were a production application, we would want to add some authentication to our webhook to ensure that we are getting notifications from known sources and not bad actors.
EasyPost supports basic authentication to secure your webhook, which is covered in their webhook guide. It would also be a good idea to rotate your basic authentication credentials periodically to decrease the likelihood that they become compromised. You can do this automatically via EasyPost’s Webhooks API.
Event deduplication
As you probably noticed in testing, EasyPost sends multiple webhook events in test mode as I mentioned, and this is to help you test deduplication logic for events. If you’re sending out notifications for events like we are in this tutorial, you don’t want to spam your customers with duplicate events. To help manage this, I recommend maintaining a database that helps you track events. SQLAlchemy plays very nicely with Flask via the Flask-SQLAlchemy package and is a great tool to help manage the state of shipments and events with a relational database.
When events come in, you can check if you’ve received a similar event already, and decide what you want to do from there. In all likelihood, you probably won’t care about every in_transit event that the package has in it’s journey, but EasyPost sends you the events to provide as much granularity as possible as other business use cases require it.
Conclusion
As the popularity of e-commerce grows, and consumers’ attention spans shrink, it’s increasingly important to provide customers with the best buying experience possible.
Merely sending a notification to customers is just the beginning though. Perhaps you want to automatically provide a coupon if an order gets delayed or lost. That’s easy to link to in your Twilio message, and you’ve just turned a negative customer experience into a positive one!
Hopefully you have found this to be a useful tutorial. Below are some additional resources that cover some of the things I alluded to in this tutorial that we didn’t get to cover in depth.
I’d love to see what you build, find me on LinkedIn! | https://www.twilio.com/blog/build-shipment-notification-service-python-flask-twilio-easypost | CC-MAIN-2021-10 | refinedweb | 3,287 | 51.18 |
In the first part of this three-part tutorial series, we saw how to write RESTful APIs using Flask as the web framework. The previous approach provided a lot of flexibility but previous part to maintain context and continuity. The full source code for the previous project can be found in our GitHub repo..
Add the following lines to the flask_app/my_app/__init__.py file:
from flask_restless import APIManager manager = APIManager(app, flask_sqlalchemy_db=db)
Just adding the above couple of lines to the existing code should suffice. In the code above, we create the Flask-Restless API manager.
flask_app/my_app/product/views.py
This file comprises the bulk of the changes from the previous part. Below is the complete rewritten file.
from my_app import db, app, manager catalog = Blueprint('catalog', __name__) @catalog.route('/') @catalog.route('/home') def home(): return "Welcome to the Catalog Home." manager.create_api(Product, methods=['GET', 'POST'])
It is pretty self-explanatory how the above code would work. We just imported the manager created in a previous file, and it is used to create an API for the
Product model with the listed methods. We can add more methods like
DELETE,
PUT, and
PATCH.
We don't need to create any views since Flask Restless will automatically generate them. The API endpoints specified above will be available at /api/<tablename> by default.
The API() {'total_pages': 0, 'objects': [], 'num_results': 0, 'page': 1} >>> d = {'name': 'Macbook Air', 'price': 2000} >>> res = requests.post('', data=json.dumps(d), headers={'Content-Type': 'application/json'}) >>> res.json() {'price': 2000, 'id': 1, 'name': 'Macbook Air'}
Here is how to add products using Postman:
How to Customize the API
It is convenient to have the RESTful APIs created automatically, but each application has some business logic that calls for customizations, validations, and clever/secure handling of requests.
Here, request preprocessors and postprocessors come to the rescue. As the names signify, methods designated as preprocessors run before processing the request, and methods designated as postprocessors run after processing the request.
create_api() is the place where they are defined as dictionaries of the request type (e.g.
GET or
POST) and the methods which will act as preprocessors or postprocessors on the specified request are listed. methods by adding a couple of lines to an SQLAlchemy-based model.
In the next and last part of this series, I will cover how to create a RESTful API using another popular Flask extension, but this time, the API will be independent of the modeling tool used for the database.
This post has been updated with contributions from Esther Vaati. Esther is a software developer and writer for Envato Tuts+.
| https://code.tutsplus.com/tutorials/building-restful-apis-with-flask-an-orm-with-sqlalchemy--cms-26706 | CC-MAIN-2022-40 | refinedweb | 440 | 55.54 |
How To Read XML Data into a DataSet by Using Visual C# .NET
This article was previously published under Q311566
For a Microsoft Visual Basic .NET version of this article, see 309702.
For a Microsoft Visual C++ .NET version of this article, see 311570.
This article refers to the following Microsoft .NET Framework Class Library namespaces:
For a Microsoft Visual C++ .NET version of this article, see 311570.
This article refers to the following Microsoft .NET Framework Class Library namespaces:
- System.Data
- System.Data.SqlClient
IN THIS TASK
This article demonstrates how to read Extensible Markup Language (XML) data into an ADO.NET DataSet object.
SUMMARY
back to the top
RequirementsThe following list outlines the recommended hardware, software, network infrastructure, and service packs that you need:
- Microsoft Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server, or Windows NT 4.0 Server
- Microsoft Visual Studio .NET
- Visual Studio .NET
- ADO.NET fundamentals and syntax
- XML fundamentals
Description of the TechniqueYou can use the ReadXml method to read XML schema and data into a DataSet. XML data can be read directly from a file, a Stream object, an XmlWriter object, or a TextWriter object.
You can use one of two sets of overloaded methods for the ReadXml method, depending on your needs. The first set of four overloaded methods takes just one parameter. The second set of four overloaded methods take an additional parameter (XmlReadMode) along with one of the parameters from the first set.
The following list outlines the first set of overloaded methods, which take one parameter:
- The code to follow uses a specified file to read XML schema and data into the DataSet:
Overloads Public Sub ReadXml(String)
- The code to follow uses a specified TextReader to read XML schema and data into the DataSet. TextReader is designed for character input.
Overloads Public Sub ReadXml(TextReader)
- The code to follow uses a specified System.IO.Stream to read XML schema and data into the DataSet. The Stream class is designed for byte input and output.
Overloads Public Sub ReadXml(Stream)
- The code to follow uses a specified XmlReader to read XML schema and data into the DataSet. This method provides fast, non-cached, forward-only access to XML data that conforms to the World Wide Web Consortium (W3C) XML 1.0 specification and the namespaces in the XML specification.
Overloads Public Sub ReadXml(XmlReader)
- DiffGram. Reads a DiffGram, and applies changes from the DiffGram to the DataSet.
- Fragment. Reads XML documents that contain inline XML-Data Reduced (XDR) schema fragments (such as those that are generated when you run FOR XML schemas that include inline XDR schema against an instance of Microsoft SQL Server).
- IgnoreSchema. Ignores any inline schema and reads data into the existing DataSet schema.
- InferSchema. Ignores any inline schema, infers schema from the data, and loads the data. If the DataSet already contains a schema, InferSchema extends the current schema by adding columns to tables that exist and by adding new tables if tables do not exist.
- ReadSchema. Reads any inline schema, and loads the data.
- Auto. Default. Performs the most appropriate action.
Create Project and Add CodeThis example uses a file named MySchema.xml. To create MySchema.xml, follow the steps in the following Microsoft Knowledge Base article:ReadXml. For other examples, refer to MSDN for individual overload topics of this method.
- Start Visual Studio .NET.
- Create a new Windows Application project in Visual C# .NET. Form1 is added to the project by default.
- Make sure that your project contains a reference to the System.Data namespace, and add a reference to this namespace if it does not.
- Place two Button controls and one DataGrid control on Form1. Change the Name property of Button1 to btnReader, and change its Text property to Reader.
Change the Name property of Button2 to btnFile, and change its Text property to File.
- Use the using statement on the System, System.Data, and System.Data.SqlClient namespaces so that you are not required to qualify declarations in those namespaces later in your code.
using System;using System.Data;using System.Data.SqlClient;
- Add the following code in the event handler that corresponds to the buttons:
private void btnReader_Click(object sender, System.EventArgs e){ string myXMLfile = @"C:\MySchema.xml"; DataSet ds = new DataSet(); // Create new FileStream with which to read the schema. System.IO.FileStream fsReadXml = new System.IO.FileStream (myXMLfile, System.IO.FileMode.Open); try { ds.ReadXml(fsReadXml); dataGrid1.DataSource = ds; dataGrid1.DataMember = "Cust"; } catch (Exception ex) { MessageBox.Show(ex.ToString()); } finally { fsReadXml.Close(); }} private void btnFile_Click(object sender, System.EventArgs e){ string myXMLfile = "C:\\MySchema.xml"; DataSet ds = new DataSet(); try { ds.ReadXml(myXMLfile); dataGrid1.DataSource = ds; dataGrid1.DataMember = "Cust"; } catch (Exception ex) { MessageBox.Show(ex.ToString()); }}
- Modify the path to the XML file (MyXmlFile) as appropriate for your environment.
- Save your project. On the Debug menu, click Start to run your project.
- Click any of the buttons to read the XML data from the specified file. Notice that the XML data appears in the grid.
Additional Notes
- To read only the XML schema, you can use the ReadXmlSchema method.
- To get only the XML representation of the data in the DataSet instead of persisting it onto a stream or a file, you can use the GetXml method.
For additional information, click the article numbers below to view the articles in the Microsoft Knowledge Base:
REFERENCES
Accessing Data with ADO.NETback to the top
Properties
Article ID: 311566 - Last Review: 05/13/2007 05:03:15 - Revision: 2.4
Microsoft ADO.NET 1.1, Microsoft ADO.NET 1.0, Microsoft Visual C# .NET 2003 Standard Edition, Microsoft Visual C# .NET 2002 Standard Edition
- kbhowtomaster kbsystemdata KB311566 | https://support.microsoft.com/en-us/kb/311566 | CC-MAIN-2016-50 | refinedweb | 949 | 60.31 |
#include <hallo.h> * George Danchev [Sat, Jul 29 2006, 10:23:58AM]: > On Saturday 29 July 2006 00:42, Marco d'Itri wrote: > > On Jul 28, Joey Hess <joeyh@debian.org> wrote: > > > "innovation" is the industy's current buzzword. Doing things well even > > > if someone else had a similar idea before will outlive it. > > > > We used to take pride in inventing stuff like update-alternatives which > > solve long-time problems. > > > > > Or do you really think that udev is a useless project? After all, all > > > the innovation was done in devfs and hotplug. > > > > The innovation in udev (with HAL, new kernel features and other stuff) > > is allowing implementing new features which used to not be possible or > > required very complex hacks. > > There is a middle ground between useless and innovative, BTW. > > Could you please give your definition for `innovation' ? (If it is about to > distribute more and more non-free stuff, then I'm glad we have different > definitions for innovation.) Haha, now the thread reaches the point where every fanatic uncovers his favorite issue and begins to "interpret" into it. Can we stop here right now? Or move it to a separate thread, thanks. Eduard. -- <yath> bla. mach ichs halt als root. <erich> <yath's rechner> Oh ja, machs mir als root! | https://lists.debian.org/debian-devel/2006/07/msg01272.html | CC-MAIN-2014-10 | refinedweb | 212 | 66.44 |
Received: from MIT.EDU (SOUTH-STATION-ANNEX.MIT.EDU [18.72.1.2]) by bloom-picayune.MIT.EDU (8.6.13/2.3JIK) with SMTP id OAA03476; Sat, 20 Apr 1996 14:53:15 -0400
Received: from [199.164.164.1] by MIT.EDU with SMTP
id AA07977; Sat, 20 Apr 96 14:13:31 EDT
Received: by questrel.questrel.com (940816.SGI.8.6.9/940406.SGI)
	for news-answers-request@mit.edu id LAA25285; Sat, 20 Apr 1996 11:14:26
Subject: rec.puzzles Archive (probability), part 31 of 35
Message-Id: <puzzles/archive/probability>
Lines: 1312
Xref: senator-bedfellow.mit.edu rec.puzzles:25020 news.answers:11540 rec.answers:1940
Apparently-To: news-answers-request@mit.edu
==> probability/amoeba.p <==
A jar begins with one amoeba. Every minute, every amoeba turns into
0, 1, 2, or 3 amoebae with probability 25% for each case (it dies,
does nothing, splits into two, or splits into three). What is the
probability that the amoeba population eventually dies out?
==> probability/amoeba.s <==
If p is the probability that a single amoeba's descendants will die
out eventually, the probability that N amoebas' descendants will all
die out eventually must be p^N, since each amoeba is independent of
every other amoeba. Also, the probability that a single amoeba's
descendants will die out must be independent of time when averaged
over all the possibilities. At t=0, the probability is p, at t=1 the
probability is 0.25(p^0+p^1+p^2+p^3), and these probabilities must be
equal. Writing f(p) = 0.25(p^0+p^1+p^2+p^3), the extinction
probability p is therefore a root of f(p) = p. Factoring out the
root p = 1 leaves p^2 + 2p - 1 = 0, whose root in [0,1) is
p = sqrt(2)-1.
The generating function for the sequence P(n,i), which gives the
probability of i amoebas after n minutes, is f^n(x), where f^n(x) ==
f^(n-1) ( f(x) ), f^0(x) == x . That is, f^n is the nth composition
of f with itself.
Then f^n(0) gives the probability of 0 amoebas after n minutes, since
f^n(0) = P(n,0). We then note that:
f^(n+1)(x) = ( 1 + f^n(x) + (f^n(x))^2 + (f^n(x))^3 )/4
so that as n grows, f^(n+1)(0) and f^n(0) approach a common limit,
which must satisfy f(p) = p, and we can solve that equation.
The generating function also gives an expression for the expectation
value of the number of amoebas after n minutes. This is d/dx(f^n(x))
evaluated at x=1. Using the chain rule we get f'(f^(n-1)(x))*d/dx(f^(n-1)(x))
and since f'(1) = 1.5 and f(1) = 1, we see that the result is just
1.5^n, as might be expected.
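The fixed point can be checked numerically by iterating f starting from
0, since f^n(0) is the probability of extinction by minute n; a short
sketch in Python:

```python
import math

def f(p):
    # One minute of evolution: each amoeba leaves 0, 1, 2, or 3
    # descendants with probability 1/4 each.
    return (1 + p + p**2 + p**3) / 4

# Iterate f^n(0): the probability the population is extinct by minute n.
p = 0.0
for _ in range(200):
    p = f(p)

print(p)                  # ~0.41421356
print(math.sqrt(2) - 1)   # the claimed closed form
```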
==> probability/apriori.p <==
An urn contains one hundred balls, each of which is either white or
black. You sample one hundred
balls with replacement and they are all white. What is the probability
that all the balls are white?
==> probability/apriori.s <==
This question cannot be answered with the information given.
In general, the following formula gives the conditional probability
that all the balls are white given you have sampled one hundred balls
and they are all white:
P(100 white | 100 white samples) =
P(100 white samples | 100 white) * P(100 white)
-----------------------------------------------------------
sum(i=0 to 100) P(100 white samples | i white) * P(i white)
The probabilities P(i white) are needed to compute this formula. This
does not seem helpful, since one of these (P(100 white)) is just what we
are trying to compute. However, the following argument can be made:
Before the experiment, all possible numbers of white balls from zero to
one hundred are equally likely, so P(i white) = 1/101. Therefore, the
odds that all 100 balls are white given 100 white samples is:
P(100 white | 100 white samples) =
1 / ( sum(i=0 to 100) (i/100)^100 ) =
63.6%
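Under the uniform prior just assumed, the 63.6% figure can be reproduced
directly:

```python
# Probability all 100 balls are white given 100 white samples (with
# replacement), under a uniform prior on the number of white balls.
posterior = 1 / sum((i / 100) ** 100 for i in range(101))
print(posterior)   # ~0.636
```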
This argument is fallacious, however, since we cannot assume that the urn
was prepared so that all possible numbers of white balls from zero to one
hundred are equally likely. In general, we need to know the P(i white)
in order to calculate the P(100 white | 100 white samples). Without this
information, we cannot determine the answer.
This leads to a general "problem": our judgments about the relative
likelihood of things is based on past experience. Each experience allows
us to adjust our likelihood judgment, based on prior probabilities. This
is called Bayesian inference. However, if the prior probabilities are not
known, then neither are the derived probabilities. But how are the prior
probabilities determined? For example, if we are brains in the vat of a
diabolical scientist, all of our prior experiences are illusions, and
therefore all of our prior probabilities are wrong.
All of our probability judgments indeed depend upon the assumption that
we are not brains in a vat. If this assumption is wrong, all bets are
off.
==> probability/bayes.p <==
==> probability/bayes.s <==
==> probability/birthday/line.p <==
A free ticket will be given to the first person in line whose birthday
matches that of someone ahead of them who has already bought a ticket.
You may take any position in line. Assuming you don't know anyone
else's birthday, that birthdays are uniformly distributed over a
365-day year, etc., what position in line
gives you the greatest chance of being the first duplicate birthday?
==> probability/birthday/line.s <==
Suppose you are the Kth person in line. Then you win if and only if the
K-1 people ahead all have distinct birtdays AND your birthday matches
one of theirs. Let
A = event that your birthday matches one of the K-1 people ahead
B = event that those K-1 people all have different birthdays
Then
Prob(you win) = Prob(B) * Prob(A | B)
(Prob(A | B) is the conditional probability of A given that B occurred.)
Now let P(K) be the probability that the K-th person in line wins,
Q(K) the probability that the first K people all have distinct
birthdays (which occurs exactly when none of them wins). Then
P(1) + P(2) + ... + P(K-1) + P(K) = 1 - Q(K)
P(1) + P(2) + ... + P(K-1) = 1 - Q(K-1)
P(K) = Q(K-1) - Q(K) <--- this is what we want to maximize.
Now if the first K-1 all have distinct birthdays, then assuming
uniform distribution of birthdays among D days of the year,
the K-th person has K-1 chances out of D to match, and D-K+1 chances
not to match (which would produce K distinct birthdays). So
Q(K) = Q(K-1)*(D-K+1)/D = Q(K-1) - Q(K-1)*(K-1)/D
Q(K-1) - Q(K) = Q(K-1)*(K-1)/D = Q(K)*(K-1)/(D-K+1)
Now we want to maximize P(K), which means we need the greatest K such
that P(K) - P(K-1) > 0. (Actually, as just given, this only
guarantees a local maximum, but in fact if we investigate a bit
farther we'll find that P(K) has only one maximum.)
For convenience in calculation let's set K = I + 1. Then
Q(I-1) - Q(I) = Q(I)*(I-1)/(D-I+1)
Q(I) - Q(I+1) = Q(I)*I/D
P(K) - P(K-1) = P(I+1) - P(I)
= (Q(I) - Q(I+1)) - (Q(I-1) - Q(I))
= Q(I)*(I/D - (I-1)/(D-I+1))
To find out where this is last positive (and next goes negative), solve
x/D - (x-1)/(D-x+1) = 0
Multiply by D*(D+1-x) both sides:
(D+1-x)*x - D*(x-1) = 0
Dx + x - x^2 - Dx + D = 0
x^2 - x - D = 0
x = (1 +/- sqrt(1 - 4*(-D)))/2 ... take the positive square root
= 0.5 + sqrt(D + 0.25)
Setting D=365 (finally deciding how many days in a year!),
desired I = x = 0.5 + sqrt(365.25) = 19.612 (approx).
The last integer I for which the new probability is greater than the old
is therefore I=19, and so K = I+1 = 20. You should try to be the 20th
person in line.
Computing your chances of actually winning is slightly harder, unless
you do it numerically by computer. The recursions you need have already
been given.
-- David Karr (karr@cs.cornell.edu)
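The recursions above can be run with exact rational arithmetic to
confirm that position 20 is best:

```python
from fractions import Fraction

D = 365
q = [Fraction(1)]                      # Q(0) = 1: vacuously all distinct
for k in range(1, 60):
    q.append(q[-1] * (D - k + 1) / D)  # Q(k) = Q(k-1)*(D-k+1)/D

# P(k) = Q(k-1) - Q(k): probability the k-th person is the first match.
p = {k: q[k - 1] - q[k] for k in range(1, 60)}
best = max(p, key=p.get)
print(best, float(p[best]))            # 20 0.0323...
```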
==> probability/birthday/same.day.p <==
How many people must be at a party before you have even odds or better
of two having the same birthday (not necessarily the same year, of course)?
==> probability/birthday/same.day.s <==
23.
See also:
archive entry "coupon"
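A direct computation of the smallest such party size:

```python
# Smallest n for which P(some shared birthday among n people) > 1/2.
p_distinct = 1.0
n = 0
while p_distinct > 0.5:
    n += 1
    p_distinct *= (365 - n + 1) / 365
print(n)   # 23
```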
==> probability/cab.p <==
A cab was involved in a hit and run accident at night. Two cab companies,
the Green and the Blue, operate in the city. Here is some data:
a) Although the two companies are equal in size, 85% of cab
accidents in the city involve Green cabs and 15% involve Blue cabs.
b) A witness identified the cab in this particular accident as Blue.
The court tested the witness's reliability under similar visibility
conditions and found that the witness correctly identified each of the
two colors 80% of the time and failed 20% of the time.
If it looks like an obvious problem in statistics, then consider the
following argument:
The probability that the color of the cab was Blue is 80%! After all,
the witness is correct 80% of the time, and this time he said it was Blue!
What else need be considered? Nothing, right?
If we look at Bayes theorem (pretty basic statistical theorem) we
should get a much lower probability. But why should we consider statistical
theorems when the problem appears so clear cut? Should we just accept the
80% figure as correct?
==> probability/cab.s <==
By Bayes' theorem, the probability that the cab was Blue given that
the witness said "Blue" is

P(Blue | said Blue)
  = P(said Blue | Blue) * P(Blue) /
    [ P(said Blue | Blue) * P(Blue) + P(said Blue | Green) * P(Green) ]
  = (0.80)(0.15) / [ (0.80)(0.15) + (0.20)(0.85) ]
  = 0.12 / 0.29, which is about 0.41.

So despite the witness's 80% accuracy, the cab is more likely to have
been Green than Blue. The naive 80% answer is wrong because it ignores
the 85/15 base rate: most of the cabs the witness could have seen (and
misidentified) were Green. This is why the statistical theorem must be
considered even when the problem appears clear cut.
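The Bayes computation, using only the figures stated in the problem
(85%/15% accident base rates, 80% witness accuracy), can be checked
directly:

```python
# Posterior that the cab was Blue, given the witness said "Blue".
p_blue, p_green = 0.15, 0.85      # base rates of accidents by color
acc = 0.80                        # witness accuracy for either color

posterior = (acc * p_blue) / (acc * p_blue + (1 - acc) * p_green)
print(posterior)   # ~0.414
```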
==> probability/coupon.p <==
There is a free gift in my breakfast cereal. The manufacturers say that
the gift comes in four different colours, and encourage one to collect
all four. Assuming there is an equal chance of getting any one of the
colours, what is the expected number of packets I must buy in order to
get all four?
==> probability/coupon.s <==
This is the well-known "coupon collector problem". For n equally
likely coupons the expected number of purchases is n*H(n), where
H(n) = 1 + 1/2 + 1/3 + ... + 1/n; for n = 4 this gives 4*(25/12) =
25/3, or about 8.33 packets. A related problem is the "birthday
surprise" familiar to people interested in hashing algorithms: with a
party of 23 persons, you are likely (i.e. with probability >50%) to
find two with the same birthday. The non-equiprobable case was solved
by:
M. Klamkin and D. Newman, Extensions of the birthday
surprise, J. Comb. Th. 3 (1967), 279-282.
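Assuming the standard coupon-collector model with n equally likely
coupons, the expectation n*H(n) is easy to compute exactly:

```python
from fractions import Fraction

def expected_packets(n):
    # Coupon collector: with i colours still missing, the wait for a
    # new colour is geometric with mean n/i, so E = n * H(n).
    return n * sum(Fraction(1, i) for i in range(1, n + 1))

print(expected_packets(4))   # 25/3, about 8.33 packets
```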
==> probability/darts.p <==
Peter throws two darts at a dartboard, aiming for the center. The
second dart lands farther from the center than the first. If Peter now
throws another dart at the board, aiming for the center, what is the
probability that this third throw is also worse (i.e., farther from
the center) than his first? Assume Peter's skilfulness is constant.
==> probability/darts.s <==
Since the three darts are thrown independently,
they each have a 1/3 chance of being the best throw. As long as the
third dart is not the best throw, it will be worse than the first dart.
Therefore the answer is 2/3.
Ranking the three darts' results from A (best) to C
(worst), there are, a priori, six equiprobable outcomes.
possibility # 1 2 3 4 5 6
1st throw A A B B C C
2nd throw B C A C A B
3rd throw C B C A B A
The information from the first two throws shows us that the first
throw will not be the worst, nor the second throw the best. Thus
possibilities 3, 5 and 6 are eliminated, leaving three equiprobable
cases, 1, 2 and 4. Of these, 1 and 2 have the third throw worse than
the first; 4 does not. Again the answer is 2/3.
==> probability/derangement.p <==
12 men leave their hats with the hat check. If the hats are randomly
returned, what is the probability that nobody gets the correct hat?
==> probability/derangement)
==> probability/family.p <==?
==> probability/family.s <==
The ratio will be 50-50 in both cases. We are not killing off any
fetuses or babies, and half of all conceptions will be male, half
female. When a family decides to stop does not affect this fact.
==> probability/flips/once.in.run.p <==
What are the odds that a run of one H or T (i.e., THT or HTH) will occur
in n flips of a fair coin?
==> probability/flips/once.in.run.s <==
References:
John P. Robinson, Transition Count and Syndrome are Uncorrelated, IEEE
Transactions on Information Theory, Jan 1988.
First we define a function or enumerator P(n,k) as the number of length
"n" sequences that generate "k" successes. For example,
P(4,1)= 4 (HHTH, HTHH, TTHT, and THTT are 4 possible length 4 sequences).
I derived two generating functions g(x) and h(x) in order to enumerate
P(n,k), they are compactly represented by the following matrix
polynomial.
_ _ _ _ _ _
| g(x) | | 1 1 | (n-3) | 4 |
| | = | | | |
| h(x) | | 1 x | |2+2x |
|_ _| |_ _| |_ _|
The above is expressed as matrix generating function. It can be shown
that P(n,k) is the coefficient of the x^k in the polynomial
(g(x)+h(x)).
For example, if n=4 we get (g(x)+h(x)) from the matrix generating
function as (10+4x+2x^2). Clearly, P(4,1) (coefficient of x) is 4 and
P(4,2)=2 ( There are two such sequences THTH, and HTHT).
We can show that
mean(k) = (n-2)/4 and sd= square_root(5n-12)/4
We need to generate "n" samples. This can be done by using sequences of length
(n+2). Then our new statistics would be
mean = n/4
sd = square_root(5n-2)/4
Similar approach can be followed for higher dimensional cases.
==> probability/flips/twice.in.run.p <==
What is the probability in n flips of a fair coin that there will be two
heads in a row?
==> probability/flips/twice.in.run.s <==
Well, the question is then how many strings of n h's and t's contain
hh? I would guess right off hand that its going to be easier to
calculate the number of strings that _don't_ contain hh and then
subtract that from the total number of strings.
So we want to count the strings of n h's and t's with no hh in them.
How many h's and t's can there be? It is fairly clear that there must
be from 0 to n/2 h's, inclusive. (If there were (n/2+1) then there
would have to be two touching.)
How many strings are there with 0 h's? 1
How many strings are there with 1 h? Well, there are (n-1) t's, so
there are a total of n places to put the one h. So the are nC1 such
strings. How many strings are there with 2 h's? Well, there are (n-1)
places to put the two h's, so there are (n-1)C2 such strings.
Finally, with n/2 h's there are (n/2+1) places to put them, so there
are (n/2+1)C(n/2) such strings.
Therefore the total number of strings is
Sum (from i=0 to n/2) of (n-i+1)C(i)
Now, here's where it get's interesting. If we play around with Pascal's
triangle for a while, we see that this sum equals none other than the
(n+2)th Fibonacci number.
So the probability that n coin tosses will give a hh is:
2^n-f(n+2)
----------
2^n
(where f(x) is the xth Fibanocci number (so that f(1) is and f(2) are both 1))
==> probability/flips/unfair.p <==
Generate even odds from an unfair coin. For example, if you
thought a coin was biased toward heads, how could you get the
equivalent of a fair coin with several tosses of the unfair coin?
==> probability/flips/unfair.s <==
Toss twice. If both tosses give the same result, repeat this process
(throw out the two tosses and start again). Otherwise, take the first
of the two results.
==> probability/flips/waiting.time.p <==
Compute the expected waiting time for a sequence of coin flips, or the
probabilty that one sequence of coin flips will occur before another.
==> probability/flips/waiting.time.s <==
Here's a C program I had lying around that is relevant to the
current discussion of coin-flipping. The algorithm is N^3 (for N flips)
but it could certainly be improved. Compile with
cc -o flip flip.c -lm
-- Guy
_________________ Cut here ___________________
#include <stdio.h>
#include <math.h>
char *progname; /* Program name */
#define NOT(c) (('H' + 'T') - (c))
/* flip.c -- a program to compute the expected waiting time for a sequence
of coin flips, or the probabilty that one sequence
of coin flips will occur before another.
Guy Jacobson, 11/1/90
*/
main (ac, av) int ac; char **av;
{
char *f1, *f2, *parseflips ();
double compute ();
progname = av[0];
if (ac == 2) {
f1 = parseflips (av[1]);
printf ("Expected number of flips until %s = %.1f\n",
f1, compute (f1, NULL));
}
else if (ac == 3) {
f1 = parseflips (av[1]);
f2 = parseflips (av[2]);
if (strcmp (f1, f2) == 0) {
printf ("Can't use the same flip sequence.\n");
exit (1);
}
printf ("Probability of flipping %s before %s = %.1f%%\n",
av[1], av[2], compute (f1, f2) * 100.0);
}
else
usage ();
}
char *parseflips (s) char *s;
{
char *f = s;
while (*s)
if (*s == 'H' || *s == 'h')
*s++ = 'H';
else if (*s == 'T' || *s == 't')
*s++ = 'T';
else
usage ();
return f;
}
usage ()
{
printf ("usage: %s {HT}^n\n", progname);
printf ("\tto get the expected waiting time, or\n");
printf ("usage: %s s1 s2\n\t(where s1, s2 in {HT}^n for some fixed n)\n",
progname);
printf ("\tto get the probability that s1 will occur before s2\n");
exit (1);
}
/*
compute -- if f2 is non-null, compute the probability that flip
sequence f1 will occur before f2. With null f2, compute
the expected waiting time until f1 is flipped
technique:
Build a DFA to recognize (H+T)*f1 [or (H+T)*(f1+f2) when f2
is non-null]. Randomly flipping coins is a Markov process on the
graph of this DFA. We can solve for the probability that f1 precedes
f2 or the expected waiting time for f1 by setting up a linear system
of equations relating the values of these unknowns starting from each
state of the DFA. Solve this linear system by Gaussian Elimination.
*/
typedef struct state {
char *s; /* pointer to substring string matched */
int len; /* length of substring matched */
int backup; /* number of one of the two next states */
} state;
double compute (f1, f2) char *f1, *f2;
{
double solvex0 ();
int i, j, n1, n;
state *dfa;
int nstates;
char *malloc ();
n = n1 = strlen (f1);
if (f2)
n += strlen (f2); /* n + 1 states in the DFA */
dfa = (state *) malloc ((unsigned) ((n + 1) * sizeof (state)));
if (!dfa) {
printf ("Ouch, out of memory!\n");
exit (1);
}
/* set up the backbone of the DFA */
for (i = 0; i <= n; i++) {
dfa[i].s = (i <= n1) ? f1 : f2;
dfa[i].len = (i <= n1) ? i : i - n1;
}
/* for i not a final state, one next state of i is simply
i+1 (this corresponds to another matching character of dfs[i].s
The other next state (the backup state) is now computed.
It is the state whose substring matches the longest suffix
with the last character changed */
for (i = 0; i <= n; i++) {
dfa[i].backup = 0;
for (j = 1; j <= n; j++)
if ((dfa[j].len > dfa[dfa[i].backup].len)
&& dfa[i].s[dfa[i].len] == NOT (dfa[j].s[dfa[j].len - 1])
&& strncmp (dfa[j].s, dfa[i].s + dfa[i].len - dfa[j].len + 1,
dfa[j].len - 1) == 0)
dfa[i].backup = j;
}
/* our dfa has n + 1 states, so build a system n + 1 equations
in n + 1 unknowns */
eqsystem (n + 1);
for (i = 0; i < n; i++)
if (i == n1)
equation (1.0, n1, 0.0, 0, 0.0, 0, -1.0);
else
equation (1.0, i, -0.5, i + 1, -0.5, dfa[i].backup, f2 ? 0.0 : -1.0);
equation (1.0, n, 0.0, 0, 0.0, 0, 0.0);
free (dfa);
return solvex0 ();
}
/* a simple gaussian elimination equation solver */
double *m, **M;
int rank;
int neq = 0;
/* create an n by n system of linear equations. allocate space
for the matrix m, filled with zeroes and the dope vector M */
eqsystem (n) int n;
{
char *calloc ();
int i;
m = (double *) calloc (n * (n + 1), sizeof (double));
M = (double **) calloc (n, sizeof (double *));
if (!m || !M) {
printf ("Ouch, out of memory!\n");
exit (1);
}
for (i = 0; i < n; i++)
M[i] = &m[i * (n + 1)];
rank = n;
neq = 0;
}
/* add a new equation a * x_na + b * x_nb + c * x_nc + d = 0.0
(note that na, nb, and nc are not necessarily all distinct.) */
equation (a, na, b, nb, c, nc, d) double a, b, c, d; int na, nb, nc;
{
double *eq = M[neq++]; /* each row is an equation */
eq[na + 1] += a;
eq[nb + 1] += b;
eq[nc + 1] += c;
eq[0] = d; /* column zero holds the constant term */
}
/* solve for the value of variable x_0. This will go nuts if
therer are errors (for example, if m is singular) */
double solvex0 ()
{
register i, j, jmax, k;
register double max, val;
register double *maxrow, *row;
for (i = rank; i > 0; --i) { /* for each variable */
/* find pivot element--largest value in ith column*/
max = 0.0;
for (j = 0; j < i; j++)
if (fabs (M[j][i]) > fabs (max)) {
max = M[j][i];
jmax = j;
}
/* swap pivot row with last row using dope vectors */
maxrow = M[jmax];
M[jmax] = M[i - 1];
M[i - 1] = maxrow;
/* normalize pivot row */
max = 1.0 / max;
for (k = 0; k <= i; k++)
maxrow[k] *= max;
/* now eliminate variable i by subtracting multiples of pivot row */
for (j = 0; j < i - 1; j++) {
row = M[j];
if (val = row[i]) /* if variable i is in this eq */
for (k = 0; k <= i; k++)
row[k] -= maxrow[k] * val;
}
}
/* the value of x0 is now in constant column of first row
we only need x0, so no need to back-substitute */
val = -M[0][0];
free (M);
free (m);
return val;
}
_________________________________________________________________
Guy Jacobson (201) 582-6558 AT&T Bell Laboratories
uucp: {att,ucbvax}!ulysses!guy 600 Mountain Avenue
internet: guy@ulysses.att.com Murray Hill NJ, 07974
==> probability/flush.p <==
==> probability/flush.s <==
An arbitrary hand can have two aces but a flush hand can't. The
average number of aces that appear in flush hands is the same as the
average number of aces in arbitrary hands, but the aces are spread out
more evenly for the flush hands, so set #3 contains a higher fraction
of flushes.
Aces of spades, on the other hand, are spread out the same way over
possible hands as they are over flush hands, since there is only one of
them in the deck. Whether or not a hand is flush is based solely on a
comparison between different cards in the hand, so looking at just one
card is necessarily uninformative. So the other sets contain the same
fraction of flushes as the set of all possible hands.
==> probability/hospital.p <==?
==> probability/hospital.s <==
The small one. If there are 2N babies born, then the probability of an
even split is
(2N choose N) / (2 ** 2N) ,
where (2N choose N) = (2N)! / (N! * N!) .
This is a DECREASING function.
If there are two babies born, then the probability of a split is 1/2
(just have the second baby be different from the first). With 2N
babies, If there is a N,N-1 split in the first 2N-1, then there is a
1/2 chance of the last baby making it an even split. Otherwise there
can be no even split. Therefore the probability is less than 1/2
overall for an even split.
As N goes to infinity the probability of an even split approaches zero
(although it is still the most likely event).
==>?
==> probability/icos.s <==
It is easily seen that if any two of the three dice agree that the
house wins. The probability that this does not happen is 19*18/(20*20).
If the three numbers are different, the probability of winning is 1/3.
So the chance of winning is 19*18/(20*20*3) = 3*19/200 = 57/200.
==> probability/intervals.p <==
Given two random points x and y on the interval 0..1, what is the average
size of the smallest of the three resulting intervals?
==> probability/intervals.s <==
In between these positions the surface forms a series of planes.
Thus the volume under it consists of 2 pyramids each with an
altitude of 1/3 and an (isosceles triangular) base of area 1/2,
yielding a total volume of 1/9.
==> probability/killers.and.pacifists.p <==).
==> probability/leading.digit.p <==
What is the probability that the ratio of two random reals starts with a 1?
What about 9?
==> probability/leading.digit.
==>.
==> probability/lights.s <==
Let E(m,n) be this number, and let (x)C(y) = x!/(y! (x-y)!). A model
for this problem is the following nxm grid:
^ B---+---+---+ ... +---+---+---+ (m,0)
| | | | | | | | |
N +---+---+---+ ... +---+---+---+ (m,1)
<--W + E--> : : : : : : : :
S +---+---+---+ ... +---+---+---+ (m,n-1)
| | | | | | | | |
v +---+---+---+ ... +---+---+---E (m,n)
where each + represents a traffic light. We can consider each
traffic light to be a direction pointer, with an equal chance of
pointing either east or south.
IMHO, the best way to approach this problem is to ask: what is the
probability that edge-light (x,y) will be the first red edge-light
that the pedestrian encounters? This is easy to answer; since the
only way to reach (x,y) is by going south x times and east y times,
in any order, we see that there are (x+y)C(x) possible paths from
(0,0) to (x,y). Since each of these has probability (1/2)^(x+y+1)
of occuring, we see that the the probability we are looking for is
(1/2)^(x+y+1)*(x+y)C(x). Multiplying this by the expected number
of red lights that will be encountered from that point, (n-k+1)/2,
we see that
m-1
-----
\
E(m,n) = > ( 1/2 )^(n+k+1) * (n+k)C(n) * (m-k+1)/2
/
-----
k=0
n-1
-----
\
+ > ( 1/2 )^(m+k+1) * (m+k)C(m) * (n-k+1)/2 .
/
-----
k=0
Are we done? No! Putting on our Captain Clever Cap, we define
n-1
-----
\
f(m,n) = > ( 1/2 )^k * (m+k)C(m) * k
/
-----
k=0
and
n-1
-----
\
g(m,n) = > ( 1/2 )^k * (m+k)C(m) .
/
-----
k=0
Now, we know that
n
-----
\
f(m,n)/2 = > ( 1/2 )^k * (m+k-1)C(m) * (k-1)
/
-----
k=1
and since f(m,n)/2 = f(m,n) - f(m,n)/2, we get that
n-1
-----
\
f(m,n)/2 = > ( 1/2 )^k * ( (m+k)C(m) * k - (m+k-1)C(m) * (k-1) )
/
-----
k=1
- (1/2)^n * (m+n-1)C(m) * (n-1)
n-2
-----
\
= > ( 1/2 )^(k+1) * (m+k)C(m) * (m+1)
/
-----
k=0
- (1/2)^n * (m+n-1)C(m) * (n-1)
= (m+1)/2 * (g(m,n) - (1/2)^(n-1)*(m+n-1)C(m)) - (1/2)^n*(m+n-1)C(m)*(n-1)
therefore
f(m,n) = (m+1) * g(m,n) - (n+m) * (1/2)^(n-1) * (m+n-1)C(m) .
Now, E(m,n) = (n+1) * (1/2)^(m+2) * g(m,n) - (1/2)^(m+2) * f(m,n)
+ (m+1) * (1/2)^(n+2) * g(n,m) - (1/2)^(n+2) * f(n,m)
= (m+n) * (1/2)^(n+m+1) * (m+n)C(m) + (m-n) * (1/2)^(n+2) * g(n,m)
+ (n-m) * (1/2)^(m+2) * g(m,n) .
Setting m=n in this formula, we see that
E(n,n) = n * (1/2)^(2n) * (2n)C(n),
and applying Stirling's theorem we get the beautiful asymptotic formula
E(n,n) ~ sqrt(n/pi).
==> probability/lottery.p <==
There.
==> probability.
==> probability/oldest.girl.p <==?
==> probability/oldest.girl.s <==
There are four possibilities:
Oldest child Youngest child
1. Girl Girl
2. Girl Boy
3. Boy Girl
4. Boy Boy
If your friend says "My oldest child is a girl," he has eliminated cases
3 and 4, and in the remaining cases both are girls 1/2 of the time. If
your friend says "At least one of my children is a girl," he has
eliminated case 4 only, and in the remaining cases both are girls 1/3
of the time.
==>.
==> probability/particle.in.box.s <==
Let.
==> probability/pi.p <==
Are the digits of pi random (i.e., can you make money betting on them)?
==> probability/pi.s <==
No,.
==> probability/random.walk.p <==
Waldo?
==> probability/random.walk.s <==
I can show the probability that Waldo returns to 0 is 1. Waldo's
wanderings map to an integer grid in the plane as follows. Let
(X_t,Y_t) be the cumulative sums of the length 1 and length 2 steps
respectively taken by Waldo through time t. By looking only at even t,
we get the ordinary random walk in the plane, which returns to the
origin (0,0) with probability 1. In fact, landing at (2n, n) for any n
will land Waldo on top of his keys too. There's no need to look at odd
t.
Similar considerations apply for step sizes of arbitrary (fixed) size.
==> probability/reactor.p <==
There is a reactor in which a reaction is to take place. This reaction
stops if an electron is present in the reactor. The reaction is started
with 18 positrons; the idea being that one of these positrons would
combine with any incoming electron (thus destroying both). Every second,
exactly one particle enters the reactor. The probablity that this particle
is an electron is 0.49 and that it is a positron is 0.51.
What is the probability that the reaction would go on for ever?
Note: Once the reaction stops, it cannot restart.
==> probability/reactor.
==> probability/roulette.p <==
You?
==> probability/roulette.s <==
All you need to consider are the six possible bullet configurations
B B B E E E -> player 1 dies
E B B B E E -> player 2 dies
E E B B B E -> player 1 dies
E E E B B B -> player 2 dies
B E E E B B -> player 1 dies
B B E E E B -> player 1 dies
One therefore has a 2/3 probability of winning (and a 1/3 probability of
dying) by shooting second. I for one would prefer this option.
==> probability/transitivity.p <==
Can you number dice so that die A beats die B beats die C beats die A?
What is the largest probability p with which each event can occur?
==> probability/transitivity.s <==
Yes. The actual values on the dice faces don't matter, only their
ordering. WLOG we may assume that no two faces of the same or
different dice are equal. We can assume "generalised dice", where the
faces need not be equally probable. These can be approximated by dice
with equi-probable faces by having enough faces and marking some of
them the same.
Take the case of three dice, called A, B, and C. Picture the different
values on the faces of the A die. Suppose there are three:
A A A
The values on the B die must lie in between those of the A die:
B A B A B A B
With three different A values, we need only four different B values.
Similarly, the C values must lie in between these:
C B C A C B C A C B C A C B C
Assume we want A to beat B, B to beat C, and C to beat A. Then the above
scheme for the ordering of values can be simplified to:
B C A B C A B C A B C
since for example, the first C in the previous arrangement can be moved
to the second with the effect that the probability that B beats C is
increased, and the probabilities that C beats A or A beats B are
unchanged. Similarly for the other omitted faces.
In general we obtain for n dice A...Z the arrangement
B ... Z A B ... Z ...... A B ... Z
where there are k complete cycles of B..ZA followed by B...Z. k must be
at least 1.
CONJECTURE: The optimum can be obtained for k=1.
So the arrangement of face values is B ... Z A B ... Z. For three dice
it is BCABC. Thus one die has just one face, all the other dice have two
(with in general different probabilities).
CONJECTURE: At the optimum, the probabilities that each die beats the
next can be equal.
Now put probabilities into the BCABC arrangement:
B C A B C
x y 1 x' y'
Clearly x+x' = y+y' = 1.
Prob. that A beats B = x'
B beats C = x + x'y'
C beats A = y
Therefore x' = y = x + x'y'
Solving for these gives x = y' = 1-y, x' = y = (-1 + sqrt(5))/2 = prob.
of each die beating the next = 0.618...
For four dice one obtains the probabilities:
B C D A B C D
x y z 1 x' y' z'
A beats B: x'
B beats C: x + x'y'
C beats D: y + y'z'
D beats A: z
CONJECTURE: for any number of dice, at the optimum, the sequence of
probabilities abc...z1a'b'c...z' is palindromic.
We thus have the equalities:
x+x' = 1
y+y' = 1
z+z' = 1
x' = z = x + x'y' = x + x'y'
y = y' (hence both = 1/2)
Solving this gives x = 1/3, z = 2/3 = prob. of each die beating the next.
Since all the numbers are rational, the limit is attainable with
finitely many equiprobable faces. E.g. A has one face, marked 0. C has
two faces, marked 2 and -2. B has three faces, marked 3, -1, -1. D has
three faces, marked 1, 1, -3. Or all four dice can be given six faces,
marked with numbers in the range 0 to 6.
Finding the solution for 5, 6, or n dice is left as an exercise.
-- ____.
Martin Gardner (of course!) wrote about notransitive dice, see the Oct '74
issue of Scientific American, or his book "Wheels, Life and Other Mathematical
Amusements", ISBN 0-7167-1588-0 or ISBN 0-7167-1589-9 (paperback).
In the book, Gardner cites Bradley Efron of Stanford U. as stating that
the maximum number for three dice is approx .618, requiring dice with more
than six sides. He also mentions that .75 is the limit approached as the
number of dice increases. The book shows three sets of 6-sided dice, where
each set has 2/3 as the advantage probability. | http://www.faqs.org/faqs/puzzles/archive/probability/ | CC-MAIN-2018-22 | refinedweb | 5,731 | 72.05 |
This article explains C# basics with C# code examples including C# data types, class, objects, properties, and methods. You'll also learn basic OOP concepts such as overloading, polymorphism, abstraction, and interfaces. The article also covers common iteration statements including for, while, do while, and foreach.
This article explains basics of C#. C# is an object-oriented programming language. The foundation of an object-oriented programming is a type system universe. In the type system universe, everything revolves around classes and objects.
In C#, a class is a representation of a type of object. It is a blueprint / plan / template that describes the details of an object. In simple terms, a class is a concept and an object is real entity with value.
For example, a Person is a class. A person has some attributes such as a person has a name, a date of birth, and sex. All real people are objects that are of a Person type. Each person object has a name, a date of birth, and sex but the values of these attributes may be different. The attributes of a class are called properties.
Not only a person has attributes, a person can also do something. For example, a person can eat, sleep, talk, or walk. The activities of classes are represented in form of events and methods.
Declaring a class
The following code example shows how to create a class and objects.
A class always starts with the keyword class followed by a class accessibility level, public or private. The following code snippet declares a public class named Person. That means, the class is public and accessible anywhere in the code.
A class has an access modifier. The access modifiers sets the boundaries and access levels of classes. C# supports public, private, protected, internal, and protected internal access modifiers. Learn more about access modifiers here: C# Access Modifiers with Examples.
Creating objects
Objects in C# are created using the new keyword. The following code snippet creates an object of class Person.
Object p in the above code is also called an instance of Person class.
You can create as many objects required from a class. The following code snippet creates three instances of Person.
Each instance in the above code reserves its own memory allocation in a computer memory.
Learn more about objects and classes in C# here: Object-Oriented Programming Using C#.NET
Types of classes
C# language has different types of classes, such as static classes, abstract classes, partial classes, and sealed classes. Learn more about classes here: Types of Classes In C#
Class Members
A class consists of members such as fields, variables, properties, events, enumerations, stucts, and methods. Each member of the class has a specific purpose.
Properties
A property in C# is a member of a class that provides a flexible mechanism for classes to expose private fields. Internally, properties are special methods called accessors..
The following code snippet adds three properties, name, sex, and student to the Person class. As you can see from the code, all three properties provide access to three fields of the class.
To set public properties of a class, you need the setter of a property to set the value. The following code snippet sets the values of properties of a Person.
To access a class properties, you use the getter of the property. The following code snippet gets the values of properties of a Person.
Learn more about properties, read Understanding Properties in C#
Classes in C# has a purpose. Not only classes represent data, but classes can also do process something. The processing is usually is the execution of the code to achieve some task. The processing in a class is done via events and methods.
A method usually a code snippet that does a specific task. For example, you may have a class name Calculator with methods, Add and Subtract. The Add method takes two numbers and returns the total of the two. The Subtract method takes two numbers and subtracts the second number from the first and returns the result.
The following code snippet declares a method, Sleep in Person class. The method returns a string.
The following code snippet calls the method, Sleep.
If you want to learn more about methods and object-oriented programming, here is a free eBook download.
Let’s look at the following code snippet. In the Person class, we’ve a method, Add with three different signatures. The method will be executed based on the arguments passed by the caller class.
The following code class the Add method twice with different signatures and both time, different code is executed in the Person class.
Method overriding is a language feature that allows a class to override a specific implementation of a a base class. The derived class can give its own definition and functionality to the method. However, the method signature must be the same as of the base class.
Here is an article on method overriding.
C# is an object-oriented programming language. Inheritance is one of the key features of an object-oriented programming language.
Inheritance allows a class to be reused by other classes that may need the same functionality. Inheritance works as a parent and child, where the parent is called a base class and the child is called a derived class. A derived class inherits almost all of the functionality of a base class unless it is restricted by private access modifier.
Let’s see, we derive a class, Author from class Person. As you can see from the following code example, the Author class has one property, Genre, and one method, Write().
Now, when we create an Author class instance, we can actually access the Person (the base class) class’s members. The following code creates an instance of the Author class and sets its Name, Sex, Student properties, that are declared in the Person class.
Here is a more detailed article on Inheritance:
Consider the above diagram where Fish is a base class. In class Fish, there is a method, Eat. Fish eat. There are two types of Fish, Dolphin and Sword. The both eat different food but they eat. The abstraction makes sure that both Fish inherited classes implement the Eat method. The inherited
classes now can override the eat() method and provide its own implementation of it.
Interfaces are not often use types in C#. C# language support single interface.
An interface provides a skeleton of a class that must be implemented by the inherited class. It enforces derived classes to implement certain functionality.
In C#, a class is a reference type and an interface is a value type.
C# code
Want to learn more about interfaces in C#, here is a detailed article on interfaces in C#: Interfaces Best Examples in C#
The if..else statement
The if statetement checks for a condition and if the condition is true, code is executed. The following code check if the condition is true.
The if..else statement checks a condition and executes different code if the condition is true or not.
The if..else if .. statement can have several if and else statement.
The For loop
The for statement executes a statement or a block of statements while a specified Boolean expression evaluates to true. The following code snippet uses a for loop to execute code until the counter is < 10.
The do loop.
Example
The While loop statement.
The foreach loop
A foreach loop operates on collections of items such as an array. The following code snippet loops through an array of numbers and displays array numbers.
Switch case
The switch statement is a control statement that handles multiple selections by passing control to one of the case statements within its body.
The following code snippet checks a matching expression and executes the matched case statement.
This article is a basic introduction to C# language. If you want to learn more C#, here is a list of several beginner tutorials.
View All
View All | https://www.c-sharpcorner.com/UploadFile/e9fdcd/basics-of-C-Sharp/ | CC-MAIN-2021-31 | refinedweb | 1,338 | 66.54 |
Next: Creating a distribution, Previous: Adding a configure flag, Up: Tutorial [Contents][Index]
Some programs need to access data files when they are run. For example, suppose we print a message from a file, message.
hello.c:
#include <config.h> #include <stdio.h> #include <stdlib.h> #define MESSAGE_FILE "message" int main (void) { FILE *in; char *line = 0; size_t n = 0; ssize_t result; in = fopen (MESSAGE_FILE, "r"); if (!in) exit (1); result = getline (&line, &n, in); if (result == -1) exit (1); printf ("%s", line); }
Instead of hard-coding the location of the data file, we could define ‘MESSAGE_FILE’ in Makefile.am:
AM_CPPFLAGS = -DMESSAGE_FILE=\"$(pkgdatadir)/message\"
AM_CPPFLAGS specifies C preprocessor flags for
make to
use when
building the program.
pkgdatadir is a Make variable that is
automatically set from the
pkgdatadir Autoconf output variable,
which in turn is among the several output variables that Autoconf
configure scripts always set specifying installation directories. By
default, this variable will have the value
‘/usr/local/share/hello’, so the program will look for the data file at
‘/usr/local/share/hello/message’. The user of the program can
change this value by giving the --datadir option to
configure.
The ‘AM_’ prefix on
AM_CPPFLAGS is there to distinguish it
from the
CPPFLAGS Makefile variable, which may be set by the
configure script and/or overridden from the command-line.
We also need the following line in Makefile.am to distribute and install the data file:
dist_pkgdata_DATA = message
The ‘DATA’ part of the variable name corresponds to the kind of
object being listed in the variable’s value, in this case data files.
The value here is a single file called ‘message’. The
‘pkgdata_’ prefix indicates that the file should be installed in
pkgdatadir by ‘make install’, and the ‘dist_’ prefix
indicates that the file should be included in the distribution archive
created by ‘make dist’.
Next: Creating a distribution, Previous: Adding a configure flag, Up: Tutorial [Contents][Index] | http://buildsystem-manual.sourceforge.net/Adding-a-data-file-to-be-installed.html | CC-MAIN-2017-17 | refinedweb | 322 | 53.51 |
The Atlassian Community can help you and your team get more value out of Atlassian products and practices.
Hello All,
How to show and hide custom fields based on other custom field value selection?
Currently, i am using below script for show and hide custom field based on other field(single select list) value selection in behaviors. In the below script if option No is selected from the custom field Reason why custom field will display below that.
def otherFaveField = getFieldByName("Reason why") //Give a text field custom field name that you have created
def faveFruitField = getFieldById(getFieldChanged())
def selectedOption = faveFruitField.getValue() as String
def isOtherSelected = selectedOption == "NO" //instead of Other You can give another option that you want
otherFaveField.setHidden(! isOtherSelected)
otherFaveField.setRequired(isOtherSelected)
Same way i want to show and hide two custom fields based on the single select list custom field value selection.
For Example:
If value 1 is selected from the custom field (single select list), two custom fields called Test URL and ERL URL should display below it. If value 2 is selected, both custom fields should hide.
Any help would be greatly appreciated.
Thanks in advance,
Mani
Analitza
#include <plotter3d_es.h>
Detailed Description
This class manages the OpenGL scene where the plots will be rendered.
Plotter3DES provides an agnostic way to manage a 3D scene for drawing math plots. It contains just OpenGL calls, so it is coupled with neither QWidget nor QtQuick. This class needs the PlotsModel (to create the geometry for 3D plots) and also exposes some methods to change the scene (for example, to hide/show the axes or reference planes).
Definition at line 55 of file plotter3d_es.h.
Constructor & Destructor Documentation
Member Function Documentation
Fix the rotation around direction.
Hide the current indicator of the axis.
Query if the rotation is fixed by a specific direction.
Definition at line 102 of file plotter3d_es.h.
Get information about the current rotation approach: if it returns true, then the rotation is simple.
Definition at line 114 of file plotter3d_es.h.
Definition at line 79 of file plotter3d_es.h.
Definition at line 81 of file plotter3d_es.h.
Definition at line 84 of file plotter3d_es.h.
Force OpenGL to render the scene.
QGLWidget should call updateGL in this method.
Implemented in Plotter3DRenderer.
sets the view to the initial perspective
Rotates by dx and dy in screen coordinates.
Saves the currently displayed plot in url. Returns whether it was saved successfully.
Set the scale of all the scene by factor.
Query if there is a valid axis arrow for x and y screen coordinates.
Definition at line 82 of file plotter3d_es.h.
Definition at line 133 of file plotter3d_es.h.
If the flag simplerot is true, the rotation ignores any fixed or free direction.
Show a little indicator (as a hint) next to the arrow of axis.
Force the plots from start to end to be recalculated.
The documentation for this class was generated from the following file:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Fri Jan 17 2020 03:25:57 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/4.x-api/kdeedu-apidocs/analitza/html/classAnalitza_1_1Plotter3DES.html | CC-MAIN-2020-05 | refinedweb | 330 | 59.7 |
Werkzeug originally had a magical import system hook that enabled everything to be imported from one module and still loading the actual implementations lazily as necessary. Unfortunately this turned out be slow and also unreliable on alternative Python implementations and Google’s App Engine.
Starting with 0.7 we recommend against the short imports and strongly encourage starting importing from the actual implementation module. Werkzeug 1.0 will disable the magical import hook completely.
Because finding out where the actual functions are imported and rewriting them by hand is a painful and boring process we wrote a tool that aids in making this transition.
For instance, with Werkzeug < 0.7 the recommended way to use the escape function was this:
from werkzeug import escape
With Werkzeug 0.7, the recommended way to import this function is directly from the utils module (and with 1.0 this will become mandatory). To automatically rewrite all imports one can use the werkzeug-import-rewrite script.
You can use it by executing it with Python and with a list of folders with Werkzeug based code. It will then spit out a hg/git compatible patch file. Example patch file creation:
$ python werkzeug-import-rewrite.py . > new-imports.udiff
To apply the patch one of the following methods work:
hg:
hg import new-imports.udiff
git:
git apply new-imports.udiff
patch:
patch -p1 < new-imports.udiff
A few things in Werkzeug will stop being supported; for others, we're suggesting alternatives even if they will stick around for a longer time.
Do not use: | http://werkzeug.pocoo.org/docs/0.9/transition/ | CC-MAIN-2014-41 | refinedweb | 261 | 58.69 |
C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 5 of n
In part 6, Stephan guides us into the logical and beautiful world of algorithms. This is where the STL shines.
Thanks for the video!
It is possible to write a recursive lambda, if you use a Fixed Point Combinator.
From the video I get the sense that recursive lambdas are a bad idea.
Why not? Wouldn't the compiler eliminate the tail recursion and maybe even inline it?
I'm going to admit that I didn't know how to implement this in C++0x.
So the snippet is in vb.net. (FTW).
Fibonacci Sequence.
''' <summary>
''' A Fix point Combinator
''' </summary>
Public Function Fix(Of T, TResult)(ByVal f As Func(Of Func(Of T, TResult), Func(Of T, TResult))) As Func(Of T, TResult)
    Return Function(x)
               Return f(Fix(f))(x)
           End Function
End Function

' A example (Fibonacci number)
'Module Module1
'  Sub Main()
'    Dim fib = Fix(Of Integer, Integer)(
'      Function(f)
'        Return Function(x)
'                 Return If(x <= 1, 1, f(x - 1) + f(x - 2))
'               End Function
'      End Function)
'    Dim fr = fib(11)
'  End Sub
'End Module
VB.NET on a C++ lecture thread. Are you crazy, man?
McCoy to the bridge!!
C
If I were sane would I be using vb.net.
My question is still a valid one though.
Observation. Emoticons are different to what is in the list.
[AdamSpeight2008]
> From the video I get the sense that recursive lambdas are a bad idea.
> Why not? Wouldn't the compiler eliminate the tail recursion and maybe even inline it?
Syntactically, trying to get an *unnamed* function object to call itself is a huge headache. (There are ways, and they're all bad.) Getting *named* function objects to call themselves is trivial, and perfectly fine. Here's what that looks like:
C:\Temp>type fib.cpp
#include <iostream>
#include <ostream>
using namespace std;

struct fib {
    int operator()(const int n) const {
        return n < 2 ? n : (*this)(n - 1) + (*this)(n - 2);
    }
};

int main() {
    fib f;

    for (int i = 0; i < 20; ++i) {
        cout << f(i) << " ";
    }

    cout << endl;
}

C:\Temp>cl /EHsc /nologo /W4 /MT /O2 /GL fib.cpp
fib.cpp
Generating code
Finished generating code

C:\Temp>fib
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181
WOW!!
Thank you, Charles , AdamSpeight2008 and STL.
Good video and Good Article.
Another cristal clear presentation. Thanks.
You need to keep doing them, even when you're done with the STL. You're a born teacher!
By the way, since you talked about it, is that the latest one?
@STL: The fact that you've taken the time to reply to my question makes this series extra special.
It's like having a lecture followed by a tea & biscuit chat with the professor.
As an observation you may have missed a few subtle things in the snippet I provided.
Automatic type-inference has obscured it.
The variable fib is actually a lambda and that Fix function can also accept a lambda as an argument.
So forgive if the syntax is off in this. I think the code is something more like this.
auto fib = fix([](f) {
    return [f](x) {
        return x < 2 ? x : f(x - 1) + f(x - 2);
    };
});
auto fr = fib(11);
Note also that function Fix also returns a Lambda.
blah, Spetum, Garp: You're welcome!
[Garp]
> You need to keep doing them, even when you're done with the STL. You're a born teacher!
That's very kind of you to say. I may do an advanced series on the STL's internal implementation, or on Core Language features, especially C++0x features. Even walking through something "simple" like template argument deduction would probably be valuable.
> By the way, since you talked about it, is that the latest one?
>
Yes.
[AdamSpeight2008]
> As an observation you may have missed a few subtle things in the snippet I provided.
I only glanced at it; the last time that I read any form of Basic was in the 20th century, and I'd like to keep it that way.
I find lambda calculus-speak to be intensely confusing, and that's despite doing Scheme (and some LISP) for years in college. Surprisingly, C++ is a far better language for functional programming, because its machine model is simple. In C++, function objects are just classes, and everyone knows how member functions and data members work. The only enlightening sentence in is this: "In programming languages that support function literals, fixed point combinators allow the definition and use of anonymous recursive functions, i.e. without having to bind such functions to identifiers." In C++, if you want recursion, you should simply use a named function object class, as I demonstrated above.
The lecture was too short, only 37 minutes, time flies when watching something interesting, C++0x goodness.
More, please
and having STL answering questions, is an extra bonus point.
Now if you only could find a competent video editor...
The sound quality of the "High Quality WMV" is not so high quality at all
Sounds like it was encoded wrong, perhaps wrong settings were used ? Sloppy!
Haven't you fired that guy yet, Charles ?
At least no black bars this time....
These kinds of mistakes could be avoided by checking before release ! Same old Vista mistake over and over again. ..sigh..
15 hours ago, STL wroteI may do an advanced series on the STL's internal implementation, or on Core Language features, especially C++0x features. Even walking through something "simple" like template argument deduction would probably be valuable.
Yes, please, to all of the above!
Keep 'em coming Stephan. You are definitely one of the best/clearest presenters on Ch.9.
Another awesome vid by Stephen. Thanks Charles and Steven for the awesome content.
Stephan, is there any way to use any of the STL algorithms on multi-dimensional arrays?
for example.
int mines = [][MAX_DEPTH][9];
and also is a vector of a vector better for that.
Tom
> Perhaps it has to do with the application you use to play the media?
A valid point, so i tested it in both 'Windows Media Player' and VLC, and still the same low quality.
> There is nothing wrong with the audio.
Perhaps you can't hear badness ?
> At any rate, tone down your hostility please...
I'm not hostile, not even close, i'm just honest, speaks the truth and my observations.
You just don't like me pointing out the bad / problems and that is your problem not mine.
If you don't like the feedback you're getting perhaps it's time for you to actually fix the problems reported and stop trying to sweep them under the rug ( like microsoft likes to do btw, sadly enough)
Though i should be a bit hostile since you ignore my request for your e-mail, for the detailed problem report. But i won't.
But i guess that just shows how interested you are in fixing problems.
But refusing to fix bug is normal at microsoft so...
What is it you call it "by design" ?
PS: That was not hostile, it was an observation.
Anyway, thanks for reading my posts even though you seem to ignore what you read.
Have a good day.
@Mr Crash: There is nothing wrong with the audio. If anything, it is a bit too loud (it could stand to be a little less). I have no problems with honesty. In this case, the problem you express with passion is not a problem (or at least not one what I can hear using WMP on Windows 7).
Can you be more specific? Also, let's take this conversation off of this thread. Send me a mail through the contact us form (I get those mails....).
C
[Mr Crash]
> The lecture was too short, only 37 minutes, time flies when watching something interesting, C++0x goodness.
Like Parts 4 and 5, Parts 6 and 7 were filmed back-to-back, so Part 7 is already in the pipeline.
> The sound quality of the "High Quality WMV" is not so high quality at all
The first 20 seconds of the High Quality WMV, the MP3, and the MP4 sound perfectly fine to me (through my cheap headphones; my awesome wireless headphones are at home). The MP3 seemed to have greater compression artifacts than the others.
Maybe I just sound (and look) weird.
By the way, the end of Part 7 has a hilarious mishap near the end that was entirely my fault. See if you notice it when the video comes out.
[ryanb]
> Keep 'em coming Stephan. You are definitely one of the best/clearest presenters on Ch.9.
Thanks - comments like this definitely motivate me to keep going.
[Tominator2005]
> Stephan, is there any way to use any of the STL algorithms on multi-dimensional arrays?
It depends on what you're trying to do. Writing an iterator to present a linear view of a multi-dimensional container is certainly possible, and not too difficult.
> and also is a vector of a vector better for that.
Yes. Vectors are superior to built-in arrays for many reasons, one of which being that vectors know their own sizes. (std::array also knows its own size. If you look at my Nurikabe solver, I used a std::array when I knew the size at compile-time.)
Hi Stephan, thanks for these great lectures! They are very, very useful and unique to have an introduction into the new STL features available in VS2010.
There is one special subject that I would love to hear more about in one of your future STL videos - and that is: Allocators..
I'm especially interested in this subject because I am maintaining a library (currently still in VS2008) which deals with very large datasets (detailed road networks for Europe) and does shortest path calculations on the network based on a preprocessed graph which allows for much faster path calculations.
This mentioned preprocessing uses STL and I'm having constant trouble with memory management, especially memory fragmentation, because the algorithm used by this preprocessing happens to need a huge amount of inserts and removes of small objects from map and list containers. I've already done a lot of changes and optimizations of the algorithm itself but I am pretty sure that I could make a major step forward by replacing the STL default allocator by some other allocator which can help to reduce memory fragmentation.
So I was happy to see now a thing like "allocator_chunklist" and the other allocators in the library which allow to customize memory management in STL.
Well, perhaps this wish is too special for your introduction, but it would be great if you could spend a few minutes on this subject sooner or later!
Anyway, if so or not, I'm prepared for a download of the next lecture and hope you still have a lot of stuff to talk about!
Thanks again for your very motivating lectures!
The ability to make a workable Y-Combinator (fixed point recursion of anonymous functions) isn't really possible (directly) with the MS version of the STL at this moment, but you can make your own class for the combiner, since this seems to be what's limiting:
// IMPLEMENT _STD tr1::bind

// PLACEHOLDERS
template<int _Nx>
class _Ph { // placeholder
};
Seeing that in the VS2010 headers make me sad inside
BUT an implementation should be fine if you use boost::bind or tbb::combine_each to allow strict binding for a solution with 1 combiner function for the usage in the C++0x lambda calculation.
aha did anybody else notice that the program is called meow? ;)
These short classes are excellent, thx for doing them!
Hi STL,
First I just want to say I've been following these videos since they started and I love them, you have a natural knack for disseminating quality information
It just so happened that as I was watching this video I was also working on an old and nefarious piece of code: A network message that is essentially a string in char16_t* format that contains 5 integers, formatted as strings and separated by spaces, followed by a text message (this is actually a network message for SpatialChat in the MMORPG project I'm working on). The 5 integers at the beginning are id's for things such as the mood of the speaker, the form of delivery (shouting, whispering, etc).. It was my task to resolve this issue and given that the bulk of the processing is in a while loop I wanted to see if this could be eliminated using STL algorithms.
Here is the original code as it existed in the source, note the lack of comments which made figuring out what the code was actually doing a big chore:
string chatData;
message->getStringUnicode16(chatData);
chatData.convert(BSTRType_ANSI);

int8* data = chatData.getRawData();
uint16 len = chatData.getLength();

char chatElement[5][32];
uint8 element = 0;
uint8 elementIndex = 0;
uint16 byteCount = 0;

while(element < 5) {
    if(*data == ' ') {
        chatElement[element][elementIndex] = 0;
        byteCount++;
        element++;
        data++;
        elementIndex = 0;
        continue;
    }

    chatElement[element][elementIndex] = *data;
    elementIndex++;
    byteCount++;
    data++;
}

// Convert the chat elements to logical types before passing them on.
uint64_t chat_target_id;

try {
    chat_target_id = boost::lexical_cast<uint64>(chatElement[0]);
} catch(boost::bad_lexical_cast &) {
    chat_target_id = 0;
}

SocialChatType chat_type_id = static_cast<SocialChatType>(atoi(chatElement[1]));
MoodType mood_id = static_cast<MoodType>(atoi(chatElement[2]));

string chatMessage(data);
After careful examination you can see that the string is converted from utf16 to ansi format, then the string is looped over looking for the 5 integers (in string format), and finally the leftover data is placed into a new string. Very complicated, brittle, and it fails horribly in the case of strings that require utf16 format. Someone in the past had also decided it would be a good idea to typedef a custom BString class to string, which has been an endless source of confusion as well.
Here is my "homework" solution which makes use of several STL algorithms to achieve the desired results:
// Get the unicode data and convert it to ansii, then get the raw data.
std::u16string chat_data = message->getStringUnicode16();
std::vector<std::u16string> tmp;
std::vector<uint64_t> chat_elements;
int elements_size = 0;

// The spatial chat data is all in a ustring. This consists of 5 chat elements
// and the text of the spatial chat. The 5 chat elements are integers that are
// sent as strings so here we use an istream_iterator which splits the strings
// at spaces.
std::basic_istringstream<char16_t> iss(chat_data);
std::copy_n(std::istream_iterator<std::u16string, char16_t, std::char_traits<char16_t>>(iss),
    5, std::back_inserter<std::vector<std::u16string>>(tmp));

// Now we use the STL transform to convert the vector of std::u16string to a vector of uint64_t.
try {
    std::transform(tmp.begin(), tmp.end(), std::back_inserter<std::vector<uint64_t>>(chat_elements),
        [&elements_size] (const std::u16string& s) -> uint64_t {
            // Convert the element to a uint64_t
            uint64_t output = boost::lexical_cast<uint64_t>(std::string(s.begin(), s.end()));

            // After successful conversion update we need to store how long
            // the string was (plus 1 for the space delimiter that came after it).
            elements_size += s.size() + 1;
            return output;
        });
} catch(const boost::bad_lexical_cast& e) {
    LOG(ERROR) << e.what();
    return; // We suffered an unrecoverable error, bail out now.
}

// After pulling out the chat elements store the rest of the data as the spatial text body.
std::u16string spatial_text(chat_data.begin()+elements_size, chat_data.end());
The above works like a charm! My only complaint is the lack of string literal identifiers in vc2010 when working with std::u16string and std::u32string, however, it's not a dealbreaker (ie., I can't do something like:
std::u16string mystring(u"Some text goes here");
).
Thanks again for a great series, you're definitely making an impact in the C++ community!
[Slauma]
> So I was happy to see now a thing like "allocator_chunklist" and the other allocators in the library which allow to customize memory management in STL.
We added the Dinkum Allocators Library to VS 2010. Picking it up wasn't too much work, and we hoped that it would help customers with advanced allocator needs.
Because it's a non-Standard library, it's subject to change in the future, especially in response to customer feedback.
> I'm especially interested in this subject because I am maintaining a library...
Several suggestions:
1. Upgrade to VS 2010. Its move semantics increases the performance of STL-using applications. The magnitude of the increase depends on your application (it could range from unobservable to order-of-magnitude), but it certainly can't hurt. Additionally, _ITERATOR_DEBUG_LEVEL (the successor to _SECURE_SCL) now defaults to 0 in release mode. If you weren't aware of this before, it exacted up to a 2x perf penalty in VS 2005 and 2008. It is possible to manually disable it in 2005/2008, but doing so is fraught with peril; upgrading to 2010 is much, much better (also thanks to our deterministic linker checks for _ITERATOR_DEBUG_LEVEL mismatch).
2. If you're compiling for 32-bit and running on XP (or its server counterpart, Server 2003), then you're not getting Windows' Low Fragmentation Heap by default. It can be manually requested, and doing so is worth trying (it takes like 5 lines and immediately affects your entire application). When you compile for 64-bit, the CRT enables the LFH for you. And when you run on Vista or higher, Windows automagically detects allocations that would benefit from the LFH, and enables it for you; I am told that this automagic detection is so good, there is no benefit to manually enabling the LFH. That's why only 32-bit programs running on XP are affected.
3. Consider using the Boost Graph Library; replumbing your application could be a significant amount of work, but the BGL is very advanced.
4. Parallelization, if possible.
5. Looking into allocators is definitely appropriate here.
> Well, perhaps this wish is too special for your introduction, but it would be great if you could spend a few minutes on this subject sooner or later!
I have a blog post about writing STL allocators, , but it covers satisfying the (tedious) requirements, and doesn't demonstrate an actually useful allocator (or a stateful allocator, although it sketches out where stateful allocators behave differently).
As it so happens, I've been experimenting with allocators in my Nurikabe solver, whose performance is currently dominated by set manipulation (list, map, and set are all node-based containers, so they're the same as far as allocators are concerned except for the size of the nodes). More recently, I've discovered that avoiding set manipulation in one crucial part of the algorithm, instead using vectors, is even better for performance - but I may go back and polish up the allocator changes later.
Here is my code, if you'd like to get some ideas from it. WARNING DANGER HAZARD CAUTION, this is HIGHLY EXPERIMENTAL and is best regarded as a SKETCH of what a real allocator should do. Using this verbatim in production code would be an EXTREMELY BAD IDEA. (For example, it currently depends on using namespace std; which makes it unsuitable for inclusion in a header, and that is the least of its issues.)
#include <stddef.h>
#include <stdlib.h>
#include <new>
using namespace std;

class FancyBlocks {
    template <typename T> friend class FancyAllocator;

private:
    static void * alloc(const size_t bytes) {
        if (should_use_blocks(bytes)) {
            void *& block = get_block(bytes);

            if (block) {
                void * const ret = block;
                void *& guts = *static_cast<void **>(block);
                block = guts;
                return ret;
            }
        }

        void * const pv = malloc(bytes);

        if (pv == nullptr) {
            throw bad_alloc();
        }

        return pv;
    }

    static void dealloc(void * const p, const size_t bytes) {
        if (should_use_blocks(bytes)) {
            void *& block = get_block(bytes);
            void *& guts = *static_cast<void **>(p);
            guts = block;
            block = p;
        } else {
            free(p);
        }
    }

    static bool should_use_blocks(const size_t bytes) {
        return bytes <= THRESHOLD && bytes % sizeof(void *) == 0;
    }

    static void *& get_block(const size_t bytes) {
        return s_blocks[(bytes - 1) / sizeof(void *)];
    }

    static const size_t THRESHOLD = 128;

    __declspec(thread) static void * s_blocks[THRESHOLD / sizeof(void *)];
};

__declspec(thread) void * FancyBlocks::s_blocks[FancyBlocks::THRESHOLD / sizeof(void *)];

//

template <typename T> class FancyAllocator {
public:
    T * allocate(const size_t n) const {
        if (n == 0) {
            return nullptr;
        }

        if (n > max_size()) {
            throw bad_alloc();
        }

        const size_t bytes = n * sizeof(T);
        return static_cast<T *>(FancyBlocks::alloc(bytes));
    }

    void deallocate(T * const p, const size_t n) const {
        const size_t bytes = n * sizeof(T);
        FancyBlocks::dealloc(p, bytes);
    }

    typedef T * pointer;
    typedef const T * const_pointer;
    typedef T& reference;
    typedef const T& const_reference;
    typedef T value_type;
    typedef size_t size_type;
    typedef ptrdiff_t difference_type;

    template <typename U> struct rebind {
        typedef FancyAllocator<U> other;
    };

    FancyAllocator() { }
    FancyAllocator(const FancyAllocator&) { }
    template <typename U> FancyAllocator(const FancyAllocator<U>&) { }
    ~FancyAllocator() { }

    bool operator==(const FancyAllocator&) const { return true; }
    bool operator!=(const FancyAllocator&) const { return false; }

    size_t max_size() const {
        return static_cast<size_t>(-1) / sizeof(T);
    }

    T * address(T& r) const {
        return addressof(r);
    }

    const T * address(const T& s) const {
        return addressof(s);
    }

    void construct(T * const p, const T& t) const {
        new (static_cast<void *>(p)) T(t);
    }

    void destroy(T * const p) const {
        p->~T();
        (void) p; // A compiler bug causes it to believe that p->~T() doesn't reference p.
    }

    template <typename U> T * allocate(const size_t n, const U *) const {
        return allocate(n);
    }

private:
    FancyAllocator& operator=(const FancyAllocator&);
};
This increases my performance on nikoli_9 from 172.772 seconds to 111.355 seconds, a 55% speedup. Briefly summarized, the allocator just calls malloc()/free() except for certain sizes. When 128 bytes or less (this is rather arbitrarily chosen; my nodes of interest are 20 bytes on x86), and evenly divisible by the size of a pointer (true for nodes, which contain pointers - I did this to avoid worrying about alignment), it activates a special scheme. For each size, on each thread (remember multithreading!), there's a void * block. If this pointer is null, it malloc()s a fresh chunk of data, and returns that. If this pointer is not null, it removes the block from a singly-linked list by hand (this is the crazy casting game), and returns that block. When a block is available, this requires no locking, and no calls to the CRT or Windows. It's just a few tests and pointer surgery. Deallocation is the reverse; for non-magic sizes it just calls free(). For magic sizes, it adds the block to the singly-linked list.
Note that this does not handle allocating memory from one thread and deallocating it from another, which ordinary new/malloc() handles just fine (at the cost of a lock). Also, blocks are currently never freed; with more thought (and especially stateful allocators) this would be possible. Here are some helper typedefs that I used:
template <typename K, typename V> struct FancyMap {
    typedef map<K, V, less<K>, FancyAllocator<pair<const K, V>>> type;
};

template <typename T> struct FancyQueue {
    typedef queue<T, list<T, FancyAllocator<T>>> type;
};

template <typename T> struct FancySet {
    typedef set<T, less<T>, FancyAllocator<T>> type;
};

template <typename T> struct FancyVector {
    typedef vector<T, FancyAllocator<T>> type;
};

typedef FancySet<pair<int, int>>::type CoordSet;
typedef FancyVector<pair<int, int>>::type CoordVector;
Hopefully this should give you some idea of what is possible with allocators. Anything is possible as long as you are careful to satisfy the requirements - allocation is tricky, so the requirements are not trivial.
[HeavensRevenge]
> Seeing that in the VS2010 headers make me sad inside
I don't know what you're referring to. That's a valid implementation of N3126 20.8.10.1.3 "Placeholders" [func.bind.place]. Our bind() implementation has (many) bugs, but that is not one of them.
[Nils]
> aha did anybody else notice that the program is called meow?
Nice catch - I love cats.
[devcodex]
> First I just want to say I've been following these videos since they started and I love them, you have a natural knack for disseminating quality information
Thanks for continuing to watch them!
>.
Slight correction: UTF-16 can be losslessly converted to and from UTF-8. Both are encodings of Unicode. However, converting UTF-16 (which is Unicode) to ANSI (which is non-Unicode) loses information; I believe you meant ANSI here instead of UTF-8. In VC10, I fixed bugs like this in the CRT.
(By the way, Unicode is full of headaches, but ANSI is far far worse, as I'm sure you've discovered.)
> Here is my "homework" solution which makes use of several STL algorithms to achieve the desired results:
Nice.
Here's a possible further improvement: given your description of the problem, I'd use a const wregex r(L"(\\d+) (\\d+) (\\d+) (\\d+) (\\d+) (.*)") and perform a regex_match() which tests whether the whole string matches the whole regex. Having passed wsmatch m into regex_match(), I could parse m[1] through m[5] as integers (stoi(), new in VC10, can be used for this), and m[6] would be the rest of the string.
> My only complaint is the lack of string literal identifiers in vc2010 when working with std::u16string and std::u32string
Unicode string literals are a C++0x feature that's on our radar, but it's pretty far down on our list of priorities.
> Thanks again for a great series, you're definitely making an impact in the C++ community!
I'm very happy to hear that.
@STL
Thank you for the feedback! Here's the updated working solution (in roughly half the amount of code as my first attempt and less than a third of that of the original legacy code):
std::u16string chat_data = message->getStringUnicode16();
std::wstring tmp(chat_data.begin(), chat_data.end());

const std::wregex p(L"(\\d+) (\\d+) (\\d+) (\\d+) (\\d+) (.*)");
std::wsmatch m;

if (! std::regex_match(tmp, m, p)) {
    LOG(ERROR) << "Invalid spatial chat message format";
    return;
}
Looking at this solution compared to the original while loop that did the same thing it's much easier now to determine the intention of the code just by reading it. I am continually amazed at just how powerful and elegant the stl is and at the impact the new standard has on the language, what an awesome experience it must be to have a career working so closely with it!
> 4 days ago, Charles wrote:
> ...let's take this conversation off of this thread. Send me a mail through the contact us form (I get those mails....).
Ah finally and yes, this isn't the place. An e-mail coming your way soon.
Just don't make me waste my time writing an e-mail that's just going to be ignored, because that would be unacceptable, you understand, right?
Could you perhaps explain when to use 'get_temporary_buffer' (20.9.8 Temporary buffers [temporary.buffer]) ?
The standard is vague about what to use it for.
Btw have you ever thought about writing a book about STL ?
You're a good teacher.
Oh and there's nothing wrong with your voice or appearance, it is an encoding/hardware problem
[devcodex]
> Looking at this solution compared to the original while loop that did the same
> thing it's much easier now to determine the intention of the code just by reading it.
Yay! Yep, that's exactly right. It's also less likely to contain bugs, especially as the code is modified over time (what if you add a sixth number or change the format in another way?).
[Mr Crash]
> Could you perhaps explain when to use 'get_temporary_buffer'
It has a very specialized purpose. Note that it doesn't throw exceptions, like new (nothrow), but it also doesn't construct objects, unlike new (nothrow).
It's used internally by the STL in algorithms like stable_partition(). This happens when there are magic words like N3126 25.3.13 [alg.partitions]/11: stable_partition() has complexity "At most (last - first) * log(last - first) swaps, but only linear number of swaps if there is enough extra memory." When the magic words "if there is enough extra memory" appear, the STL uses get_temporary_buffer() to attempt to acquire working space. If it can, then it can implement the algorithm more efficiently. If it can't, because the system is running dangerously close to out-of-memory (or the ranges involved are huge), the algorithm can fall back to a slower technique.
99.9% of STL users will never need to know about get_temporary_buffer().
> Btw have you ever thought about writing a book about STL ?
That's one of the things I'd like to do if I had enough free time.
> Oh and there's nothing wrong with your voice or appearance, it is an encoding/hardware problem
Well, maybe I only sound weird to myself, but my right eye is made of plastic (due to a birth defect), so I know I look weird. Thanks, though. :->
another great STL lecture! excellent presentation, content & pacing.
just remember camera 2 when you're at the board
@STL:
Thanks for your extensive reply, a lot of helpful points in it!
I have one remark and one question:
1) I've moved my library to VS2010 to make a performance test: It runs 8% faster compared to VS2008 on the same machine. But this is only the total time including some file read/write and other operations. The benefit for the part which works extensively on the STL containers will probably have more than 8% performance increase (but not more than roughly 20% to 25% which is still fine for me considering that I didn't have any work to achieve this except recompiling).
2) Why didn't you use the new allocator_chunklist in VS2010 instead of your hand-written block allocator? Isn't this allocator's intention to provide block allocation out of the box? Did you avoid it because the allocator is in the stdext namespace and not standard compliant? Does allocator_chunklist have any drawbacks? (I'm asking because it's the allocator I am planning to make use of in my library.)
[piersh]
> just remember camera 2 when you're at the board
Was I blocking it?
[Slauma]
> I've moved my library to VS2010 to make a performance test: It runs 8% faster compared to VS2008 on the same machine.
Awesome!
> Why didn't you use the new allocator_chunklist in VS2010 instead of your hand-written block allocator?
I played with it. As I recall, I was confused by having to specify the size of the blocks being allocated. I wanted an allocator that would work for a range of block sizes. I could have been mistaken - I didn't spend very long playing with <allocators>.
Will the c++0x standard be fully implemented in the next visual studio version ?
I'm really missing some features / functions
Quick and dirty example:
21.5 Numeric Conversions [string.conversions]
string to_string(int val);
Strange though, that only three types of to_string were implemented.
Why not implement them all while you were at it?
anyway, while i'm waiting i'll use this sloppy thing:
template <typename Type>
inline string to_string(const Type &value)
{
    ostringstream os;
    os << value;
    return os.str();
}
well that's not the only strangeness i've noticed but you get my point.
Actually, I'm already counting the days until the next Visual Studio version is released; VS2010 is so incomplete, buggy, and slow (going over to WPF was a really bad move performance-wise) compared to VS2008
(i'm talking about the editor + compiler here)
the compiler got some really embarrassing bugs
I also looked at all of these videos since the beginning, and previous videos from you (STL). TBH, these are the only learning videos I watch. I often go to C9, to see if they have a new video from you.
Keep it up!!!
P.S. A video on STL (compile-time calculation) would be cool. I forgot the name, but a small example's shown in the Effective C++ book (don't have it here), using the enum hack/workaround.
[Mr Crash]
> Will the c++0x standard be fully implemented in the next visual studio version ?
Core Language: No. There are many features that remain to be implemented, and our compiler front-end dev/test resources are limited.
Standard Library: As usual, we will attempt to conform as closely as possible to the current Working Paper given the Core Language features available to us.
> strange though, that only three types of to_string where implemented.
> Why not implement them all while you where at it ?
This was Library Issue 1261, fixed in Working Paper N3090, which was released on 3/29/2010. VC10 was released on 4/12/2010.
In the Standard Library, when we become aware of problems, we fix them if we have sufficient time remaining. When the Working Paper itself is defective, that's a headache for us, but we try to figure out something reasonable to do. In this case, nobody reported problems with to_string() in time for us to fix them, so they didn't get fixed. (to_string() has already been fixed for VC11.)
The moral of this story: always grab beta releases and report problems with them as soon as possible (through Microsoft Connect). If you report bugs too late, you'll have no choice but to wait for the next major version.
> the compiler got some really embarrassing bugs
To put this in the gentlest possible way, mentioning unspecified compiler bugs to someone who isn't a compiler dev (me) is a doubly unproductive use of time. Please report compiler bugs through Microsoft Connect. Due to limited resources, the compiler team has to aggressively triage bugs so that they can spend time on the very worst ones, so you should be prepared for Won't Fix resolutions. However, that's better than not reporting bugs at all, which makes them even less likely to be fixed.
If you maintain software, you should be familiar with these issues; every developer wants their users to report bugs as soon as possible.
[Deraynger]
> I also looked at all of these videos since the beginning, and previous videos from you (STL).
> TBH, these are the only learning videos I watch. I often go to C9, to see if they have a new video from you.
Cool! I'll be filming Part 8 in early November.
> P.S. A video on STL (compile-time calculation) would be cool.
Explaining template metaprogramming with C++0x <type_traits> is an excellent idea. I may do just that for Part 8.
***
Part 7 is up!
@STL: Just wanted to mention that the compiler bugs have been reported already.
The only two reasons why I mentioned the compiler bugs were, first, to give an example of why I'm waiting for the next version. The second one is that, sadly enough, the more you complain and inform the public (other developers in this case) about the bugs, the higher the chances are that they will get fixed at all (many bugs never get fixed, as you obviously already know).
My intention was not to upset you, i apologize.
> This was Library Issue 1261...
Thanks for the link, it's been bookmarked.
I missed that one; I find that site a bit hard to navigate. There's probably a system to it, but still... all these JTC1, SC22, WG21.
> The moral of this story: always grab beta releases
I do run betas in VMs, but time is limited, so I have to trust that Microsoft's internal beta testers do their work (with the public beta testers' help of course :))
> Due to limited resources..
Yeah, I've never really bought that as a real reason; it smells more like an excuse. Usually it's easily fixable (e.g. a pipeline issue), but due to poor prioritizing it's never fixed, and because of that you get a backlog, etc.
Now that I think about it, it would be interesting to see this process in action. Hey Charles, what do you say about a Channel 9 video going from bug report to fix, with information about this "limited resources" Microsoft dev teams talk about all the time?
"An inside look at how the compiler team (tries to :)) fix bugs"
On another subject:
STL, what processes do/did you go through to optimize the nurikabe program?
Ex. are there any special techniques you use to detect when you have chosen the wrong container, etc..
or is it just trial and error ?
Perhaps make a video someday on how to optimize the usage of the STL. ?
> Just wanted to mention that the compiler bugs have been reported already.
Excellent.
>> Due to limited resources
> Yeah, i've never really bought that as real reason, smells more like an excuse.
It means that we have a finite number of people (devs/testers) with a finite amount of time to work on extremely complicated machinery.
> Usually it's easily fixable (ex, a pipeline issue)
I don't know what "pipeline issue" means.
> STL what processes do/did you go though to optimize the nurikabe program ?
Profiling, to detect where all of my time was going. This took several forms:
1. Adding more test cases. I developed my solver against wikipedia_hard, which turned out to be sufficient for correctness and functionality. But for performance, I discovered that other test cases were even harder to solve. Since then, I've been focusing on nikoli_9, the hardest puzzle available to me.
2. Emitting more information. I've enhanced the output to display the time taken by each step of analysis, and also to indicate where hypothetical contradiction analysis was unsuccessful. This allows me to notice which steps take longer, and to focus on when and where failed guesses occur (as those are extremely, punishingly expensive).
3. Using VS's profiler, which indicated (and continues to indicate) that confinement analysis (both top-level, and during guessing) is where the solver spends all of its time. This is a classic hotspot, so I don't have to worry about optimizing the rest of the code unless and until (hopefully) confinement analysis becomes so blazingly fast that other steps of analysis start showing up in the profiler.
Also lots of experimentation - I'll try making a change, and see if it makes things better or worse.
Great video Stephan, even my (non-programmer) wife commented on how enthusiastic you are! The amount of info you pack into a short time is superb, there is no fluff here. Keep 'em coming, this series is turning into a classic guide to the STL that will be valuable for a long time I suspect
Actually yes, the comments about audio in this and a couple of earlier parts (I think the first part or two were OK) were truthful but unspecific. The recording gain is too high at one stage or another. It seems the recording level has been calibrated to be loud at a soft voice; however, the presenter's enthusiastic voice gets louder at times and it starts to distort. This is very clearly audible immediately in the first 10 seconds, on the words "there", "Lavavej", "part", "my"...
It's better to record at a low gain level, then normalize the audio and remove noise in case the noise floor rises too much, than to record at a high level and get clipping, since there's nothing that can be done about that later.
This is simple code for getting the device's current speed using GPS. You can use the Network Provider as well.
Thanks,
package com.geofence.alarm;

import android.app.Activity;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import android.widget.Toast;

import com.example.geofenceapp.R;

public class SpeedAlarmActivity extends Activity {

    Context context;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_speed_alarm);

        // Acquire a reference to the system Location Manager
        LocationManager locationManager = (LocationManager) this
                .getSystemService(Context.LOCATION_SERVICE);

        // Define a listener that responds to location updates
        LocationListener locationListener = new LocationListener() {
            public void onLocationChanged(Location location) {
                // getSpeed() returns the speed over ground in metres/second
                Toast.makeText(getApplicationContext(),
                        "Current speed: " + location.getSpeed(),
                        Toast.LENGTH_SHORT).show();
            }

            public void onStatusChanged(String provider, int status, Bundle extras) {
            }

            public void onProviderEnabled(String provider) {
            }

            public void onProviderDisabled(String provider) {
            }
        };

        // Register the listener with the Location Manager to receive location updates
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0,
                0, locationListener);
    }
}
Thanks,
This is good but it lacks the XML files....would you please send me the whole project to my email address (morisatwine@gmail.com)...thanks in advance...
No, there is no need for any XML file; just create an empty XML layout with no views in it.
But in your manifest.xml you have to add some permissions:
android:name="android.permission.INTERNET"
android:name="android.permission.ACCESS_NETWORK_STATE"
android:name="android.permission.READ_PHONE_STATE"
android:name="android.permission.ACCESS_COARSE_LOCATION"
android:name="android.permission.ACCESS_FINE_LOCATION"
wow, thanks
Thanks for your nice comment.
This comment has been removed by the author.
Is this permission required???? android:name="android.permission.READ_PHONE_STATE
Yes it is required!
Nice code, very clear. Tks
Thanks!
nice sir!!!!
You're welcome!
Now what if I wanted this code to cycle through every 5 seconds
See this line-
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0,
0, locationListener);
Just change it according your requirement after how many time or distance you want speed.
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000,
0, locationListener);
Hope it will help you...
Thanks you ,sir. (from Thailand.)
You're welcome & thanks (from India :))
Thanks for this tutorial. Inspired by it, I created a program in which GPS runs in a service. My program runs well, but my problem is that when I am driving, it works fine for a few minutes and updates longitude and latitude, but after some time it stops updating longitude and latitude and terminates my program. If you have any idea why this happens, please guide me with a solution. Thanks in advance.
I think the problem is with the location provider; please request updates from the best providers available, i.e.
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0,
0, locationListener);
and
locationManager.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 0,
0, locationListener);
Maybe at the time of driving your device is not getting updates from GPS.
So im curious..i have followed this but it stops unexpectedly? Not sure what is wrong
Can you paste your logcat here please? I think the problem is with your package name or in your manifest. Have you given the permissions in your manifest?
android:name="android.permission.INTERNET"
android:name="android.permission.ACCESS_NETWORK_STATE"
android:name="android.permission.READ_PHONE_STATE"
android:name="android.permission.ACCESS_COARSE_LOCATION"
android:name="android.permission.ACCESS_FINE_LOCATION"
I also have the same problem
No toast is displaying! How can I calculate the speed of the device? I tried so many times using the getSpeed() method of the Location class, but with no luck. Any guess?
1) Please check your manifest: have you given all the permissions?
2) Try the Network provider once in the line below: locationManager.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 0, 0, locationListener);
3) And sometimes the Location Listener does not provide anything to us, so no worries; try on another device, or create another demo app first for testing.
:) Hope you have tried all this already, but please check one more time.
Thanks!
Hey, I have developed one app that will calculate the distance and speed while travelling on vehicle also I am getting distance covered from one location to another but I can't get the speed while walking. What would be the problem can you suggest me?
Maybe when you are walking fast your GPS is not working. So please enable both the network and GPS providers.
I mean, check for the best provider and use whatever is available. And for getting distance, search on Google; there is an API for that, just pass your first and last lat/lon and it will return the distance.
Hi ...Saurabh Pandya can you please post the code how to calculate the distance and speed ...
@Alampally Please see above my code for calculate speed
location.getSpeed()
and for distance use-
Location.distanceBetween(startLatitude, startLongitude, endLatitude, endLongitude, results);
see this link-
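If you would rather compute the distance yourself (for example offline, from stored lat/lon pairs), the standard haversine formula gives the great-circle distance between two points. This is a plain-Java sketch; the class name is made up for illustration:

```java
public class GeoDistance {

    private static final double EARTH_RADIUS_M = 6371000.0; // mean Earth radius

    // Great-circle distance in metres between two lat/lon points (haversine formula).
    public static double distanceMetres(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }
}
```

For short segments this agrees closely with Location.distanceBetween(); for real apps the platform helper is usually the simpler choice.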
I'm a complete noob! Could you please tell me exactly where inside the manifest I should put the permissions, and what tags I should use. Thank you in advance!
never mind, I just figured this out!Works perfect, thank you for the code!
Grate!
Hi, I'm a beginner in Android development. I need to calculate the GPS device's travelling speed, with an update every minute. Please help me...
Did this post not help you?
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000x60,
0, locationListener);
it will give you toast after every 1 min.
Ehm, Did You mean 1000*60 instead 1000x60 ?
:D
@Aditya yes, it is 1000*60 :) I just gave the logic, not running code, so please adjust it.
Hi, I tried your code and followed all your steps including the manifest permissions, but when I execute it, it displays only "Hello world!". Since I am new to Android, can you please help me figure out what the problem is?
As I am using LocationManager.GPS_PROVIDER, please check that GPS is enabled on your device, and you should be outside, because GPS does not work well indoors.
Or you can use the Network provider.
Actually i am checking it in emulator, does GPS is there in emulator?
No dear! Think about it yourself: how can you get the emulator's speed when it is not moving? You should use a mobile phone and walk or run to get the device's speed.
In the case of the emulator you can pass a lat/long, but only a fixed one for displaying the map etc., not for getting speed.
Ya sorry..i got it! i will check it in real device.
Hi,thanks a lot it worked in android device, i need one more help from you, can u please lsend me code for how to send gps coordinates to server, because my app is crashing when i try to send it to server, please help me,i need to do it very urgent.... kulkarni.anandr100@gmail.com, thanks in advance
Got the great help thanks...
sir,toast is not being shown .plz help me.!
This comment has been removed by the author.
i have followed all the steps ,but toast is not showing . :(
Hi, thank you for your code. But in my application I used another class for the LocationManager, LocationListener and the onLocationChanged function; that means all location-related things are in a separate file from the MainActivity file. So my question is: how can I get the speed value in MainActivity.java from that location class file?
Also, please let me know how I can find the device's current direction in degrees.
Hello sir, I am developing an app that is the same as your example, but I want to add a warning sound whenever the speed limit is reached (or exceeded). For example, if the movement is above 80 km/h, a sound will be heard. What should I add? Thanks ;)
You should use the Notification Manager. Something like this: when you reach 80 km/h, just call this method and pass the message you want to show:
/**
 * Notify the user with the given message.
 */
public void notifyToUser(String message) {
    NotificationManager notificationManager =
            (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
    Notification notification = new NotificationCompat.Builder(this)
            .setSmallIcon(R.drawable.ic_launcher) // use your app's own icon here
            .setContentTitle("Speed Alert")
            .setContentText(message)
            .setAutoCancel(true)
            .build();
    notificationManager.notify(0, notification);
}
Hello,
I am trying to doing the same as your example but i want to send notification whenever the speed exceeds 60 km/h. so plz send me the code of doing so on my id- dubeydeepti8@gmail.com.
Thanks
Deepti
Hi Deepti, did the above comment not work for you? Just add a condition there: if the speed > 60, call the notification method.
Hello,
I just started developing in Android so I'm basically just a beginner, but for my project I need to build an application that sends an email automatically, every 5 minutes or at some given interval (those 5 minutes could be changed to 10 minutes or something). Can you help me?
You can use Alarm Manager for this. Check this article-
I have planned to make an application with the feature of measuring the speed of a car using an Android phone, to warn when the speed limit is exceeded.
I think your code will be very helpful for me.
Thanks in advance
hey can uplz tell me how to find friend location exactly using gps and it should show the way to reach the friend ,where he is standing at some point........mi id- sravanipractice@gmail.com
Sir! can u mail source code to moghilisairam@gmail.com. thanks in advance :)
Hi i am trying to install the above app in my Galaxy S4. I keep getting unfortunately app has stopped. Could you please help me?
Sir! can u mail source code to sardartashaf@gmail.com
. thanks in advance :)
This comment has been removed by the author.
Sir, can you please explain the SpeedAlarmActivity as a Service for an application? please
How can I track speed and distance in my application? And please also show the interface for this. Thanks
For the distance there are Google APIs; just pass the starting and end point lat/lon.
hello sir...
In Google Maps I marked two locations, say A and B. Now I travelled some distance. I want to know how much I travelled at every GPS location change, say every 30 seconds...
Hi, how can I set the current speed on an ImageView or TextView? Please reply.
Someone post the screenshots of this app while the speed is displayed....
Thank you!!!!
Hi, how does this code impact battery if the service is always left running in the background?
Thanks
Hi, Is it work in lower version devices (below 2.3 version). Because i tried in the 2.2 it's not working.
I am not sure, but it should work down to API level 8. Kindly check it with GPS/Network/WiFi; maybe one of them will work for you. Sometimes the device does not provide a location, so please try again later or try several times.
Is this line required for the application to work?
import com.example.geofenceapp.R;
If yes, where is it located? (it looks like an application you once created)
Sir, I am new to Android. I want to make an offline vehicle tracking app in which the user can input a bike name, how much distance he/she has currently covered, and the expected mileage. After login there is an option or button for filling fuel, which starts tracking the vehicle from the filling station and tracks how much distance is covered on that quantity of fuel. Please help, sir, with how to code this.
Please send it to me at danishalam6@gmail.com... it will help me understand a lot.
Please sir, humble request... ASAP if possible.
Danish, I don't have any code for your specific requirement, but I can guide you to get your project done. Follow the steps below:
1)Create your UI according to your requirement.
2)Make input box to take input like distance and mileage.
3)For calculating the speed and distance use above code.
4)Now you can indicate to user if fuel is low using notification.
Nothing challenging here; you can do it easily. And keep in mind GPS is never perfectly accurate, so please don't depend on it blindly.
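For the fuel/mileage bookkeeping in steps 2 and 3, the arithmetic itself is straightforward. Here is a rough plain-Java sketch; the class and method names are made up for illustration, and real-world mileage varies a lot:

```java
public class FuelTracker {

    // Estimated distance (km) the vehicle can still travel,
    // given the fuel filled, the expected mileage, and the distance already covered.
    public static double remainingRangeKm(double fuelLitres,
                                          double mileageKmPerLitre,
                                          double distanceCoveredKm) {
        double totalRangeKm = fuelLitres * mileageKmPerLitre;
        return Math.max(0, totalRangeKm - distanceCoveredKm);
    }
}
```

The distance covered would come from summing GPS segments (see the distance discussion above), and the low-fuel warning in step 4 is just a threshold check on this value.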
Hi sir, I get the speed from your tutorial, but it is not giving an accurate value. A 5 sec span is given to the location updater, and it still does not report correctly. Any suggestions?
I am really new to android studio and I have these several problems below
-I got a red underline for "locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 0, locationListener);"
-do I have to import this? "import com.example.geofenceapp.R;"
-the "activity_speed_alarm" is shown in red and it says cannot resolve symbol 'activity_speed_alarm'
-Where exactly do I put the permission codes in manifests?
Thank you for your sharing
how display update speed current speed
That toast is the current updated speed. You can put it inside a TextView.
sir please send textview coding
TextView text = (TextView) findViewById(R.id.textView1);
text.setText(String.valueOf(location.getSpeed()));
Hi sir,
I have the same problems as "Herman Tam" -August 6, 2015 at 5:35 AM.
I am using android studio and the problem are in the lines:
"
import com.example.geofenceapp.R;
"
"
setContentView(R.layout.activity_speed_alarm);
"
"
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, locationListener);
"
The first two errors say "cannot resolve symbol".
the last error seems to persist even after implementing your permissions in manifest.xml
please reply even if you cant help us.
thanks for sharing
Manish,
First of all thank you for the code.
I have mine displaying in a TextView and it works fine, however speed is shown as a decimal (e.g, 20.345678). Also, when I am doing 80km/h, speed is showing as 20.nnnnn.
Can you please shed some light on how to display the correct speed in km/h?
Regards,
Michael C.
Fixed...
I used Math.round and multiplied getSpeed() * 3.6
Android Studio complained about how I have constructed the string but it works for me.
TextView text = (TextView) findViewById(R.id.txtSpeed);
text.setText(String.valueOf(Math.round(location.getSpeed() * 3.6))+ " Km/h");
Cool!!!
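For reference, Location.getSpeed() reports metres per second, so multiplying by 3.6 converts to km/h, which is what the snippet above does. The same conversion as a plain-Java helper (class name made up for illustration):

```java
public class SpeedConversion {

    // Location.getSpeed() reports metres per second; multiply by 3.6 for km/h.
    public static long toKmPerHour(float metresPerSecond) {
        return Math.round(metresPerSecond * 3.6);
    }

    public static void main(String[] args) {
        System.out.println(toKmPerHour(22.2f)); // roughly 80 km/h
    }
}
```

This also explains the "80 km/h shows as 20.nnnnn" symptom mentioned earlier: 80 km/h is about 22 m/s.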
please send me all project code on my email:-androidapp.developer.in@gmail.com
Sorry dear, I lost my workspace. Copy paste from above.
Hi Manish - Thanks for the code. Is it possible to get the speed value every second using your code.
Make the change at the line below:
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000,
0, locationListener);
Note: the second argument (1000) is the minimum time between updates in milliseconds, and the third (0) is the minimum distance in metres, so this requests an update roughly every second.
Thanks Manish.
You're welcome!
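If the provider still delivers updates more often than you want, you can also throttle the callbacks yourself by timestamp. This is a plain-Java sketch; the class name is made up for illustration:

```java
public class UpdateThrottle {

    private final long minIntervalMs;
    private long lastAcceptedMs;
    private boolean hasLast = false;

    public UpdateThrottle(long minIntervalMs) {
        this.minIntervalMs = minIntervalMs;
    }

    // Returns true if this update should be processed: either the first update,
    // or one arriving at least minIntervalMs after the last accepted one.
    public boolean shouldAccept(long nowMs) {
        if (!hasLast || nowMs - lastAcceptedMs >= minIntervalMs) {
            lastAcceptedMs = nowMs;
            hasLast = true;
            return true;
        }
        return false;
    }
}
```

Inside onLocationChanged you would call shouldAccept(System.currentTimeMillis()) and ignore the update when it returns false.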
Hello Manish,
I try your app, but does not work. Can you help me please?
My logcat:
02-21 20:01:01.088 3773-3773/com.tomasruml.mysm I/art: Not late-enabling -Xcheck:jni (already on)
02-21 20:01:01.402 3773-3773/com.tomasruml.mysm W/System: ClassLoader referenced unknown path: /data/app/com.tomasruml.mysm-2/lib/x86
02-21 20:01:01.624 3773-3773/com.tomasruml.mysm D/AndroidRuntime: Shutting down VM
02-21 20:01:01.653 3773-3773/com.tomasruml.mysm E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.tomasruml.mysm, PID: 3773
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.tomasruml.mysm/com.tomasruml.mysm.MainActivity}: java.lang.SecurityException: "gps" location provider requires ACCESS_FINE_LOCATION permission.
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2416)
at ...
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616)
02-21 20:01:08.724 3773-3773/com.tomasruml.mysm I/Process: Sending signal. PID: 3773 SIG: 9
Best regards,
Tomas
Add ACCESS_FINE_LOCATION in manifest.xml.
This comment has been removed by the author.
My AndroidManifest.xml:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.tomasruml.mysm">

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <uses-permission android:name="android.permission.INTERNET" />
                <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
                <uses-permission android:name="android.permission.READ_PHONE_STATE" />
                <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
                <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
With < at the beginning of each line of course, but this page does not show code starting with <.
It does not work. How can I fix it?
Best regards,
Tomas
Move all the permissions above the application tag:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
This comment has been removed by the author.
Hello Manish,
ok great it is work on my device, but I can not see current speed. Only text Hello world.
How can I fix it?
Can you help me please?
Best regards,
Tomas
Hello sir, I am a beginner in Android development and I am making an app similar to yours, but here I am calculating distance covered and speed together under one button click. I have to show these in TextViews. My app will show speed and distance covered while on the move once the button is pressed. A stopwatch is also there. Please help me with how I can calculate both simultaneously in a single button click. Waiting for your reply.
I think you can store the speed and distance in a List or array, and on the button press calculate and display the result.
sorry does this app require an internet connection or just enable the gps ??
There are 2 options: one is using GPS, and the other is your network provider. In the code below, GPS is required:
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000,
0, locationListener);
You can change it to Network provider if needed.
Hello sir,
I need to know how to get the calories burned from the distance he or she walks. Is it possible? If so, can you please add the code here, or do I need to send my email? Thanks...
There must be some algorithm to calculate burned calories. You can see gym machines display burned calories according to your speed and the distance covered. Please contact a specialist regarding this.
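As a very rough illustration only (not a substitute for expert advice): one common textbook estimate uses MET values, where calories burned ≈ MET * weight (kg) * duration (hours). The MET value itself is an assumption that depends on activity and speed (walking is often quoted around 3.5):

```java
public class CalorieEstimate {

    // Rough textbook estimate: kcal = MET * weight (kg) * duration (hours).
    // The MET value is an assumed constant per activity; real expenditure varies.
    public static double kcal(double met, double weightKg, double hours) {
        return met * weightKg * hours;
    }

    public static void main(String[] args) {
        // e.g. walking (assumed MET ~3.5) for 1 hour at 70 kg
        System.out.println(kcal(3.5, 70, 1.0)); // 245.0
    }
}
```

The duration could be derived from the GPS timestamps, but treat the result as an estimate only.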
hello sir,
I am new at this. I created a user login page, and after that it shows the current location. I created a MainActivity and a MapsActivity. I want to track speed on the map. Where do I insert this code?
You can do it on Map Activity.
sir,
where do i insert this code.My maps activity code is-
public class MapsActivity extends FragmentActivity {
private GoogleMap mMap; // Might be null if Google Play services APK is not available.
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_maps);
setUpMapIfNeeded();
}
@Override
protected void onResume() {
super.onResume();
setUpMapIfNeeded();
    private void setUpMapIfNeeded() {
        if (mMap == null) {
            // Try to obtain the map from the SupportMapFragment.
            mMap = ((SupportMapFragment) getSupportFragmentManager()
                    .findFragmentById(R.id.map)).getMap();
// Check if we were successful in obtaining the map.
if (mMap != null) {
setUpMap();
}
}
}
private void setUpMap() {
mMap.addMarker(new MarkerOptions().position(new LatLng(0, 0)).title("Marker"));
mMap.setMyLocationEnabled(true);
}
}
sir,i am not getting speed.my code is-
public class MapsActivity extends FragmentActivity {
Context context;
private GoogleMap mMap; // Might be null if Google Play services APK is not available.
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_maps);
setUpMapIfNeeded();
LocationManager locationManager = (LocationManager) this
.getSystemService(Context.LOCATION_SERVICE);
TextView txtCurrentSpeed = (TextView) this.findViewById(R.id.textView3);
txtCurrentSpeed.setText("Current speed:");
}
please suggest me.
Sir, I want the speed limit to be set by the user, and after that the alarm beeps when it is exceeded.
How do I do this?
Can you please help me?
Do something like this:
1) Define a listener that responds to location updates, and check the limit there:
LocationListener locationListener = new LocationListener() {
    public void onLocationChanged(Location location) {
        if (location.getSpeed() > 50) {
            notifyToUser();
        }
    }
    // ... implement the other LocationListener callbacks as empty methods ...
};
2) public void notifyToUser() {
    // write the notification code using a notification builder
}
sir,can you please give a code for notifytouser
You can follow below link-
/**
 * Notify the user with the given message.
 */
public void notifyToUser(String message) {
    NotificationManager notificationManager =
            (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
    Notification notification = new NotificationCompat.Builder(this)
            .setSmallIcon(R.drawable.ic_launcher) // use your app's own icon here
            .setContentTitle("Speed Alert")
            .setContentText(message)
            .setAutoCancel(true)
            .build();
    notificationManager.notify(0, notification);
}
Hi Manish, thanks for the code tips. I just started Android, and your site has been the most helpful. Could I have a copy of the full code for the speed display in Google Maps? Everything else works fine, except it does not display the speed. I will read it before bothering you.
Cheers, Carl Ranalli
Hi, do I still get speed if I change provider to network?
While walking, location.getSpeed() only reports a speed when we walk above 5 mph. Please suggest any other idea to get the speed when we walk at 2 mph.
can you give me the code for activity_speed_alarm please
Hi
I have an array which contains a number of lat/long points, in the sequence I travelled.
Based on this, how do I get the travel distance and average speed?
Please suggest an example.
Thanks | http://www.androidhub4you.com/2013/06/how-to-get-device-current-speed-in_112.html | CC-MAIN-2017-47 | refinedweb | 3,289 | 50.73 |
#include <wx/msgdlg.h>
Helper class allowing to use either stock id or string labels.
This class should never be used explicitly and is not really part of the wxWidgets API, but rather is just an implementation helper allowing methods such as SetYesNoLabels() and SetOKCancelLabels() below to be callable with either stock ids (e.g. wxID_CLOSE) or strings ("&Close").
Construct the label from a stock id.
Construct the label from the specified string.
Return the associated label as string.
Get the string label, whether it was originally specified directly or as a stock id – this is only useful for platforms without native stock items id support
Return the stock id or wxID_NONE if this is not a stock label. | https://docs.wxwidgets.org/3.0/classwx_message_dialog_1_1_button_label.html | CC-MAIN-2019-18 | refinedweb | 119 | 63.09 |
Bug list for beta release 201001
Pythonista version 2.0.1 (201001) on iOS 9.2.1 on a 64-bit iPad5,4 with a screen size of (1024 x 768) * 2
Select a Python function name or method name and select "Help..." from the popup menu. No results. This is probably related to docs compression.
Fixed: in Pythonista version 2.0.1 (201004)
I see, thanks!
In this beta, opening the dialog to move files seems unusually slow.
@Webmaster4o Strange, I can't think of anything I've changed there.
The quick help issue will be fixed in the next build, which may actually be the final one. Not a very exciting update, I know, but I'm overall pretty happy with the state of the app, and I'll hopefully have something more interesting to announce soon...
@omz ok, thanks. I just noticed that bringing up the "move" dialog (as well as pressing "new file") takes an unusually long time. It may fix itself.
@Webmaster4o I kinda suspect that you have a pretty deep folder hierarchy in your documents, perhaps you recently installed a large package or something? The folder picker is admittedly not very smart about this, and it has to load the entire folder tree at once.
@omz maybe. That could be it. I recently copied all of the `Pythonista.app` directory to a folder in ~/Documents. That's probably it :P
@Webmaster4o Hint: use `os.symlink`.
Different issue: `objc_util` really needs an `__all__`. Especially because about 100% of all code that uses the module does `from objc_util import *`. (All of which has been accidentally importing about 10 probably unwanted modules.) It also means that I have to use hacks like these:

    {name for name in dir(objc_util) if not name.startswith("_")} - {
        "ctypes", "inspect", "itertools", "os", "pp", "re",
        "string", "sys", "ui", "weakref",
    }
There (still?) seems to be an issue with the icon support in the UI editor. Quickest way to reproduce:
- create pyui file,
- add a button,
- pick an image for the button -> icon selector works :-)
- add a custom view
- enter the custom view
- add a button there
- pick an image for the button -> empty icon list :-(
Since version 2.0 there is a little bug in the handling of the clipboard filled with whole lines. To reproduce: Look for a line of code with content starting in the very first column. Highlight the code from the left of the first character to the very left of the next line.
Go to somewhere else. Insert the clipboard left from the first character of that line. Result: The line will be inserted but the terminating newline character (which should be in the clipboard) is NOT inserted. So, such an insert has always to be followed by pressing RETURN.
Sometimes changes in UI editor eg. in a scrollview are not saved and get lost.
@dgelessus, I agree `objc_util` needs an `__all__`. However, to exclude the imported modules, you don't necessarily need to enumerate; instead you can use `inspect`:

    {name for name in dir(objc_util)
     if not name.startswith("_")
     and not inspect.ismodule(getattr(objc_util, name))}
Setting `__all__` in `objc_util` is a good idea, thanks @dgelessus.
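For readers following along: `__all__` is just a list of names, and a star-import copies only those names when it is present. A self-contained demonstration with a stand-in module (not the real objc_util; all names here are made up for illustration):

```python
import sys
import types

# A throwaway stand-in module that re-exports an unwanted module ("os"),
# the way objc_util accidentally did, then restricts itself with __all__.
mod = types.ModuleType("fake_objc_util")
mod.ObjCClass = lambda name: "class:" + name  # pretend public API
mod.os = __import__("os")                     # unwanted re-export
mod.__all__ = ["ObjCClass"]                   # only this escapes a star-import
sys.modules["fake_objc_util"] = mod

ns = {}
exec("from fake_objc_util import *", ns)
print("ObjCClass" in ns)  # prints: True
print("os" in ns)         # prints: False
```

Without the `__all__` line, both names would be copied, which is exactly the pollution being complained about.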
Looking at the icon picker code in the UI editor now, I kinda wonder why anything works at all... :/
@omz or anyone else, does anyone know to get files from your camera roll into the files area of the icon picker. This has been an issue for a while now, I just keep forgetting about it. I vaguely remember how to do it in 1.5, but I am lost with this one. It's probably something very simple right under my nose, I just can't see it
@Phuket2 In the UI editor, this isn't possible. For the general asset picker (the [+] button in the code editor), it should show image files that are in the same directory as the script you're editing (or a subdirectory). You can import an image from your camera roll as a file using the "New file" dialog. There's an "Import Photo..." entry at the bottom there.
Is this new in this version?
Cannot place the caret at the end of the text in TextView without getting a 'Range out of bounds' error.
No error if I subtract one, but then the caret is before the final character.
Also no error if I select a range - e.g. selected_range = (0, l) in the sample below.
Simple test:
    tv = ui.TextView()
    tv.text = '123456789'
    tv.present()
    tv.begin_editing()
    l = len(tv.text)  # l is 9
    tv.selected_range = (l, l)
@mikael Interesting, I don't think it's new (at least I haven't made changes related to this recently). Temporary workaround:
    from objc_util import *
    # ...
    on_main_thread(ObjCInstance(tv).setSelectedRange_)((l, l))
With the additional note that in ObjC, it looks to be (start, length), not (start, end). So, in this case (l, 0), or more generally (start, end - start).
The Stream-Of-Consciousness Guide to Minuteman Copyright 2011 by Larry Hastings
written at/for PyCon 2011 March 12th, 2011
First: if you're reading this, I'm sorry. The documentation is terrible. I used to write documentation in a Tiddlywiki; you can find it in "mm/readme.html". But that document is toxically out of date. At best you should use it as a springboard to discover new things about Minuteman. But please do not rely on it. My eventual goal is to replace that (and this) with some sane Sphinx-generated documentation. Watch this space.
Second: be sure to try the "rigged demo" that I used on stage at PyCon 2011. Download
and follow the instructions in there.
Third: the rest of this document is basically a stream-of-consciousness dumping ground for Minuteman documentation topics. I'm sure it is dreadfully incomplete, and bafflingly organized, but it's current and it's your only chance. For now, that is.
Fourth: if this file is a little woozy, well, I am too. I was hit with the quintuple-whammy of
- noisy neighbors up past 1am
- noisy emergency vehicles cruising past at all hours
- losing an hour of sleep to Daylight Savings
- getting up extra-early for breakfast and the morning lightning talks
- being so tired I couldn't sleep
So I'm not running on much sleep. Believe me, I'm going to bed early tonight-- and getting up late.
Fifth: remember, "mm help" prints out all commands, and "mm help command" would print out long help on "command".
Onward To Glory!
How would you describe what a hammer is? If you said "it's what you use to pound nails into the wall", I might point out that that's a description of what a hammer is used for, not what it is. A hammer is a heavy weight with a flat part mounted on a handle. Pedantry aside, my point is, there's a big conceptual difference between what Minuteman is and what it's used for.
Minuteman is a program and set of libraries that makes it easy to create a run-time collection of objects ("projects") with a consistent external interface. This interface allows Minuteman to interrogate the object, and to instruct the object to perform actions.
With that, and with a lot of sensible defaults and the aforementioned libraries, Minuteman makes a handy generic large-software-system builder.
Minuteman Concepts And Taxonomy
A "workspace" is a self-contained directory tree. In a workspace you'll find "src", which is where all the projects go, and "release", which is where all the built software goes.
I use the term "project" really to mean two things: primarily, a run-time object conforming to the mm.Project object interface, but secondarily a directory on disk (in your workspace under "src/") that will be represented at runtime by the aforementioned run-time object.
An "action" is an object representing some verb--build, clean, document, test, regress--that Minuteman may request a project to perform. Minuteman can tell a project to "build", or "test", or whatnot.
A "step" is a single step in "building" a project--or whatever requested action. It's intended to map to a single executed external program, like an invocation of "configure". However, for convenience's sakes, commonly-used idioms like "make && make install" or "setup.py build && setup.py install" are bundled together into a single "step" for your convenience--though in actuality those are "composite steps", which run multiple internal steps for you. Currently defined steps that you might want to call directly:
    preconfigure
    configure
    make
    make_install
    setup_py
    setup_py_install
Minuteman Code Of Ethics
Whenever you build, you install. Installation is always local, into "release". Inside "release" are "bin", "lib", "share", and so on--it's like the "--prefix" directory passed in to a "configure" script. (In fact it's exactly like that!)
All builds and installs are local. Nothing should change outside the workspace when you run a build.
The Demo
Here I'll walk you through what the demo does, in order.
% mm init ws1
This creates a new empty "workspace".
% mm addfetcher myhg hg ~/hg
This adds a "fetcher". Fetchers are objects that go and get source code for you. This fetcher is named "myhg", it is of type "hg" (Mercurial), and the URL it should use is "~/hg". After this, when you ask Minuteman to "add" a project, it'll look to see if there's a valid Mercurial repository in "~/hg/{name-of-project}".
% mm add libevent-python
This tells Minuteman "please add a project named libevent-python to the workspace". If you already have a "src/libevent-python", Minuteman will load whatever it finds in it; if you don't, Minuteman will try the fetchers to see if any of them can get one.
% mm

This runs a build. You can tell Minuteman to only build certain projects:

% mm libevent

You can tell Minuteman to clean the workspace:

% mm clean

You can tell Minuteman to clean only specific projects:

% mm clean libevent
% mm tag ../tagfile
This creates a "tagfile". Really all this does is copy Minuteman's workspace configuration to another file, after checking that no projects have outstanding changes. For a good time, take a look at the two files in Minuteman's secret directory. Go into ".mm" in your workspace and look at "configuration" and "local_configuration".
% mm clone tagfile ws2
This "clones" from the tagfile, creating a new workspace and populating it with the projects enumerated in the tagfile. If it could load all of the projects, it overwrites the workspace's configuration with the tagfile (to preserve all the settings) and pronounces success.
Making Your Own Projects With Minuteman
To experiment with having Minuteman build your own projects, you'll need:

* a fresh workspace
* a copy of your source (you don't have to check it in)
Let's create a new workspace to build a hypothetical project named "spacegoblin". First, create a new workspace using "mm init". Second, make a directory in the workspace at "src/spacegoblin", and copy your source files in there. Third, create a file called "mmproject.py" in that directory that looks like this (remove the first level of indent):
    import mm

    class Project(mm.Project):

        def build(self):
            # steps go here
Where I have the comment "steps go here", add one of two sets of things:

* If your project is built with "configure && make && make install", add

      self.configure()
      self.make_install()

* If your project is built with "python setup.py build install", add

      self.setup_py_install()

  By default this runs Python 2; to use it with Python 3, change it to

      self.setup_py_install(python="python3")

  Here the string value is the path to the Python interpreter you want to use; without an absolute/relative path, Minuteman will look along the $PATH.
If you want to add a second project, and have one depend on the other, add a "requirements = " line at class scope:
    class Project(mm.Project):

        requirements = ["other-project-name"]

        def build(self):
            # steps go here
"requirements" means projects that must be present, and will be built before the current project. "weak_requirements" means projects that aren't necessary, but if they are present in the workspace will be built before the current project.
Plugins: Loaders
The "loader" is easily the most powerful plugin in Minuteman.
One surprising thing about the project object--projects don't know their own name, implicitly. Rather, Minuteman tells the project what its name is, when it's loaded.
This is wholly deliberate. I think of this as "slip" in the interface, like the clutch on a car. It's a point of interface between two systems where I've built in some deliberate flexibility.
- The "loader" interface looks like this:
- def load(prototype) -> project_object
The "prototype" is a prototype of the project--it's a generic object set up to kinda look like a project, but it isn't really a project. The project prototype is required to have a couple of important bits of metadata:
    name
    settings dict
    directory (if one exists)
The settings dict is straight out of the configuration file:

    configuration["projects"][name]["settings"]
Minuteman has four builtin loaders of interest:
- ImportLoader
The ImportLoader is the loader that looks in the project's directory for a "mmproject.py" file. If the loader finds one, it imports it by hand. If the import works, the loader looks to see if it has a "Project". If it has one, the loader attempts to call it as if it were a Project constructor. If that returns non-None, the loader returns that.
- ProxyLoader
The ProxyLoader lets you load a project where the project and its mmproject.py come from different directories. This is for when you use a repository where the projects don't have "mmproject.py" files of their own, but you have to have an explicit mmproject.py. Just create a project with a directory, where the load name of the class is "mm ProxyLoader", then place in the directory files named with the name of the project you want to load. For example, to use ProxyLoader with libevent, you'd have a "libevent.py" in the ProxyLoader project directory. If you tried to load "libevent" with the ProxyLoader, it'd load that "libevent.py", pointed at the "libevent" source directory. (This is one reason why the "load name" is a useful bit of slippage--you can rename the project in your workspace without losing the ability to point a ProxyLoader at the correct Python script.)
- InferredLoader
If you don't have any sort of Python code to handle a project, the InferredLoader may be able to help. It looks in a directory to see if there are any standard idiomatic build scripts it can recognize; if it sees one, it creates a default project object understanding that build and returns it. For example, if it sees a "configure" script, it returns a generic project whose "build" runs the "configure" step then the "make_install" step.
- VirtualLoader
VirtualLoader loads projects without having to hit the disk--they should already be loaded into the interpreter. The way it works is, you give the VirtualLoader a dict to scan over of preknown project names mapping to project class objects. You can also specify a required prefix for the project name. "mm trace" is loaded by a VirtualLoader, as are all the built in loaders and fetchers.
Plugins: Fetchers
A Fetcher is a plugin that goes and fetches source code for you. When you "add" a project, it's supposed to go something like this:
    Try and load it by name.  If that worked, return it.
    For each fetcher:
        Try and fetch it by name.  If that succeeded:
            Try and load it by name.  If that worked, return it.
            Remove the directory.
A fetcher knows what "type" it is ("hg", "git", or "svn"), and what URL to go get the source from. The URL uses modern str.format string substitution, allowing the following fields:
    {name}     - the name of the project to fetch
    {revision} - the revision of the project
    {branch}   - what branch to get the project from (an argument to "add")
A"fetcher" exposes a lot of interfaces, but the main one is "fetch". It looks a lot like a loader's "load" method, to whit:
def fetch(prototype) -> bool (success/failure)
One clever fetcher method: fetcher.detect() attempts to determine "could this directory have been checked out from me"? It does this by figuring out the URL used to check out the directory, then turns its URL into a regular expression (where "{name}" becomes something like "(?P<name>)"). If the regular expression matches, the groups of the match object tell us the parameters to the original "fetch" request.
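That URL-to-regex trick can be sketched in a few lines. This is an illustration of the idea only; the function name is made up and Minuteman's real code surely differs:

```python
import re

def url_to_regex(url_template):
    """Turn a str.format-style URL like ".../{name}" into a regex
    whose named groups recover the original fetch parameters."""
    pattern = re.escape(url_template)
    # re.escape turns "{name}" into "\{name\}"; rewrite it as a named group.
    pattern = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+)", pattern)
    return re.compile(pattern + "$")

rx = url_to_regex("https://hg.example.com/{name}")
m = rx.match("https://hg.example.com/libevent-python")
print(m.group("name"))  # prints: libevent-python
```

If the expression matches a checked-out directory's recorded URL, the match object's groups are exactly the parameters of the original "fetch" request.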
Plugins: Mutate
A mutator is a plugin that monitors actions and steps as they are executed. There's one sample mutator; you can add it to your workspace with
% mm add "mm trace"
The "trace" mutator
"name", "load name", and "fetch name"
Here's one example of flexibility allowed by the "slip" of projects not knowing their own names. When you load it, you can use different names at every step.
"name" is the name the Minuteman workspace uses for the project. You look up a project in workspace.projects[name] with this name.
"load name" is the name the loader uses when loading the project. If unspecified, it defaults to "name".
"fetch name" is the name the fetcher uses when fetching the project. It's the {name} substituted in to the fetcher URI. If unspecified, it defaults to "load name" (and therefore in turn to "name").
BoundInnerClass
I use my BoundInnerClass class decorator a lot. You can read about it here, and thankfully it has good documentation:
commandline.CommandLine
This class is basically terrible. Maybe the best thing is to require Python 3.2 and switch to argparse, or use something like commandline.CommandLine for the initial work and use argparse the rest of the way, or something. | https://bitbucket.org/larry/mm | CC-MAIN-2018-43 | refinedweb | 2,195 | 64.1 |
I had so much fun helping mentor a couple dozen students recently. When you call an HTTP end point in Angular 2, you map the response text to a JSON object, like this:
    getList(): Observable<ListItem[]> {
        return this.http.get(this._listUrl)
            .map(response => response.json())
            .catch(this.handleError);
    }
When the students ran their own code, which more or less looked like the segment here, they got an exception like this:
    angular2.dev.js:23925 EXCEPTION: TypeError: this._http.get(...).map is not a function in [null]
The reason for this is that the result of the HTTP call is an `Observable`. An `Observable` has nothing defined by default except `subscribe`. You need to import any other operator manually, like:

    import 'rxjs/add/operator/map';
    import 'rxjs/add/operator/catch';
If you rely on autocomplete in your editor, and it shows a couple of versions for every operator, remember to choose the one with "/add/" in it, as this is the file that adds the operator to the `Observable` definition.
You cannot add "*", unfortunately. But you can move the imports from every TypeScript file to the main entry point of your app, likely the file with the `bootstrap()` call.

A few caveats about putting it in the main file: depending on how you set your compiler and/or module loader, it might not work (it still works with the SystemJS setup you see in the official Angular 2 quick start, though). Also, some people think this is a hacky way; if you do, just add the imports at the top of each file that uses them.
Another Problem: No Providers for Http
Some students also were getting a different error:
    angular2.dev.js:23925 EXCEPTION: Error: Uncaught (in promise): No provider for Http! (CustomersComponent -> DataService -> Http)
The forgotten part this time was adding the Http providers to the bootstrap. Something like this:
    import { bootstrap } from 'angular2/platform/browser';
    import { HTTP_PROVIDERS } from 'angular2/http';
    import { AppComponent } from './app.component';

    bootstrap(AppComponent, [HTTP_PROVIDERS]);

The demo app is Dan's Angular 2 JumpStart. Go check it out. He shows how to build everything in his (surprisingly cheap, only $30) Udemy course with the same name. I remember his AngularJS in 60-ish Minutes video was a key block in my Angular 1 learning when I first started back in 2013.
Thanks for everything, Dan
What Were Your Own Problems?
I'm pretty curious, what was the biggest blocker you had when trying to play with Angular 2?
What were your own challenges?
Mention them to me on Twitter (I’m @Meligy), or just in a comment below. I can’t wait!
roberto@inf.puc-rio.br (Roberto Ierusalimschy) wrote: > I don't think so. As far as I know, the "pollution" of the global > namespace by one name per module is quite theoretical. The problem > created by 'seeall' is trivially fixed by not using 'seeall'. I like these simple solutions:-) However, an official function in module package to *selectively* import stuff from other modules would be a welcome addition to the module system, IMHO. I know that's easy to write, but as usual everybody will have his or her own version and it all won't play nicely together. -- cheers thomasl web : | https://lua-users.org/lists/lua-l/2007-04/msg00566.html | CC-MAIN-2021-49 | refinedweb | 102 | 63.9 |
#include <rpl_gtid.h>
Represents the set of GTIDs that are owned by some thread.
This data structure has a read-write lock that protects the number of SIDNOs. The lock is provided by the invoker of the constructor and it is generally the caller's responsibility to acquire the read lock. Access methods assert that the caller already holds the read (or write) lock. If a method of this class grows the number of SIDNOs, then the method temporarily upgrades this lock to a write lock and then degrades it to a read lock again; there will be a short period when the lock is not held at all.
The internal representation is a DYNAMIC_ARRAY that maps SIDNO to HASH, where each HASH maps GNO to my_thread_id.
Constructs a new, empty Owned_gtids object.
Add a GTID to this Owned_gtids.
Print this Owned_gtids to the trace file if debug is enabled; no-op otherwise.
Ensures that this Owned_gtids object can accomodate SIDNOs up to the given SIDNO.
If this Owned_gtids object needs to be resized, then the lock will be temporarily upgraded to a write lock and then degraded to a read lock again; there will be a short period when the lock is not held at all.
Return an upper bound on the length of the string representation of this Owned_groups. The actual length may be smaller. This includes the trailing '\0'.
Returns the owner of the given GTID, or 0 if the GTID is not owned.
Returns true if there is a least one element of this Owned_gtids set in the other Gtid_set.
Removes the given GTID.
If the group does not exist in this Owned_gtids object, does nothing.
Return true if the given thread is the owner of any groups.
Write a string representation of this Owned_groups to the given buffer.
Debug only: return a newly allocated string representation of this Owned_gtids. | http://mingxinglai.com/mysql56-annotation/classOwned__gtids.html | CC-MAIN-2019-22 | refinedweb | 314 | 73.37 |
This is last, final, and 10th entry in the ten commandments of test attributes that started here. And you should read all of them.
We usually talk about isolation in terms of mocking. Meaning, when we want to test our code, and the code has dependencies, we use mocking to fake those dependencies, and allow us to test the code in isolation.
That’s code isolation. But test isolation is different.
An isolated test can run alone, in a suite, in any order, independent from the other tests and give consistent results. We've already identified in footprint the different environment dependencies that can affect the result, and of course, the tested code has something to do with it.
Other tests can also create dependency, directly or not. In fact, sometimes we may be relying on the order of tests.
To give an example, I summon the witness for the prosecution: The Singleton.
Here’s some basic code using a singleton:
    public class Counter
    {
        private static Counter instance;
        private int count = 0;

        public static void Init()
        {
            instance = new Counter();
        }

        public static Counter GetInstance()
        {
            return instance;
        }

        public int GetValue()
        {
            return count++;
        }
    }
Pretty simple: The static instance is initialized in a call to Init. We can write these tests:
    [TestMethod]
    public void CounterInitialized_WorksInIsolation()
    {
        Counter.Init();
        var result = Counter.GetInstance().GetValue();
        Assert.AreEqual(0, result);
    }

    [TestMethod]
    public void CounterNotInitialized_ThrowsInIsolation()
    {
        var result = Counter.GetInstance().GetValue();
        Assert.AreEqual(1, result);
    }
Note that the second passes when running after the first. But if you run it alone it crashes, because the instance is not initialized. Of course, that’s the kind of thing that gives singletons a bad name. And now you need to jump through hoops in order to check the second case.
By the way, we’re not just relying on the order of the tests – we’re relying on the way the test runner runs them. It could be in the order we've written them, but not necessarily.
While singletons mostly appear in the tested code, test dependency can occur because of the tests themselves. As long as you keep state in the test class, including mocking operations, there’s a chance that you’re depending on the order of the run.
Do you know this trick?
public class MyTests: BaseTest { ///...
Why not put all common code in a base class, then derive the test class from it?
Well, apart from making readability suffer, and debugging excruciating, we now have all kinds of test setup and behavior located in another shared place. It may be that the test itself does not suffer interference from other tests, but we're introducing this risk by putting shared code in the base class. Plus, you'll need to know more about initialization order. And what if the base class is using a singleton? Antics ensue.
Test isolation issues show themselves very easily, because once they are out of order (ha-ha), you’ll get the red light. The problem is identifying the problem, because it may seem like an “irreproducible problem”.
In order to avoid isolation problems:
- Check the code. If you can identify patterns of usage like singleton, be aware of that and put it to use: either initialize the singleton before the whole run, or restart it before every test.
- Rearrange. If there are additional dependencies (like our counter increase), start thinking about rearranging the tests. Because the way the code is written, you’re starting to test more than just small operations.
- Don’t inherit. Test base classes create interdependence and hurt isolation.
- Mocking. Use mocking to control any shared dependency.
- Clean up. Make sure that tests clean up after themselves. Or instead, clean up before every test.
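The post's examples are C#, but the "restart the singleton before every test" advice translates directly. A small Python sketch using the standard library's unittest (class and method names are illustrative, not from the post):

```python
import unittest

class Counter:
    _instance = None

    def __init__(self):
        self.count = 0

    @classmethod
    def init(cls):
        cls._instance = cls()

    @classmethod
    def get_instance(cls):
        return cls._instance

class CounterTests(unittest.TestCase):
    def setUp(self):
        Counter.init()             # fresh singleton before every test

    def tearDown(self):
        Counter._instance = None   # leave no shared state behind

    def test_starts_at_zero(self):
        self.assertEqual(Counter.get_instance().count, 0)

    def test_still_starts_at_zero(self):
        # Passes in any order, alone or in a suite, thanks to setUp.
        self.assertEqual(Counter.get_instance().count, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CounterTests)
print(unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful())  # prints: True
```

Because setUp/tearDown run around every test, neither test depends on the other having run first — which is the whole point of isolation.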
~$40 well spent, as always. The person who rides a bike without a helmet just because he’s lazy is a fool.
25.03.10
Close Encounters of the Pavement (and Not-Perpendicularly-Crossed-Enough Railroad Tracks) Kind
21.03.10
So this is how liberty dies
18.03.10
Whole-text DOM functionality and Acid3 redux
(If you must know immediately why this post is happening now rather than a couple years ago, see the last paragraph of this post.)
In September 2008 I wrote a web tech blog post about
Text.wholeText and
Text.replaceWholeText. These are two DOM APIs which I implemented in Gecko before I graduated from MIT and took five months to thru-hike the Appalachian Trail. Implementing whole-text functionality was an interesting little bit of hacking, done in an attempt to pick up as many easy Acid3 points as possible for Firefox 3, with as little effort as possible. The functionality didn’t quite make 3.0, but aside from the missed point I think that mattered little.
The careful reader might think the post contains a slight derision for
Text.wholeText and
Text.replaceWholeText — and he would be right to think so. As I note in the last paragraph of the post,
Node.textContent (or in the real world of the web,
innerHTML) is generally better-suited for what you might use
Text.wholeText to implement. In those situations where it isn’t, direct DOM manipulation is usually much clearer.
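For a concrete picture of the semantics, Python's xml.dom.minidom happens to implement the same DOM Level 3 whole-text methods, so the behavior can be tried outside a browser (the snippet is mine, not from the original post):

```python
from xml.dom.minidom import parseString

doc = parseString("<p>Hello</p>")
p = doc.documentElement
p.appendChild(doc.createTextNode(", "))
p.appendChild(doc.createTextNode("world"))

first = p.firstChild
# wholeText spans every logically adjacent text node, not just this one.
print(first.wholeText)                 # prints: Hello, world

# replaceWholeText collapses the run into a single node with the new data.
merged = first.replaceWholeText("goodbye")
print(len(p.childNodes), merged.data)  # prints: 1 goodbye
```

Note how much ceremony that is compared to simply assigning textContent — which is the point made above.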
The whole-text approach of
Text.wholeText and
Text.replaceWholeText is arcane. Its relative usefulness is an artifact of the weird way content is broken up into a DOM that can contain multiple adjacent text nodes, in which node references persist across mutations. It is an approach motivated by fundamental design flaws in the DOM:
Text.wholeText and
Text.replaceWholeText are a patch, not new functionality. Further,
Text.replaceWholeText’s semantics are complicated, so it’s not particularly easy to use it to good effect. (Note the rather contorted example I gave in the post.)
Fundamentally, the only reason I implemented whole-text functionality is because it was in Acid3. I believe this is the only reason WebKit implemented it, and I believe it is quite probably the only reason other browser engines have implemented it. This is the wrong way to determine what features to implement. Features should be implemented on the basis of their usefulness, of their “aesthetics” (an example lacking such: shared-state threads with manual locks, rather than shared-nothing worker threads with message passing), of their ability to make web development easier, and of what they make possible that had previously been impossible (or practically so). I know of no browser engine that implemented whole-text functionality because web developers demanded it. Nevertheless, its being in a well-known test mandated its implementation; in an arms race, cost-benefit analysis must be discarded. (The one bright spot for Mozilla: in contrast to at least some of their competitors, they didn’t have to spend money, or divert an employee, contractor, or intern already more productively occupied, to implement this — beyond review time and marginal overhead, at least.)
The requirement of whole-text functionality, despite its non-importance, is one example of what I think makes Acid3 a flawed test. Acid3 went out of its way to test edge cases. Worse, it tested edge cases where differences posed little cost for web developers. Acid3 often didn’t test things web authors wanted, but instead it tested things that were broken or not implemented regardless whether anyone truly cared.
The other Acid3 bugs I fixed were generally just as unimportant as whole-text functionality. (Due to time constraints of classes and graduation, this correlation shouldn’t be very surprising, of course, but each trivial test was a missed opportunity to include something developers would care about.) Those bugs were:
- A bug in UTF-16 processing
- cursor: none, fixing a test to ensure all CSS 3 cursor keywords were recognized
- Errors thrown when parsing names and namespaces for programmatically-created elements
- A bug in Element.attributes.removeNamedItemNS
- Some bugs in how we handled omitted versus explicitly undefined arguments to some JavaScript number formatting methods
- A mistake in parsing escapes in JavaScript programs
The UTF-16 bug was exactly the sort of thing to test, especially for its potential security implications; disagreement here is frankly dangerous. (Still, I remain concerned that third-party specification inexactness caused Acid3 to permit several different semantics, listed beneath “it would be permitted to do any of the following” in Acid3‘s source. This concern will be addressed in WebIDL, among other places, in the future.)
cursor:none was an arguably reasonable test, but it probably wasn’t important to web developers because it had a trivial workaround: use a transparent image. (The same goes for other unrecognized keywords, if with less fidelity to the user’s browser conventions, therefore lending the testing of these keywords greater reasonableness.) But the other tests are careful spec-lawyering rather than reflections of web author needs. (This is not to say that spec-lawyering is not worthwhile — I enjoy spec-lawyering immensely — but the real-world impact of some non-compliance, such as the
toString example noted below, is vanishingly small.) Nitpicking the exact exceptions thrown trying to create elements with patently malformed names doesn’t really matter, because in a world of HTML almost no one creates elements with novel names. (Even in the world of XML languages, element names are confined to the vocabulary of namespaces.) Effectively no one uses
Element.attributes, and the
removeNamedItemNS method of it even less, preferring instead
{has,get,set}Attribute{,NS}. The bug in question — that
null was returned rather than an exception being thrown for non-existent attributes — was basic spec compliance but ultimately not useful functionality for web developers. Similarly, the impact of an incorrect difference between
(3.14).toString() and
(3.14).toString(undefined) is nearly negligible. The escape-parsing bug was an interesting quirk, but since other browsers produced a syntax error it had little relevance for developers. All these issues were worth fixing, but should they have been in Acid3? How many developers salivated in anticipation of the time when
eval("var v\\u0020 = 1;") would properly throw a syntax error?
Other Acid3-tested features fixed by others often demonstrated similar unconcern for real-world web authoring needs. (NB: I do not mean to criticize the authors or suggesters of mentioned tests [I'm actually in the latter set, having failed to make these opinions clear at the time]; their tests are generally valid and worth fixing. I only suggest that their tests lacked sufficient real-world importance to merit inclusion in Acid3.) One test examined support for
getSVGDocument(), a rather ill-advised method on frames and objects added by the SVG specification, whose return value, it was eventually determined (after Acid3-spawned discussion), would be identical to the sibling
contentDocument property. Another examined the values of various properties of
DocumentType nodes in the DOM, notwithstanding that web developers use document types — at source level only, not programmatically — almost exclusively for the purpose of placing browser engines in standards mode. Not all tested features were unimportant; one clear counterexample in Acid3, TTF downloadable font support, was well worth including. But if Acid3 gave web authors that, why test SVG font support? (Dynamically-modifiable fonts don’t count: they’re far beyond the bounds of what web authors might use regularly.) SVG font use through CSS was an after-the-fact rationalization: SVG fonts were only intended for use in SVG. (If one wanted to write an acid test specifically for SVG renderers, testing SVG font support at the same time might be sensible. Acid3, despite its inclusion of a few SVG tests, was certainly not such a test.)
But Acid tests don’t have to test trivialities! Indeed, past Acid tests usefully prodded browsers to implement functionality web developers craved. I can’t speak to the original as it was way before my time, but Acid2 did not have these shortcomings. The features Acid2 tested were in demand among web authors before the existence of Acid2, a fortiori desirable independent of their presence in Acid2.
I have hope Acid4 will not have these shortcomings. This is partly because the test’s author recognizes past errors as such. With the advent of HTML5 and a barrel of new standards efforts (workers, WebGL, XMLHttpRequest, CSS animations and transitions, &c. to name a few that randomly come to mind), there should be plenty of useful functionality to test in future Acid tests without needing to draw from the dregs. Still, we’ll have to wait and see what the future brings.
(A note on the timing of this post: it was originally to be a part of my ongoing Appalachian Trail thru-hike posts, because I wrote the web tech blog post on whole-text functionality during the hike. However, at the request of a few people I’ve separated it out into this post to make it more readable and accessible. [This post would have been in the next trail update, to be posted within a week.] This post would indisputably have been far more timely awhile ago, but I write only as I have time. [I wouldn't even have bothered to post given the delay, but I have a certain amount of stubbornness about finishing up the A.T. post series. Since in my mind this belongs in that narrative, and as I've never omitted a memorable topic even if (if? —ed.) it interested no one but me, I feel obliged to address this even this far after the fact.] Now, if you skipped this post’s contents for this explanation, return to the start and read on.)
17.03.10
Cheers!
Original creator unknown, from here via dolske — a nice complement to this persona on this fine St. Patrick’s Day…
The following is the source code for a Card class that represents the functionality of a single card from a standard deck of 52 playing cards. Notice that “Ace” is high, rather than low.

public class Card {
    private int cardNum;
    final static String[] suits = {"Spades", "Hearts", "Diamonds", "Clubs"};
    final static String[] ranks = {"2", "3", "4", "5", "6", "7", "8", "9", "10",
                                   "Jack", "Queen", "King", "Ace"};

    Card(int theCard) {
        setCardNum(theCard);
    }

    public void setCardNum(int theCard) {
        cardNum = (theCard >= 0 && theCard <= 51) ? theCard : 0;
    }

    public int getCardNum() {
        return cardNum;
    }

    public String toString() {
        return ranks[cardNum % 13] + " of " + suits[cardNum / 13];
    }

    public String getSuit() {
        return suits[cardNum / 13];
    }

    public String getRank() {
        return ranks[cardNum % 13];
    }

    public int getValue() {
        return cardNum;
    }
}

We also create a Deck class that initializes a deck of 52 cards and provides the usual functionality of shuffling the deck and dealing a card, if there is one.

public class Deck {
    private Card[] deck = new Card[52];
    private int topCard;

    Deck() {
        topCard = 0;
        for (int i = 0; i < deck.length; i++)
            deck[i] = new Card(i);
    }

    public void shuffle() {
        topCard = 0;
        for (int i = 0; i < 1000; i++) {
            int j = (int) (Math.random() * 52);
            int k = (int) (Math.random() * 52);
            Card tmpCard = deck[j];
            deck[j] = deck[k];
            deck[k] = tmpCard;
        }
    }

    public Card dealCard() {
        Card theCard;
        if (topCard < deck.length) {
            theCard = deck[topCard];
            topCard++;
        } else {
            theCard = null;
        }
        return theCard;
    }
}

The source code for both of these classes is available on the course web page. For this assignment, you will create a program that plays a simple game of War. In this game, each player is dealt a card from the deck. Whoever has the card with the highest value wins. If the cards that are dealt have the same value, then it is a tie and neither player wins. The player that wins the most rounds wins the game. There is no input required from the players (not very interesting!).
You should print the cards that each player is dealt and the result of that round and the final result of the game. You may want to use user input to implement a delay between each round. NOTE THIS IS BEING DONE IN jGRASP. WHOEVER GETS CORRECT GETS 5 STARS!!!
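The game class itself is left to you; the following is one possible self-contained sketch (the class name WarGame and the helpers rank, playRound, and name are my own illustrative choices, not part of the assignment). It compares cards by rank (cardNum % 13) rather than by raw card number, so that ties are possible as the description requires; in an actual solution you would deal Card objects from the provided Deck instead of working with bare integers.

```java
import java.util.Random;

// Illustrative sketch of the War game loop; not the assignment's required design.
public class WarGame {
    static final String[] SUITS = {"Spades", "Hearts", "Diamonds", "Clubs"};
    static final String[] RANKS = {"2", "3", "4", "5", "6", "7", "8", "9", "10",
                                   "Jack", "Queen", "King", "Ace"};

    // Rank index 0..12 of a card number 0..51; "Ace" (index 12) is high.
    static int rank(int cardNum) {
        return cardNum % 13;
    }

    // 1 if player 1 wins the round, 2 if player 2 wins, 0 on a tie.
    static int playRound(int card1, int card2) {
        if (rank(card1) > rank(card2)) return 1;
        if (rank(card1) < rank(card2)) return 2;
        return 0;
    }

    // Same formatting as Card.toString() in the assignment.
    static String name(int cardNum) {
        return RANKS[cardNum % 13] + " of " + SUITS[cardNum / 13];
    }

    public static void main(String[] args) {
        int[] deck = new int[52];
        for (int i = 0; i < 52; i++) deck[i] = i;

        // Fisher-Yates shuffle (the assignment's Deck.shuffle() would do here).
        Random rng = new Random();
        for (int i = 51; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            int tmp = deck[i]; deck[i] = deck[j]; deck[j] = tmp;
        }

        int wins1 = 0, wins2 = 0;
        for (int i = 0; i < 52; i += 2) {      // 26 rounds, two cards per round
            int card1 = deck[i], card2 = deck[i + 1];
            System.out.println("Player 1: " + name(card1) + "   Player 2: " + name(card2));
            switch (playRound(card1, card2)) {
                case 1:  wins1++; System.out.println("  Player 1 wins the round"); break;
                case 2:  wins2++; System.out.println("  Player 2 wins the round"); break;
                default: System.out.println("  Tie; neither player wins");
            }
        }

        System.out.println("Final: Player 1 won " + wins1 + " rounds, Player 2 won " + wins2);
        if (wins1 > wins2)      System.out.println("Player 1 wins the game!");
        else if (wins2 > wins1) System.out.println("Player 2 wins the game!");
        else                    System.out.println("The game is a tie!");
    }
}
```

A round-by-round pause could be added by reading a line from a Scanner inside the loop, as the hint about user input suggests.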
Cannot handle web plan with namespace: ".....web.2.0.1" and "...tomcat-2.0.1"
-----------------------------------------------------------------------------
Key: GERONIMO-4414
URL:
Project: Geronimo
Issue Type: Bug
Security Level: public (Regular issues)
Environment: several Geronimo app server like 1.0, 2.0.1, 2.1.1, 2.1.2
Reporter: Frank Hoffmann
Hey Guys....
I tried to deploy the "greatcow" social network into my Geronimo, as well as other .war files like XWiki.
Some of them produce the error message:
Deployment failed:
Cannot handle web plan with namespace -- expecting
or .1
Well, I tried to deploy them on the 2.0.1 Geronimo and got another message:
Deployment failed:
org.apache.geronimo.common.DeploymentException: Cannot handle web plan with namespace -- expecting or
I am confused. Where does this error come from? Is it possible that the missing
geronimo-web.xml is the problem? Does a tool exist with which I could create a
geronimo-web.xml matched to my several applications (.war files), which all run perfectly
in a standalone Tomcat?
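For what it's worth, a minimal geronimo-web.xml deployment plan for Geronimo 2.x usually looks something like the sketch below. The moduleId coordinates and context root here are placeholders, and the namespace URIs must match the schema versions your server expects (the deployer error above names web-2.0.1 and tomcat-2.0.1), so treat this only as a starting point:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://geronimo.apache.org/xml/ns/j2ee/web-2.0.1">
  <environment xmlns="http://geronimo.apache.org/xml/ns/deployment-1.2">
    <moduleId>
      <groupId>default</groupId>
      <artifactId>mywebapp</artifactId>
      <version>1.0</version>
      <type>war</type>
    </moduleId>
  </environment>
  <context-root>/mywebapp</context-root>
</web-app>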
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200811.mbox/%3C195706777.1226943224373.JavaMail.jira@brutus%3E | CC-MAIN-2018-30 | refinedweb | 189 | 52.15 |
1120-PC
Name
U.S. Property and Casualty Insurance Company Income Tax Return
For calendar year 1995, or tax year beginning , 1995, and ending , 19
OMB No. 1545-1027
Department of the Treasury Internal Revenue Service
Instructions are separate. See page 1 for Paperwork Reduction Act Notice.
A Employer identification number
Please type or print
Number and street, and room or suite no. (If a P.O. box, see page 4 of Instructions)
B Date incorporated
City or town, state, and ZIP code
C Check if a consolidated return (Attach Form 851)
D Check applicable boxes: (1) 1 2 3
Final return
(2)
Change of address
(3) (1)
Amended return 953(c)(3)(C) (2) 953(d) 1 2
E Check applicable box if an election has been made under section(s) Taxable income (Schedule A, line 37)
Taxable investment income for electing small companies (Schedule B, line 21) Check if a member of a controlled group (see sections 1561 and 1563) Important: Members of a controlled group, see instructions on page 5. a If the box on line 3 is checked, enter the corporation’s share of the $50,000, $25,000, and $9,925,000 taxable income brackets (in that order): (1) $ (2) $ (3) $ $ b Enter the corporation’s share of: (1) additional 5% tax (not to exceed $11,750) (2) additional 3% tax (not to exceed $100,000) $
4 5 6
Income tax Enter amount of tax that a reciprocal must include Total. Add lines 4 and 5 7a 7b 6478 8835 6765 8844 7c 7d b Other credits (see page 6 of instructions) c General business credit. Enter here and check which forms are attached:
4 5 6
7a Foreign tax credit (attach Form 1118)
Tax Computation and Payments
3800 8586 8845
3468 8830 8846
5884 8826 8847
d Credit for prior year minimum tax (attach Form 8827) e Total credits. Add lines 7a through 7d 8 9 10 Subtract line 7e from line 6 Foreign corporations—Tax on income not connected with U.S. business Recapture taxes. Check if from: Form 4255 Form 8611
7e 8 9 10 11a 11b 12 13
11a Alternative minimum tax (attach Form 4626) 12 13 b Environmental tax (attach Form 4626) Personal holding company tax (attach Schedule PH (Form 1120)) Total tax. Add lines 8 through 12 14a 14b 14c b Prior year(s) special estimated tax payments to be applied c 1995 estimated tax payments (See instructions) d 1995 special estimated tax payments (See page 7 14d of instructions) e 1995 refund applied for on Form 4466 14e ( ) 14f 14g 14h 14i f Enter the total of lines 14a through 14c less line 14e g Tax deposited with Form 7004 h Credit by reciprocal for tax paid by attorney-in-fact under section 835(d) i Other credits and payments 15 16 17 18
14a 1994 overpayment credited to 1995
14j 15 16 17 18
Estimated tax penalty (see page 7 of instructions). Check if Form 2220 is attached TAX DUE. If line 14j is smaller than the total of lines 13 and 15, enter AMOUNT OWED OVERPAYMENT. If line 14j is larger than the total of lines 13 and 15, enter AMOUNT OVERPAID Enter amount of line 17 you want: Credited to 1996 estimated tax $ Refunded Cat. No. 64270Q
Date Date
Title Check if self-employed EIN ZIP code Preparer’s social security no.
Form 1120-PC (1995), Page 2
Schedule A
1 2
Taxable Income—Section 832 (See page 7 of instructions.)
1 2 (a) Interest received (b) Amortization of premium
Premiums earned (Schedule E, line 7) Dividends (Schedule C, line 14)
3a Gross interest b Interest exempt under section 103
Income
c Subtract line 3b from line 3a d Taxable interest. Subtract line 3c, column (b) from line 3c, column (a) 4 5 6 7 8 9 10 11 12 13 14 15 16 Gross rents Gross royalties Capital gain net income (attach Schedule D (Form 1120)) Net gain or (loss) (Form 4797, line 20, Part II (attach Form 4797)) Certain mutual fire or flood insurance company premiums (section 832(b)(1)(D)) Income on account of special income and deduction accounts Income from protection against loss account (Schedule J, line 2e) Mutual interinsurers or reciprocal underwriters—decrease in subscriber accounts Income from a special loss discount account (attach Form 8816) Other income (attach schedule) Gross income. Add lines 1 through 13 Compensation of officers (attach schedule) (See page 8 of instructions) Salaries and wages (less employment credits) Agency balances and bills receivable that became worthless during the tax year Rents Taxes and licenses Interest Depreciation (attach Form 4562) Depletion Pension, profit-sharing, etc., plans Employee benefit programs Losses incurred (Schedule F, line 13) Additional deduction (attach Form 8816) Other capital losses (Schedule G, line 12, column (g)) Dividends to policyholders Mutual interinsurers or reciprocal underwriters—increase in subscriber accounts Other deductions (See page 10 of instructions) (attach schedule) Total deductions. Add lines 15 through 31 Subtotal. Subtract line 32 from line 14 Special line 6) deduction for section 833 organizations (Schedule H, 34a 34b b Less tax-exempt interest exp. c Bal. Charitable contributions (see page 9 of instructions for 10% limitation)
3d 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20c 21 22 23 24 25 26 27 28 29 30 31 32 33
Deductions (See instructions for limitations on deductions)
17 18 19 20a 21 22 23 24 25 26 27 28 29 30 31 32 33 34a
b Deduction on account of special income and deduction accounts c Total. Add lines 34a and 34b 35 36a Subtotal. Subtract line 34c from line 33 Dividends-received deduction (Schedule C, line 26)
34c 35 36a 36b 36c 37
b Net operating loss deduction c Total. Add lines 36a and 36b 37
Taxable income (subtract line 36c from line 35). Enter here and on page 1, line 1
Schedule B
Part I—Taxable Investment Income of Electing Small Companies—Section 834 (See page 11 of instructions.)
(a) Interest received (b) Amortization of premium
1a
Gross interest
b Interest exempt under section 103
Income
c Subtract line 1b from line 1a d Taxable interest. Subtract line 1c, column (b) from line 1c, column (a) 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 Dividends (Schedule C, line 14) Gross rents Gross royalties Gross income from a trade or business, other than an insurance business, and from Form 4797 Income from leases described in sections 834(b)(1)(B) and 834(b)(1)(C) Gain from line 13, Schedule D (Form 1120) Gross investment income. Add lines 1d through 7 Real estate taxes Other real estate expenses Depreciation (attach Form 4562) Depletion Trade or business deductions as provided in section 834(c)(8) (attach schedule) Interest Other capital losses (Schedule G, line 12, column (g)) Total. Add lines 9 through 15 Investment expenses (attach schedule) Total deductions. Add lines 16 and 17 Subtract line 18 from line 8 Dividends-received deduction (Schedule C, line 26) Taxable investment income. Subtract line 20 from line 19. Enter here and on page 1, line 2 1d 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
Part II—Invested Assets Book Values
(Complete only if claiming a deduction for general expenses allocated to investment income.)
(a) Beginning of tax year (b) End of tax year
Deductions
22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39
Real estate Mortgage loans Collateral loans Policy loans, including premium notes Bonds of domestic corporations Stock of domestic corporations Government obligations, etc. Bank deposits bearing interest Other interest-bearing assets (attach schedule) Total. Add lines 22 through 30 Add columns (a) and (b), line 31 Mean of invested assets for the tax year. Enter one-half of line 32 Multiply line 33 by .0025 Income base. Line 1b, column (a) plus line 8 less the sum of line 1b, column (b) and line 16 Multiply line 33 by .0375 Subtract line 36 from line 35. Do not enter less than zero Multiply line 37 by .25 Limitation on deduction for investment expenses. Add lines 34 and 38
22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39
Schedule C
Income 1 2 3 4 5 6 7 8
Dividends and Special Deductions (See page 12 of instructions.)
domestic 1 2 3 4 5 6 7 8 9
Dividends Received
(a) Not subject to section 832(b)(5)(B) (b) Subject to section 832(b)(5)(B)
(c) Total dividendsreceived ((a) plus (b))
Dividends from less-than-20%-owned corporations (other than debt-financed stock)
Dividends from 20%-or-more-owned domestic corporations (other than debt-financed stock) Dividends on debt-financed stock of domestic and foreign corporations Dividends on certain preferred less-than-20%-owned public utilities stock of
Dividends on certain preferred stock of 20%-or-moreowned public utilities Dividends on stock of certain less-than-20%-owned foreign corporations and certain FSCs Dividends on stock of certain 20%-or-more-owned foreign corporations and certain FSCs Dividends on stock of wholly owned foreign subsidiaries and FSCs Dividends from affiliated companies Other dividends from foreign corporations not included on lines 6, 7, and 8 Income from controlled foreign corporations under subpart F (attach Forms 5471) Foreign dividend gross-up (section 78) Other dividends (attach schedule) Total dividends. Add lines 1 through 13. Enter here and on Schedule A, line 2, or Schedule B, line 2, whichever applies
9 10 11
10 11 12 13
12 13 14
14
Dividends-Received Deduction
Deduction 15 16 17 18 19 20 21 22 23 24 25 26 Multiply line 1 by 70% Multiply line 2 by 80% Deduction for line 3 (see page 13 of instructions) Multiply line 4 by 42% Multiply line 5 by 48% Multiply line 6 by 70% Multiply line 7 by 80% Enter the amount from line 8 Total. Add lines 15 through 22. (See page 13 of instructions for limitation.) Enter the amount from line 9
(a) Not subject to section 832(b)(5)(B) (b) Subject to section 832(b)(5)(B)
(c) Total dividendsreceived deduction ((a) plus (b))
15 16 17 18 19 20 21 22 23 24
Total. Add line 23, column (b), and line 24, column (b). Enter here and on Schedule 25 F, line 10 Total deductions. Add line 23, column (c), and line 24, column (c). Enter here and on Schedule A, line 36a, or Schedule B, line 20, whichever applies
26
Schedule E
1 2
Premiums Earned—Section 832 (See page 13 of instructions.)
1
Net premiums written Unearned premiums on outstanding business at the end of the preceding 2a through 2d 2a 2b 2c 2d
2e 3
3 4
Total. Add lines 1 and 2e Unearned premiums on outstanding business at the end of the current 4a through 4d 4a 4b 4c 4d
4e 5 6 7
5 6 7
Subtract line 4e from line 3 Transitional adjustments under section 832(b)(7)(D). (See page 14 of instructions.) Premiums earned. Add lines 5 and 6. Enter here and on Schedule A, line 1
Schedule F
1 2
Losses Incurred—Section 832 (See page 14 of instructions.)
1 2a 2b 2c 3 4a 4b 4c 5 6 7 8 9 10 11 12 13
Losses paid during the tax year (attach schedule) Balance outstanding at the end of the current tax year for: a Unpaid losses on life insurance contracts b Discounted unpaid losses
c Total. Add lines 2a and 2b 3 Add lines 1 and 2c 4 Balance outstanding at the end of the preceding tax year for: a Unpaid losses on life insurance contracts b Discounted unpaid losses c Total. Add lines 4a and 4b 5 6 7 8 9 10 11 12 13 Subtract line 4c from line 3 Estimated salvage and reinsurance recoverable at the end of the preceding tax year Estimated salvage and reinsurance recoverable at the end of the current tax year Losses incurred (line 5 plus line 6 less line 7) Tax-exempt interest subject to section 832(b)(5)(B) Dividends-received deduction subject to section 832(b)(5)(B) (Schedule C, line 25)
Total. Add lines 9 and 10 Reduction of deduction under section 832(b)(5)(B). Multiply line 11 by .15 Losses incurred deductible under section 832(c)(4). Subtract line 12 from line 8. Enter here and on Schedule A, line 26
Schedule G
Other Capital Losses (See page 14 of instructions.)
(Capital assets sold or exchanged to meet abnormal insurance losses and to pay dividends and similar distributions to policyholders.)
1 2 3 4 5 6 7
Dividends and similar distributions paid to policyholders Losses paid Expenses paid Total. Add lines 1, 2, and 3 Note: Adjust lines 5 through 8 to cash method if necessary. Interest received Dividends received (Schedule C, line 14) Gross rents, gross royalties, lease income, etc., and gross income from a trade or business other than an insurance business including income from Form 4797 (include gains for invested assets only) Net premiums received Total. Add lines 5 through 8 Limitation on gross receipts from sales of capital assets. Subtract line 9 from line 4. If zero or less, enter -0(a) Description of capital asset (b) Date acquired (c) Gross sales price (d) Cost or other basis (e) Expense of sale
1 2 3 4 5 6
7 8 9
8 9 10
10
(g) Loss ((d) plus (e) less the sum of (c) and (f))
(f) Depreciation allowed (or allowable)
11
12
Totals—column (c) must not be more than line 10. (Enter amount from column (g) in Schedule A, line 28, or Schedule B, line 15, whichever applies)
Schedule H
1 2 3 4 5 6 7 8
Special Deduction And Ending Adjusted Surplus for Section 833 Organizations (See page 15 of instructions.)
1 2 3 4 5 6 7 8a 8b 9 10
Health care claims incurred during the tax year Expenses incurred during the tax year in connection with the administration, adjustment, or settlement of health care claims Total. Add lines 1 and 2 Multiply line 3 by .25 Beginning adjusted surplus Special deduction. Subtract line 5 from line 4. If zero or less, enter -0-. Enter here and on Schedule A, line 34a. (See page 15 of instructions for limitation.) Net operating loss deduction (Schedule A, line 36b) Net exempt income: a Adjusted tax-exempt income b Adjusted dividends-received deduction Taxable income (Schedule A, line 37) Ending adjusted surplus. Add lines 5 through 9
9 10
Schedule I
1 a b c 2 a b 3
Other Information (See page 15 of instructions.)
Yes No Yes No
7 Was the corporation a U.S. shareholder of any controlled foreign corporation? (See sections 951 and 957.) If “Yes,” attach Form 5471 for each such corporation. Enter number of Forms 5471 attached 8 At any time during the 1995 calendar year, did the corporation have an interest in or a signature or other authority over a financial account in a foreign country (such as a bank, securities, or other financial accounts)? If “Yes,” the corporation may have to file Form TD F 90-22.1.) If “Yes,” enter the name of the foreign country.
Check method of accounting: Cash Accrual Other (specify) Check box for kind of company: Mutual Stock Did the corporation at the end of the tax year own, directly or indirectly, 50% or more of the voting stock of a domestic corporation? (For rules of attribution, see section 267(c).) If “Yes,” attach a schedule showing: (a) name and identification number; (b) percentage owned; and (c) taxable income or (loss) before NOL and special deductions of such corporation for the tax year ending with or within your tax year.
9
4
Is the corporation a subsidiary in an affiliated group or a parent-subsidiary controlled group? If “Yes,” enter employer identification number and name of the parent corporation
Was the corporation the grantor of, or transferor to, a foreign trust that existed during the current tax year, whether or not the corporation has any beneficial interest in it? If “Yes,” the corporation may be required to file Forms 926, 3520, or 3520-A Has the corporation elected to use its own payout pattern for discounting unpaid losses and unpaid loss adjustment expenses?
10
5
Did any individual, partnership, corporation, estate or trust at the end of the tax year, own, directly or indirectly, 50% or more of the corporation’s voting stock? (For rules of attribution, see section 267(c).) If “Yes,” attach a schedule showing name and identifying number. (Do not include any information already entered in 4 above.) Enter percentage owned
11a Enter the total unpaid losses shown on the corporation’s annual statement: (1) for the current tax year: $ (2) for the previous tax year: $ b Enter the total unpaid loss adjustment expenses shown on the corporation’s annual statement: (1) for the current tax year: $ (2) for the previous tax year: $ 12 13 14 Does the corporation discount any of the loss reserves shown on its annual statement? Enter the amount of tax-exempt interest received or accrued during the tax year $ If the corporation has an NOL for the tax year and is electing to forgo the carryback period, check here Enter the available NOL carryover from prior tax years (Do not reduce it by any deduction on $ line 36b, Schedule A.)
6
Did one foreign person at any time during the tax year own, directly or indirectly, at least 25% of: (a) the total voting power of all classes of stock of the corporation entitled to vote, or (b) the total value of all classes of stock of the corporation? If “Yes,” a Enter percentage owned b Enter owner’s country
15 c The corporation may have to file Form 5472. Enter number of Forms 5472 attached
Schedule J
Protection Against Loss Account (See page 16 of instructions.)
(References are to section 824(d)(1) prior to its repeal by P.L. 99-514.)
1 2
Balance at the beginning of the year Subtractions (attach computation of any items on lines 2a through 2d): a Section 824(d)(1)(B) b Section 824(d)(1)(C) c Section 824(d)(1)(D) d Section 824(d)(1)(E) e Total. Add lines 2a through 2d. Enter here and on Schedule A, line 10 2a 2b 2c 2d
1
2e 3
3
Balance at the end of the year. Subtract line 2e from line 1
Schedule L
Balance Sheets (All filers are required to complete this schedule.)
Beginning of tax year End of tax year (c) (d)
Assets
1 Cash b Less allowance for bad debts 3 4 5 6 7 8 9 10a Inventories U.S. government obligations Tax-exempt securities (see page 16 of instructions) Other current assets (attach schedule) Loans to stockholders Mortgage and real estate loans Other investments (attach schedule) Buildings and other depreciable assets b Less accumulated depreciation 11a Depletable assets b Less accumulated depletion 12 13a b 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 Land (net of any amortization) Intangible assets (amortizable only) Less accumulated amortization Other assets (attach schedule) Total assets 2a Trade notes and accounts receivable
(a)
(b)
(
)
(
)
( (
) )
( (
) )
(
)
(
)
Liabilities and Stockholders’ Equity
Accounts payable Mortgages, notes, bonds payable in less than 1 year Insurance liabilities (See page 16 of instructions) Other current liabilities (attach schedule) Loans from stockholders Mortgages, notes, bonds payable in 1 year or more Other liabilities (attach schedule) Capital stock: a Preferred stock b Common stock Paid-in or capital surplus Retained earnings—Appropriated (attach schedule) Retained earnings—Unappropriated Less cost of treasury stock Total liabilities and stockholders’ equity
(
)
(
)
Schedule M-1
1 2 3 4 5
Reconciliation of Income (Loss) per Books with Income per Return (See page 16 of instructions.)
7 Income recorded on books this year not included in this return (itemize) a Tax-exempt interest $ 8 Deductions in this tax return not charged against book income this year (itemize) a Depreciation $ b Contributions carryover $ 9 10 Add lines 7 and 8 Income (Schedule A, line subtract line 9 from line 6 Distributions: a Cash b Stock c Property 6 7 8 Other decreases (itemize) Add lines 5 and 6 Balance at end of year (subtract line 7 from line 4) 35)—
(The corporation is not required to complete Schedules M-1 and M-2 below if the total assets on line 15, column (d), of Schedule L are less than $25,000.)
Net income (loss) per books Federal income tax Excess of capital losses over capital gains Income subject to tax not recorded on books this year (itemize) Expenses recorded on books this year not deducted in this return (itemize) a Depreciation $ b Contributions carryover $ c Travel and entertainment $ 6 1 2 3 Add lines 1 through 5 Balance at beginning of year Net income (loss) per books Other increases (itemize) 5
Schedule M-2
Analysis of Unappropriated Retained Earnings per Books (line 26, Schedule L)
4
Add lines 1, 2, and 3
Printed on recycled paper | https://www.scribd.com/document/541131/US-Internal-Revenue-Service-f1120pc-1995 | CC-MAIN-2018-30 | refinedweb | 3,627 | 51.01 |
Intel IoT Eclipse C++ Build and Run Problem on Edison
ihacivelioglu Dec 10, 2014 1:37 PM
I can build and run the Eclipse C example projects on my Edison Arduino board as described in the link below. The example in the guideline builds and runs the code in "Debug" mode, so the code only keeps working on the Edison while Eclipse is running. When I exit Eclipse, the code on the Edison stops working at the same time.
As a solution I tried to build the projects in "Release" mode, but it is impossible to build any project successfully in this mode; each time I get build and make errors. Is there any guideline that explains the configuration details needed to build the projects in Release mode, or any alternative way to run the projects independently of Eclipse?
Running a Sample Application
1. Re: Intel IoT (Edison) Eclipse C++ Build Error in Release Mode
CMata_Intel Dec 8, 2014 3:07 PM (in response to ihacivelioglu)
I think you could get more information and appropriate help about this in the XDK forums. Even so, have you tried running the code in the background using the terminal console? You could save the program in a directory on your Edison and run it there.
Regards;
CMata
2. Re: Intel IoT (Edison) Eclipse C++ Build Error in Release Mode
mhahn Dec 9, 2014 2:25 AM (in response to ihacivelioglu)
Just run the binary directly on the board: either in a terminal within Eclipse as described on (you would have to start it as "nohup <sw> &" if you want to close Eclipse) or directly from a terminal.
@CMata_Intel I guess you wanted to refer to the Internet of Things forum rather than to the XDK forum?
3. Re: Intel IoT (Edison) Eclipse C++ Build Error in Release Mode
ihacivelioglu Dec 9, 2014 3:48 PM (in response to mhahn)
Could you please give me some more detailed information to overcome my difficulty? I read all the forums that you referred to (XDK and IoT), but it was impossible to find a satisfying answer. I know that the code runs on the Edison over a remote connection, and during the Run process the compiled code is transferred into the /tmp folder on the Edison. But without Eclipse, how should I use gcc or a similar command on the Edison? If I try to use "gcc filename" I get lots of errors each time. Regards.
4. Re: Intel IoT (Edison) Eclipse C++ Build Error in Release Mode
mhahn Dec 9, 2014 4:44 PM (in response to ihacivelioglu)
That sounds different to me from your initial question.
As I understood it, you were asking how to run binaries generated by Eclipse outside of it.
That's what I answered.
Now you seem to be asking how to compile on the target. Could you please clarify what you are after?
5. Re: Intel IoT (Edison) Eclipse C++ Build Error in Release Mode
KurtE Dec 9, 2014 5:12 PM (in response to ihacivelioglu)
I use gcc all of the time directly on the Edison. I use PuTTY on a PC to create a terminal window, and WinSCP to transfer files from my machine to the Edison. Alternatively, I also use git commands to download code from places like github.com.
Note: many of my makefiles get sort-of convoluted as I run them on several different platforms, but as an example here is one that I have been playing with this week for testing out a Ping sensor using MRAA.
In a directory I have the source file, in this case testMraaPing.cpp, which has: Warning, this is a test program that started out for a different device and I just hack on it enough to try out MRAA with a ping sensor.
Example Source file:
#include <iostream> #include "stdio.h" #include "unistd.h" #include <time.h> #include <pthread.h> //========================================================================= //#include "fast_gpio.h" #include <time.h> #include "mraa.h" #include <string> #include "memory.h" mraa_gpio_context gpioDBG; using namespace std; static float cpufreq = 0; static uint64_t tsc_init = 0; static float clocks_per_ns = 0; unsigned long micros2(void) { struct timespec t; t.tv_sec = t.tv_nsec = 0; clock_gettime(CLOCK_REALTIME, &t); return (unsigned long)(t.tv_sec) * 1000000L + t.tv_nsec / 1000L; } //========================================================================= using namespace std; #define HIGH 1 #define LOW 0 #define STARTDELAY 2 #define GPIO_INDEX 2 unsigned long time_s; unsigned long time_e; __syscall_slong_t echotime; unsigned long ulDRStart, ulDeltaDr; unsigned long ulDeltaSum = 0; unsigned long ulCnt = 0; unsigned long DoPing( mraa_gpio_context gpio) { mraa_gpio_dir(gpio, MRAA_GPIO_OUT); mraa_gpio_write(gpioDBG, HIGH); mraa_gpio_write(gpio, HIGH); usleep(STARTDELAY); mraa_gpio_write(gpio, LOW); mraa_gpio_write(gpioDBG, LOW); ulDRStart = micros2(); mraa_gpio_dir(gpio, MRAA_GPIO_IN); ulDeltaDr = micros2() - ulDRStart; while (mraa_gpio_read(gpio) == LOW) ; //pthread_yield(); time_s = micros2(); mraa_gpio_write(gpioDBG, HIGH); while (mraa_gpio_read(gpio) == HIGH) ; //pthread_yield(); time_e = micros2(); mraa_gpio_write(gpioDBG, LOW); ulDeltaSum += ulDeltaDr; ulCnt++; cout << "dt dir: " << ulDeltaDr << "(" << (ulDeltaSum/ulCnt) <<" ): "; return time_e - time_s; } int main(int argc, char **argv) { mraa_result_t rtv = mraa_init(); if (rtv != MRAA_SUCCESS && rtv != MRAA_ERROR_PLATFORM_ALREADY_INITIALISED) { cout << "MRAA Init Failed,Return Value is "; cout << rtv << endl; return 0; } fprintf(stdout, "MRAA Version: %s\nStarting Read\n",mraa_get_version()); mraa_gpio_context gpio; gpio = mraa_gpio_init(GPIO_INDEX); gpioDBG = mraa_gpio_init(3); if (gpio == NULL) { cout << "Init 
GPIO Out Failed" << endl; return 0; } mraa_gpio_dir(gpio, MRAA_GPIO_OUT); mraa_gpio_dir(gpioDBG, MRAA_GPIO_OUT); mraa_gpio_use_mmaped(gpio, true); mraa_gpio_use_mmaped(gpioDBG, true); bool finishcycle = false; usleep(1000000); // give time for the ping to settle... for (;;) { echotime = DoPing(gpio); cout << echotime << endl; usleep(500000); } mraa_gpio_close(gpio); return 0; }
example: makefile
#~~~~~~~~~~~~~~~~~~~~ Output File Name ~~~~~~~~~~~~~~~~~~~~ MAIN_OUT = TestMraaPing #~~~~~~~~~~~~~~~~~~~~ Source Files ~~~~~~~~~~~~~~~~~~~~ SOURCES = \ TestMraaPing.cpp MAIN_OBJS:= $(subst .cpp,.o,$(SOURCES)) MAIN_DEPS:= $(subst .cpp,.d,$(SOURCES)) #~~~~~~~~~~~~~~~~~~~~ Include Directories ~~~~~~~~~~~~~~~~~~~~ INCLUDE_DIRS = -I. #~~~~~~~~~~~~~~~~~~~~ Library Directories ~~~~~~~~~~~~~~~~~~~~ LIBRARY_DIRS = -L/usr/lib/arm-linux-gnueabihf -L../library #~~~~~~~~~~~~~~~~~~~~ Compiler Options ~~~~~~~~~~~~~~~~~~~~ COMPILE_OPTS = -pedantic -g -O2 -fno-rtti #~~~~~~~~~~~~~~~~~~~~ Linker Options ~~~~~~~~~~~~~~~~~~~~ LDFLAGS = $(LIBRARY_DIRS) -lpthread -lmraa #~~~~~~~~~~~~~~~~~~~~ Toolchain Prefix ~~~~~~~~~~~~~~~~~~~~ #Edison hard coded OSTYPE TCHAIN_PREFIX=i586-poky-linux- CXX = $(TCHAIN_PREFIX)g++ CXXFLAGS = $(COMPILE_OPTS) $(INCLUDE_DIRS) #~~~~~~~~~~~~~~~~~~~~ all ~~~~~~~~~~~~~~~~~~~~ all: begin gccversion build end #~~~~~~~~~~~~~~~~~~~~ build ~~~~~~~~~~~~~~~~~~~~ build: $(MAIN_OUT) $(MAIN_OUT): $(MAIN_OBJS) ../library/libArduinoPort.a $(CXX) $(CXXFLAGS) $(MAIN_OBJS) -o $(MAIN_OUT) $(LDFLAGS) MSG_BEGIN = -------- begin -------- MSG_END = -------- end -------- #~~~~~~~~~~~~~~~~~~~~ Eye candy ~~~~~~~~~~~~~~~~~~~~ begin: @echo @echo $(MSG_BEGIN) end: @echo $(MSG_END) @echo gccversion: @$(CC) --version #~~~~~~~~~~~~~~~~~~~~ clean ~~~~~~~~~~~~~~~~~~~~ clean: begin clean_list end clean_list: -rm $(MAIN_OBJS) -rm $(MAIN_OUT) -rm $(MAIN_DEPS) #~~~~~~~~~~~~~~~~~~~~ backup ~~~~~~~~~~~~~~~~~~~~ backup: clean tar cJvf ../$(MAIN_OUT)_`date +"%Y-%m-%d_%H%M"`.tar.xz * #~~~~~~~~~~~~~~~~~~~~ Dependency Generation include $(subst .cpp,.d,$(SOURCES)) %.d: %.cpp $(CC) -M $(CXXFLAGS) $< > $@.$$$$; \ sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \ rm -f $@.$$$$
Again sorry that these files are a lot more complex than what is needed, But my makefiles have changed/convoluted over time. I have them setup to do some dependency checking, builds... Also most of mine have stuff in them that detect which platform it is running on and change some of the settings, like where the compiler is, or what include directories to use...
Again sorry if this is not the type of thing you are looking for.
Kurt
6. Re: Intel IoT (Edison) Eclipse C++ Build Error in Release Modeihacivelioglu Dec 10, 2014 12:28 PM (in response to ihacivelioglu)
I have two main issues which are related to eachother.
1st one: I can run sample codes as the same way that explained in the guideline but the codes run only while eclipse ide is opened. When I exit from eclipse the project stops working on edison. The answer is ok, I should run the codes inside the edison with a remote connection.
2nd one: I can see the files which are compiled by eclipse inside the /tmp folder on edison. But I'm not sure what should I do as the next step. So I'm asking for your help. I tried gcc command in different ways with a putty connection but each time I get lots of error messages.
If anyone can share a link or some documents that explains the procedure briefly and step by step that should be followed, I would be appreciated.
Regards.
7. Re: Intel IoT (Edison) Eclipse C++ Build Error in Release Modeihacivelioglu Dec 10, 2014 1:34 PM (in response to ihacivelioglu)
I obtained the solution by the links below.
Command To Run (execute) Bin Files In Linux
Regards.
8. Re: Intel IoT Eclipse C++ Build and Run Problem on Edisonorphanping Dec 13, 2014 9:04 AM (in response to ihacivelioglu)
In the setting of eclipse's Run configurations,you have set the path to remote create the excutable file, and the command to set the properties of the file. When you disconnect the Edison without eclipse remote method. You can use the putty or something else tools to connect the Edison. And directly excute the excutable files you create. | https://communities.intel.com/thread/57841 | CC-MAIN-2018-30 | refinedweb | 1,400 | 61.97 |
Array.IndexOf Method (Array, Object, Int32, Int32).
- count
- Type: System.Int32
The number of elements in the section to search.
Return ValueType: System.Int32
The index of the first occurrence of value within the range of elements in array that starts at startIndex and contains the number of elements specified in count, if found; otherwise, the lower bound of the array minus 1.
The one-dimensional Array is searched forward starting at startIndex and ending at startIndex plus count minus 1, if count is greater than 0.
The elements are compared to the specified value using the Object.Equals method. If the element type is a nonintrinsic (user-defined) type, the Equals implementation of that type is used..
Passing the Length of the array as the startindex will result in a return value of -1, while values greater than Length will raise an ArgumentOutOfRangeException.
This method is an O(n) operation, where n is count. Object itself.
The following code example shows how to determine the index of the first occurrence of a specified element.
using System; public class SamplesArray { public static void Main() { // Creates and initializes a new Array with three elements of the same value. Array myArray=Array.CreateInstance( typeof(String), 12 ); myArray.SetValue( "the", 0 ); myArray.SetValue( "quick", 1 ); myArray.SetValue( "brown", 2 ); myArray.SetValue( "fox", 3 ); myArray.SetValue( "jumps", 4 ); myArray.SetValue( "over", 5 ); myArray.SetValue( "the", 6 ); myArray.SetValue( "lazy", 7 ); myArray.SetValue( "dog", 8 ); myArray.SetValue( "in", 9 ); myArray.SetValue( "the", 10 ); myArray.SetValue( "barn", 11 ); // Displays the values of the Array. Console.WriteLine( "The Array contains the following values:" ); PrintIndexAndValues( myArray ); // Searches for the first occurrence of the duplicated value. String myString = "the"; int myIndex = Array.IndexOf( myArray, myString ); Console.WriteLine( "The first occurrence of \"{0}\" is at index {1}.", myString, myIndex ); // Searches for the first occurrence of the duplicated value in the last section of the Array. myIndex = Array.IndexOf( myArray, myString, 4 ); Console.WriteLine( "The first occurrence of \"{0}\" between index 4 and the end is at index {1}.", myString, myIndex ); // Searches for the first occurrence of the duplicated value in a section of the Array. myIndex = Array.IndexOf( myArray, myString, 6, 5 ); Console.WriteLine( "The first occurrence of \"{0}\" between index 6 and index 10 is at index {1}.", myString, myIndex ); } public static void PrintIndexAndValues( Array myArray ) { for ( int i = myArray.GetLowerBound(0); i <= myArray.GetUpperBound(0); i++ ) Console.WriteLine( "\t[{0}]:\t{1}", i, myArray.GetValue( i ) ); } } /* This code produces 6 and index 10 is at index. | http://msdn.microsoft.com/en-us/library/5h020t0a | crawl-003 | refinedweb | 418 | 53.37 |
Collections as an Alternative to Arrays
Although collections are most often used for working with the Object Data Type, you can use a collection to work with any data type. In some circumstances, it can be more efficient to store items in a collection than in an array. also provides a variety of classes, interfaces, and structures for general and special collections. The System.Collections and System.Collections.Specialized namespaces contain definitions and implementations that include dictionaries, lists, queues, and stacks. The System.Collections.Generic namespace provides many of these in generic versions, which take one or more type arguments.
If your collection is to hold elements of only one specific data type, a generic collection has the advantage of enforcing type safety. For more information on generics, see Generic Types in Visual Basic.
Example
The following example uses the .NET Framework generic class System.Collections.Generic.List to create a list collection of customer structures.
' Define the structure for a customer. Public Structure customer Public name As String ' Insert code for other members of customer structure. End Structure ' Create a module-level collection that can hold 200 elements. Public custFile As New List(Of customer)(200) ' Add a specified customer to the collection. Private Sub addNewCustomer(ByVal newCust As customer) ' Insert code to perform validity check on newCust. custFile.Add(newCust) End Sub ' Display the list of customers in the Debug window. Private Sub printCustomers() For Each cust As customer In custFile Debug.WriteLine(cust) Next cust End Sub
The declaration of the cust. | https://msdn.microsoft.com/en-us/library/e1ad18x6(VS.80).aspx | CC-MAIN-2015-32 | refinedweb | 255 | 50.23 |
A player made and maintained cheat detection tool for osu!. Provides support for detecting replay stealing and remodding from a profile, map, or set of osr files.
Project description
Circlecore
Circlecore is the backend of the circleguard project, available as a pip module. If you are looking to download and start using the program circleguard yourself, see our frontend repository. If you would like to incorporate circleguard into your own projects, read on.
To clarify, this module is referred to internally as circlecore to differentiate it from the circleguard project as a whole, but is imported as circleguard, and referred to as circleguard in this overview.
Usage
First, install circleguard:
pip install circleguard
Circleguard can be run in two ways - through convenience methods such as
circleguard.user_check() or by instantiating and passing a Check object to
circleguard.run(check), the latter of which provides more control over how and what replays to compare. Both methods return a generator containing Result objects.
The following examples provide very simple uses of Result objects. For more detailed documentation of what variables are available to you through Result objects, refer to its documentation in the code.
Convenience Methods
For simple usage, you may only ever need to use convenience methods. These methods are used directly by the frontend of circleguard and are generally maintained on that basis, so methods useful in the most number of situations are used. Convenience methods are no different from running circleguard through Check objects - internally, all convenience methods do is create Check objects and run circleguard with them anyway. They simply provide easy usage for common use cases of circleguard, such as checking a specific map's leaderboard.
from circleguard import * # replace the example api key with your own key - this key is invalid and will not work. circleguard = Circleguard("5c626a85b077fac5d201565d5413de06b92382c4") # screen a user's top plays for replay steals and remods. for r in circleguard.user_check(12092800, num_top=10, num_users=2): if r.ischeat: # later_replay and earlier_replay provide a reference to either replay1 or replay2, depending on which one was set before the other. print("Found a cheater! {} vs {}, {} set later.".format(r.replay1.username, r.replay2.username, r.later_replay.username)) # compare the top 10 HDHR plays on a map for replay steals # Mod to int documentation: for r in circleguard.map_check(1005542, num=10, mods=24): if r.ischeat: print("Found a cheater on a map! {} vs {}, {} set later.".format(r.replay1.username, r.replay2.username, r.later_replay.username)) # compare local files for replay steals for r in circleguard.local_check("/absolute/path/to/folder/containing/osr/files/"): if r.ischeat: print("Found a cheater locally! {} vs {}, {} set later.".format(r.replay1.path, r.replay2.path, r.later_replay.path)) # compare two specific users' plays on a map to check for a replay steal for r in circleguard.verify(1699366, 12092800, 7477458): if r.ischeat: print("Confirmed that {} is cheating".format(r.later_replay.username)) else: print("Neither of those two users appear to have stolen from each other")
More Generally
The more flexible way to use circleguard is to make your own Check object and run circleguard with that. This allows for mixing different types of Replay objects - comparing local .osr's to online replays - as well as the liberty to instantiate the Replay objects yourself and use your own Replay subclasses. See Advanced Usage for more on subclassing.
from circleguard import * from pathlib import Path circleguard = Circleguard("5c626a85b077fac5d201565d5413de06b92382c4") # assuming you have your replays folder in ../replays, relative to your script. Adjust as necessary PATH = Path(__file__).parent / "replays" # assuming you have two files called woey.osr and ryuk.osr in the replays folder. # This example uses python Paths, but strings representing the absolute file location will work just fine. # Refer to the Pathlib documentation for reference on what constitutes a valid Path in string form. replays = [ReplayPath(PATH / "woey.osr"), ReplayPath(PATH / "ryuk.osr")] check = Check(replays) for r in circleguard.run(check): if r.ischeat: print("Found a cheater locally! {} vs {}, {} set later.".format(r.replay1.path, r.replay2.path, r.later_replay.path)) # Check objects allow mixing of Replay subclasses. circleguard only defines ReplayPath and ReplayMap, # but as we will see under Advanced Usage, you can define your own subclasses to suit your needs. replays = [ReplayPath(PATH / "woey.osr"), ReplayMap(map_id=1699366, user_id=12092800, mods=0)] for r in circleguard.run(Check(replays)): if r.ischeat: # Replay subclasses have well defined __str__ and __repr__ methods, so we can print them directly to represent them in a human readable way if need be. print("Found a cheater! {} vs {}, {} set later.".format(r.replay1, r.replay2, r.later_replay))
Caching
Circleguard will cache downloaded replays if you give it the path to a database and set the cache option to True. This reduces download times, because replays are stored locally instead of waiting for the quite heavy api ratelimits. You can see more about setting options under Setting Options.
# if the database given doesn't exist, it will be created at the specified location. cg = Circleguard("5c626a85b077fac5d201565d5413de06b92382c4", "/path/to/your/db/file/db.db") cg.set_options(cache=True) # can also pass cache=True to a convenience method like map_check, but it will only apply for that single check. This will cache replays for all methods for this circleguard object. # all 6 replays will be loaded from the api for r in cg.map_check(221777, num=6): pass # the first 6 replays will be loaded from the cache, and only 5 will be loaded from the api, avoiding the 10 replays/min ratelimit. for r in cg.map_check(221777, num=11)
Caching persists across runs since it is stored on a file instead of in memory; just pass the path to the file when instantiating circleguard.
Advanced Usage
Setting Options
There are four tiers of options. The lowest option which is set takes priority for any given replay or comparison.
Options can be set at the highest level (global level) by using
Circleguard.set_options. Options can be changed at the second highest level (instance level) using
circleguard#set_options, which only affects the instance you call the method on. Be careful to use the static module method to change global settings and the instance method to change instance settings, as they share the same name and can be easy to confuse.
Options can be further specified at the second lowest level (Check level) by passing the appropriate argument when the Check is instantiated. Finally, options can be changed at the lowest level (Replay level) by passing the appropriate argument when the Replay is instantiated.
Settings affect all previously instantiated objects when they are changed. That is, if you change an option globally, it will change that setting for all past and future Circleguard instances.
Subclassing Replay
If you have needs that are not met by the provided implementations of Replay -
ReplayPath and
ReplayMap - you can subclass Replay (or one of its subclasses) yourself.
The following is a simple example of subclassing, where each Replay is given a unique id. If, for example, you want to distinguish between loading an otherwise identical replay at 12:05 and 12:07, giving each instance a unique id would help in differentiating them. This is a somewhat contrived example (comparing a replay against itself will always return a positive cheating result), but anytime you need to add extra attributes or methods to the classes for any reason, it's simple to subclass them.
from circleguard import * class IdentifiableReplay(ReplayPath): def __init__(self, id, path): self.id = id super().__init__(path) circleguard = Circleguard("5c626a85b077fac5d201565d5413de06b92382c4") check = Check(IdentifiableReplay(1, "/path/to/same/osr.osr"), IdentifiableReplay(2, "/path/to/same/osr.osr")) for result in circleguard.run(check): print("id {} vs {} - cheating? {}".format(result.replay1.id, result.replay2.id, result.ischeat))
Although Replay does not have the id attribute by default, because we gave our
Check object
IdentifiableReplays, it will spit back
IdentifiableReplays back at us when we run the check, and we can access our id attribute.
Besides adding information to the Replay through the constructor, you can also control when and how it gets loaded by overloading its
load method. The following example is again contrived, because we provide a database implementation for you (any replays you attempt to load through the api will be loaded from the database instead, if you had previously downloaded and cached them), but hopefully gets the point of overloading
load across.
from circleguard import * class ReplayDatabase(Replay): def __init__(self, map_id, user_id, mods, detect=Detect.ALL): self.map_id = map_id self.user_id = user_id self.mods = mods self.detect = detect self.loaded = False def load(self, loader, cache=None): # execute some sql (implementation not shown) to retrieve replay data from a local database. Assume the call returns a tuple of (replay_id, replay_data) result = load_replay_from_database(self.map_id, self.user_id, self.mods) replay_id = result[0] replay_data = result[1] Replay.__init__(self, self.user_id, self.mods, replay_id, replay_data, self.detect, loaded=True) replays = [ReplayDatabase(1699366, 12092800, 4), ReplayDatabase(1005542, 7477458, 16)] for replay in replays: print("loading replay from local database") replay.load() for result in circleguard.run(Check(replys)): print(result.similarity)
To get around the rather hairy problem of simultaneously allowing users to instantiate Replay subclasses at any point in their program and only loading them when necessary (when calling
circleguard#run(check)), circleguard opts to wait to initialize the Replay superclass until the load method is called and we have all the necessary information that the Replay class requires, either from the api, a local osr file, or some other means.
This means that if you subclass Replay, you must make sure you do a couple of things that circleguard expects from any Replay subclass. Replay must be initialized in your
load method (NOT in your
__init__ method, as you would expect), and you must set self.weight to one of
RatelimitWeight.HEAVY,
RatelimitWeight.LIGHT, or
RatelimitWeight.NONE in your
__init__ method (NOT in your load method! Circleguard needs to know how much of a toll loading this replay will cause on the program before it is loaded). The documentation from the Ratelimit Enum follows, for your convenience:
""" How much it 'costs' to load a replay from the api. If the load method of a replay makes no api calls, the corresponding value is RatelimitWeight.NONE. If it makes only light api calls (anything but get_replay), the corresponding value is RatelimitWeight.LIGHT. If it makes any heavy api calls (get_replay), the corresponding value is RatelimitWeight.HEAVY. This value currently has no effect on the program and is reserved for possible future functionality. """
replay_data must be a list of
circleparse.ReplayEvent like objects when passed to
Replay.__init__. You can look at the circleparse repository for more information, but all that means is that each object must have the
time_since_previous_action,
x,
y, and
keys_pressed attributes.
Finally, the load method of the replay must accept one required argument and one positional argument, regardless of whether you use them -
loader and
cache=None, respectively. If you need to load some information from the api, use the passed Loader class to do so (see the Loader class for further documentation). Should you want to implement a caching system of your own, the cache argument takes care of all the nasty options hierarchy issues and delivers you the final result - should this singular replay be cached? If you choose to cache the replay, you will also have to implement the loading of the replay from the cache, by writing the corresponding logic in the load method. None of that is touched by circleguard - the caching of ReplayMaps happens in an entirely different location than
replay#load. So long as you set
self.loaded to
True by initializing Replay in
load, circleguard will respect your replay and assume you have loaded the data properly.
Normally, all replays in a
Check object are loaded when you call
circleguard#run(check). However, if you require more control over when you load your replays (or which ones get loaded when you do), you can call
circleguard.load(check, replay) to load an individual replay contained in the passed
Check object. This is a shorthand method for calling
replay#load(circleguard.loader, check.cache), and going through circleguard is always recommended, as not doing so can cause unexpected caching issues with the settings hierarchy not cascading down to the replay correctly. See the last section of Subclassing Replay for more on the optional cache option for
replay#load.
There is no limitation on the order in which replays get loaded; when
circleguard#run(check) is called, it first checks if
check.loaded is
True. If it is, it assumes all the replays in the check object are loaded as well and moves on to comparing them. Else, it checks if each replay in the check object have
replay.loaded set to
True - if so, it moves on to loading the next replay. Otherwise, it calls
replay#load.
Modifying Convenience Method Check Before Loading
You may find yourself wishing to perform an action on the
Check returned by a convenience method before running it. Although the standard convenience methods create the
Check and immediately run it, Circleguard provides methods that only create the
Check (
circleguard#create_map_check,
circleguard#create_user_check, etc).
For instance, the gui Circleguard takes advantage of these methods to load the replays one by one and increment a progress bar before running the check, something that would not be possible with the standard convenience methods.
You can also modify the
Check by adding or removing replays before running it. You should see if the recommended approaches for dealing with this, such as the
include argument for convenience methods and
Check objects, satisfy your needs before resorting to modifying a returned
Check.
Contributing
If you would like to contribute to Circleguard, join our discord and ask what you can help with, or take a look at the open issues for circleguard and circlecore. We're happy to work with you if you have any questions!
You can also help out by opening issues for bugs or feature requests, which helps us and others keep track of what needs to be done next.
Conclusion
Whether you read through everything or scrolled down to the bottom, I hope this helped. If you have any questions, the link to our discord follows. We welcome any comments and are happy to answer questions.
Discord:
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/circleguard/ | CC-MAIN-2019-47 | refinedweb | 2,419 | 56.66 |
On Mon, Oct 11, 2010 at 03:13:43PM -0700, Russ Allbery wrote:
> Charles Plessy <plessy@debian.org> writes:
>
> > how about simply paraphrasing RFC 822/5322, which is our source of
> > inspiration? In that case, the requirement for field names will be
> > that they consist of printable ASCII characters, except colons.
>
> > I propose the following change in the context of the patch that I am
> > preparing for clarifying the Policy's chapter about control files, in
> > bug #593909.
>
> It occurred to me, on reviewing your other patch as well, that this change
> should probably also say explicitly that field names may not begin with #.

Here is an updated patch, which contains the following changes: apart from
adding that field names may not begin with #, I also changed "US-ASCII" to
"ASCII", since this is the vocabulary used by the Policy.

Have a nice day,

-- 
Charles
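[Editorial aside, not part of the original message: the field-name rule under
discussion — printable ASCII characters other than colon, and no leading # —
can be sketched as a small check. The function name is invented for
illustration and does not come from any dpkg tool.]

```python
def is_valid_field_name(name: str) -> bool:
    """Check a field name against the rule discussed above:
    printable ASCII characters only (which excludes whitespace),
    no colon (the name/value separator), and the name may not
    begin with '#' (that would start a comment line instead)."""
    if not name or name.startswith("#"):
        return False
    # Printable ASCII is 0x21-0x7E once whitespace is excluded;
    # colon (0x3A) is reserved as the field separator.
    return all(0x21 <= ord(c) <= 0x7E and c != ":" for c in name)
```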
>From ae5afd407773a02863169dc71bdaacaeb644570c Mon Sep 17 00:00:00 2001
From: Charles Plessy <plessy@debian.org>
Date: Wed, 13 Oct 2010 00:14:42 +0900
Subject: [PATCH] Clarification of the format of control files, Closes: #501930, #593909.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

- Specifies field names similarly to RFC 822/5322;
- Distinguishes simple, folded and multiline fields;
- Clarifies paragraph separators (#501930);
- The order of paragraphs is significant;
- Fields can have different types or purposes in different control files;
- Moved the description of comments from §5.2 to §5.1;
- Documented that relationship fields can only be folded in debian/control.
---
 policy.sgml | 116 +++++++++++++++++++++++++++++++++++++---------------------
 1 files changed, 74 insertions(+), 42 deletions(-)

diff --git a/policy.sgml b/policy.sgml
index 642f672..02637f0 100644
--- a/policy.sgml
+++ b/policy.sgml
@@ -2479,19 +2479,26 @@ endif
         fields<footnote>
             The paragraphs are also sometimes referred to as stanzas.
         </footnote>.
-        The paragraphs are separated by blank lines. Some control
+.)
+        refer to binary packages generated from the source.)  The
+        ordering of the paragraphs in control files is significant.
       </p>
 
       <p>
@@ -2509,22 +2516,52 @@ Package: libc6
       </p>
 
       <p>
-        Many fields' values may span several lines; in this case
-        each continuation line must start with a space or a tab.
-        Any trailing spaces or tabs at the end of individual
-        lines of a field value are ignored.
+        There are three types of fields:
+        <taglist>
+          <tag>simple</tag>
+          <item>
+            The field, including its value, must be a single line. Folding
+            of the field is not permitted. This is the default field type
+            if the definition of the field does not specify a different
+            type.
+          </item>
+          <tag>folded</tag>
+          <item>
+            The value of a folded field is a logical line that may span
+            several lines. The lines after the first are called
+            continuation lines and must start with a space or a tab.
+            Whitespace, including any newlines, is not significant in the
+            field values of folded fields.<footnote>
+              This folding method is similar to RFC 5322, allowing control
+              files that contain only one paragraph and no multiline fields
+              to be read by parsers written for RFC 5322.
+            </footnote>
+          </item>
+          <tag>multiline</tag>
+          <item>
+            the folded fields. Whitespace, including newlines,
+            is significant in the values of multiline fields.
+          </item>
+        </taglist>
       </p>
 
       <p>
-        In fields where it is specified that lines may not wrap,
-        only a single line of data is allowed and whitespace is not
-        significant in a field body. Whitespace must not appear
+        Whitespace must not appear
         inside names (of packages, architectures, files or anything
         else) or version numbers, or between the characters of
         multi-character version relationships.
       </p>
 
       <p>
+        The presence and purpose of a field, and the syntax of its
+        value may differ between types of control files.
+      </p>
+
+      <p>
         Field names are not case-sensitive, but it is usual to
         capitalize the field names using mixed case as shown below.
         Field values are case-sensitive unless the description of the
@@ -2532,9 +2569,17 @@ Package: libc6
       </p>
 
       <p>
-        Blank lines, or lines consisting only of spaces and tabs,
-        are not allowed within field values or between fields - that
-        would mean a new paragraph.
+        Paragraph separators (empty lines) and lines consisting only of
+        spaces and tabs are not allowed within field values or between
+        fields. Empty lines in field values are usually escaped by
+        representing them by a space followed by a dot.
+      </p>
+
+      <p>
+        Lines starting with # without any preceding whitespace are comments
+        lines that are only permitted in source package control files
+        (<file>debian/control</file>). These comment lines are ignored, even
+        between two continuation lines. They do not end logical lines.
      </p>
 
      <p>
@@ -2600,8 +2645,8 @@ Package: libc6
         <file>.changes</file> file to accompany the upload, and by
         <prgn>dpkg-source</prgn> when it creates the
         <file>.dsc</file> source control file as part of a source
-        archive. Many fields are permitted to span multiple lines in
-        <file>debian/control</file> but not in any other control
+        archive. Some fields are folded in <file>debian/control</file>,
+        but not in any other control
         file. These tools are responsible for removing the line
         breaks from such fields when using fields from
         <file>debian/control</file> to generate other control files.
@@ -2614,16 +2659,6 @@ Package: libc6
         when they generate output control files. See
         <ref id="substvars"> for details.
       </p>
-
-      <p>
-        In addition to the control file syntax described <qref
-        id="controlsyntax">above</qref>, this file may also contain
-        comment lines starting with <tt>#</tt> without any preceding
-        whitespace. All such lines are ignored, even in the middle of
-        continuation lines for a multiline field, and do not end a
-        multiline field.
-      </p>
-
     </sect>
 
     <sect id="binarycontrolfiles">
@@ -2822,11 +2857,7 @@ Package: libc6
       </p>
 
       <p>
-        Any parser that interprets the Uploaders field in
-        <file>debian/control</file> must permit it to span multiple
-        lines. Line breaks in an Uploaders field that spans multiple
-        lines are not significant and the semantics of the field are
-        the same as if the line breaks had not been present.
+        The Uploaders field in <file>debian/control</file> can be folded.
       </p>
     </sect1>
@@ -3006,7 +3037,7 @@ Package: libc6
       <p>
         This is a boolean field which may occur only in the control
         file of a binary package or in a per-package fields
-        paragraph of a main source control data file.
+        paragraph of a source package control file.
       </p>
 
       <p>
@@ -3242,7 +3273,8 @@ Package: libc6
         In a source or binary control file, the <tt>Description</tt>
         field contains a description of the binary package, consisting
         of two parts, the synopsis or the short description, and the
-        long description. The field's format is as follows:
+        long description. It is a multiline field with the following
+        format:
       </p>
 
       <p>
@@ -3306,8 +3338,8 @@ Package: libc6
         field contains a summary of the descriptions for the packages
         being uploaded. For this case, the first line of the field
         value (the part on the same line as <tt>Description:</tt>) is
-        always empty. The content of the field is expressed as
-        continuation lines, one line per package. Each line is
+        always empty. It is a multiline field, with one
+        line per package. Each line is
         indented by one space and contains the name of a binary
         package, a space, a hyphen (<tt>-</tt>), a space, and the
         short description line from that package.
@@ -3443,7 +3475,7 @@ Package: libc6
         <heading><tt>Changes</tt></heading>
 
       <p>
-        This field contains the human-readable changes data, describing
+        This multiline field contains the human-readable changes data, describing
         the differences between the last version and the current one.
       </p>
@@ -3481,7 +3513,7 @@ Package: libc6
         <heading><tt>Binary</tt></heading>
 
      <p>
-        This field is a list of binary packages. Its syntax and
+        This folded field is a list of binary packages. Its syntax and
         meaning varies depending on the control file in which it
         appears.
      </p>
@@ -3491,7 +3523,7 @@ Package: libc6
         packages which a source package can produce, separated by
         commas<footnote>
             A space after each comma is conventional.
-        </footnote>. It may span multiple lines. The source package
+        </footnote>. The source package
         does not necessarily produce all of these binary packages for
         every architecture. The source control file doesn't contain
         details of which architectures are appropriate for which of
@@ -3501,7 +3533,7 @@ Package: libc6
       <p>
         When it appears in a <file>.changes</file> file, it lists the
         names of the binary packages being uploaded, separated by
-        whitespace (not commas). It may span multiple lines.
+        whitespace (not commas).
       </p>
@@ -3624,7 +3656,7 @@ Files:
         and <tt>Checksums-Sha256</tt></heading>
 
       <p>
-        These fields contain a list of files with a checksum and size
+        These multiline fields contain a list of files with a checksum and size
         for each one. Both <tt>Checksums-Sha1</tt> and
         <tt>Checksums-Sha256</tt> have the same syntax and differ
         only in the checksum algorithm used: SHA-1
@@ -4473,13 +4505,13 @@ Checksums-Sha256:
         specification subject to the rules in <ref id="controlsyntax">,
         and must appear where it's necessary to disambiguate; it is
         not otherwise significant. All of the
-        relationship fields may span multiple lines. For
+        relationship fields can only be folded in source package control files.. When wrapping a relationship field, it
+        each open parenthesis. When opening a continuation line in a relationship field, it
         is conventional to do so after a comma and before the space
         following that comma.
       </p>
-- 
1.7.1
In this section, you will learn how to count the number of lines from the given file.
Description of code:
Java provides various useful tools for every task. One of them is the Scanner class, a member of the java.util package. Here we use the Scanner class to read all the lines of the file.

In the given example, we create a Scanner object, passing a File object into its constructor. The method hasNextLine() checks whether the file contains another line, and the method nextLine() reads one line at a time until the end of the file is reached. A counter records the number of lines found in the file.

hasNextLine(): This method of the Scanner class returns true if there is another line in the Scanner's input.
Here is the file.txt file:
Here is the code:
import java.io.*;
import java.util.*;

public class FileCountLine {
    public static void main(String[] args) throws Exception {
        File file = new File("C:/file.txt");
        Scanner scanner = new Scanner(file);
        int count = 0;
        while (scanner.hasNextLine()) {
            String line = scanner.nextLine();
            count++;
        }
        scanner.close();
        System.out.println("Lines in the file: " + count);
    }
}
In this section, the program reads the file.txt file shown above. The output of the above code is '3'. Note that blank lines are counted as well.
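For comparison, on Java 8 and later the same count can be computed more compactly with the NIO file API. This is a sketch, not part of the original tutorial; the path name is illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FileCountLineNio {
    // Count lines by streaming the file lazily; try-with-resources
    // closes the underlying stream when counting is done.
    public static long countLines(Path path) throws Exception {
        try (Stream<String> lines = Files.lines(path)) {
            return lines.count();
        }
    }

    public static void main(String[] args) throws Exception {
        Path path = Paths.get("C:/file.txt"); // illustrative path
        System.out.println("Lines in the file: " + countLines(path));
    }
}
```

Files.lines() reads the file with the default UTF-8 charset; pass a Charset as the second argument if the file uses a different encoding.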
Output:
Posted on: May | http://www.roseindia.net/tutorial/java/core/files/filenumberofLines.html | CC-MAIN-2016-50 | refinedweb | 235 | 85.08 |
Template class for a circular buffer. More...
#include <circularbuffer.h>
Template class for a circular buffer.
A circular buffer is a cyclic container with a fixed maximum size. When more data is written, the oldest data is cyclically overwritten in order to keep the maximum size constant.
The current content of the buffer can be retrieved as a time-ordered vector by calling the function getOrderedData().
The circular buffer is used by the AudioRecorder and the SignalAnalyzer.
This class consists of a header file only. There is no corresponding implementation (cpp) file.
Definition at line 51 of file circularbuffer.h.
Construct an empty buffer of maximal size zero.
Default constructor, creating an empty buffer with maximal size zero.
Definition at line 88 of file circularbuffer.h.
Clear the buffer, keeping its maximal size.
Calling this function clears the buffer, setting the actual size to zero, but it does not change its maximum size.
Definition at line 109 of file circularbuffer.h.
Copy entire data in a time-ordered form.
Since the buffer is written cyclically, the newest data entry may be anywhere in the buffer, not necessarily at the physical end. This function therefore retrieves all data contained in the cyclic buffer in a temporally ordered form as a vector, i.e., the higher the index, the newer the data. The data will NOT be removed from the buffer.
Definition at line 159 of file circularbuffer.h.
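As an illustration of the behaviour described above, here is a minimal stand-alone sketch of such a buffer. The names and implementation details are illustrative; they are not the actual class documented here:

```cpp
#include <cstddef>
#include <vector>

// Minimal circular-buffer sketch: fixed maximal size, oldest data
// cyclically overwritten when full.
template <class T>
class MiniCircularBuffer {
public:
    explicit MiniCircularBuffer(std::size_t maxSize)
        : mMaxSize(maxSize), mWritePos(0), mSize(0), mData(maxSize) {}

    // Append an element, overwriting the oldest one when full.
    void push(const T &value) {
        if (mMaxSize == 0) return;
        mData[mWritePos] = value;
        mWritePos = (mWritePos + 1) % mMaxSize;
        if (mSize < mMaxSize) ++mSize;
    }

    // Return the content as a time-ordered vector (oldest first).
    std::vector<T> getOrderedData() const {
        std::vector<T> out;
        out.reserve(mSize);
        // Once the buffer has wrapped around, the oldest element
        // sits exactly at the current write position.
        std::size_t start = (mSize < mMaxSize) ? 0 : mWritePos;
        for (std::size_t i = 0; i < mSize; ++i)
            out.push_back(mData[(start + i) % mMaxSize]);
        return out;
    }

    std::size_t size() const { return mSize; }

private:
    std::size_t mMaxSize;   // Maximal size of the buffer
    std::size_t mWritePos;  // Current write position
    std::size_t mSize;      // Current size of the buffer
    std::vector<T> mData;   // Internal cyclic data buffer
};
```

Pushing 1..5 into a buffer of maximal size 3 leaves [3, 4, 5] as the ordered content.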
Return actual maximal size.
Definition at line 62 of file circularbuffer.h.
Append a new data element to the buffer.
This function appends a new element. If the current size reaches the maximum size the oldest data is overwritten.
Definition at line 128 of file circularbuffer.h.
Retrieve time-ordered data with a maximum size of n and remove it from the buffer.
This function retrieves the existing data up to a given maximal size n in a time-ordered form and removes this data from the buffer.
Definition at line 186 of file circularbuffer.h.
Resize the buffer, discarding the oldest data if necessary.

Set a new maximal buffer size. If the new maximal size is smaller than the actual data size, only the newest data will be kept.
Definition at line 213 of file circularbuffer.h.
Return current buffer size.
Definition at line 61 of file circularbuffer.h.
Current write position.
Definition at line 67 of file circularbuffer.h.
Current size of the buffer.
Definition at line 69 of file circularbuffer.h.
Current read position.
Definition at line 66 of file circularbuffer.h.
Internal cyclic data buffer.
Definition at line 70 of file circularbuffer.h.
Maximal size of the buffer.
Definition at line 68 of file circularbuffer.h. | http://doxygen.piano-tuner.org/class_circular_buffer.html | CC-MAIN-2022-05 | refinedweb | 445 | 53.47 |
JDBC stands for Java Database Connectivity. It is an application programming interface, or API for short. To fully understand what JDBC does, we must first delve into what an API actually is.

APIs, or application programming interfaces, are the gears under the hood of our modern world. Everything today is within the consumer's reach: one can order pizza online, buy clothes, and track packages. There is a myriad of options in the modern world. However, this still begs the question: what is an API? An API is an interface through which one program requests services and exchanges data with another, such as a database server.
JDBC Drivers
The JDBC API is an application programming interface that serves as the medium between a client and relational databases. It is part of the Java SE platform from Oracle.
To make use of these drivers, the client must install these adapters on the machine they are using.
There are four types of drivers:
- JDBC-ODBC bridge driver.
- Native-API driver (partially java driver)
- Network Protocol driver (fully java driver)
- Thin driver (fully java driver)
The primary functions of these drivers allow us to connect to a data source, update and send queries to databases, and retrieve data from the database itself. We will delve into the individual functions of these drivers below. Before we can use the JDBC classes, we must import from the java.sql package.
JDBC-ODBC bridge driver
To use this driver, we must install the ODBC driver on the client machine. The bridge translates JDBC method calls into ODBC function calls, so any database that provides an ODBC driver can be accessed; however, not all platforms support this.

Furthermore, the bridge is no longer shipped with JDK 8 and later, which means this driver's usage is confined to experimental uses and theory.
Native-API driver
A noticeable difference between a Type-1 and a Type-2 driver is that the Native-API driver converts JDBC method calls into calls on the database's native client API. This means that there is NO further conversion by an ODBC driver. However, like the Type-1 driver, the native client libraries must be installed on the client machine, just like the ODBC driver. Not all databases provide a client-side library.
Network Protocol driver
A middleware server converts the JDBC calls into database-specific calls. The middleware needs to be configured for each database it serves, so a Type-3 driver is most useful when multiple databases must be reached.
Thin driver
Provides the functionality of the Type-2 and Type-3 drivers in pure Java, converting JDBC calls directly into the database's network protocol; however, each Type-4 driver is database-specific.
Connection to MYSQL
import java.sql.*;

// Creates a connection to MySQL
public class JDBC {
    public static void main(String[] args) {
        getCon();
    }

    public static Connection getCon() {
        try {
            // if you are using the local host
            String url = "jdbc:mysql://localhost:3306/database";
            String driver = "com.mysql.jdbc.Driver";
            String user = "user";         // replace with your MySQL user name
            String password = "password"; // replace with your MySQL password
            Class.forName(driver);        // load the driver class
            Connection con = DriverManager.getConnection(url, user, password);
            return con;
        } catch (Exception e) {
            System.out.print(e);
        }
        return null;
    }
}
This program connects to MySQL; the resulting connection can then be used for a variety of things, such as creating tables.
Frank da Cruz
The Kermit Project
Columbia University
As of:
C-Kermit 8.0.211, 10 April 2004
This page last updated: Sat Apr 10 16:45:30 2004 (New York USA Time)
IF YOU ARE READING A PLAIN-TEXT version of this document, note that this file is a plain-text dump of a Web page. You can visit the original (and possibly more up-to-date) Web page here:
[ C-Kermit Home ] [ Kermit Home ]
 1. INTRODUCTION
 2. FILES
 3. SOURCE CODE PORTABILITY AND STYLE
 4. MODULES
    4.A. Group A: Library Routines
    4.B. Group B: Kermit File Transfer
    4.C. Group C: Character-Set Conversion
    4.D. Group D: User Interface
    4.E. Group E: Platform-Dependent I/O
    4.F. Group F: Network Support
    4.G. Group G: Formatted Screen Support
    4.H. Group H: Pseudoterminal Support
    4.I. Group I: Security
 I. APPENDIX I: FILE PERMISSIONS
This file describes the relationship among the modules and functions of C-Kermit 5A and later, and other programming considerations. C-Kermit is designed to be portable to any kind of computer that has a C compiler. The source code is broken into many files that are grouped according to their function, as shown in the Contents.
C-Kermit has seen constant development since 1985. Throughout its history, there has been a neverending tug-of-war among:
The latter category is the most frustrating, since it generally involves massive changes just to keep the software doing what it did before in some new setting: e.g. the K&R-to-ANSIC conversion (which had to be done, of course, without breaking K&R); Y2K (not a big deal in our case); the many and varied UNIX and other API "standards"; IPv6.
[ Contents ] [ C-Kermit ] [ Kermit Home ]
(*) In fact there is little distinction between the ckc*.* and cku*.* categories. It would make more sense for all cku*.* modules to be ckc*.* ones, except ckufio.c, ckutio.c, ckucon.c, ckucns.c, and ckupty.c, which truly are specific to Unix. The rest (ckuus*.c, ckucmd.c, etc) are quite portable.
One hint before proceeding: functions are scattered all over the ckc*.c and cku*.c modules, where function size has begun to take precedence over the desirability of grouping related functions together, the aim being to keep any particular module from growing disproportionately large. The easiest way (in UNIX) to find out in what source file a given function is defined is like this (where the desired function is foo()...):
grep ^foo\( ck*.c
This works because the coding convention has been to make function names always start on the left margin with their contents indented, for example:
static char *
foo(x,y) int x, y; {
    ...
}
Also note the style for bracket placement. This allows bracket-matching text editors (such as EMACS) to help you make sure you know which opening bracket a closing bracket matches, particularly when the opening bracket is above the visible screen, and it also makes it easy to find the end of a function (search for '}' on the left margin).
Of course EMACS tags work nicely with this format too:
$ cd kermit-source-directory
$ etags ck[cu]*.c
$ emacs
Esc-X Visit-Tags-Table<CR><CR>
(but remember that the source file for ckcpro.c is ckcpro.w!)
Also:
And to answer the second-most-oft-repeated question: "Why don't you just use GNU autoconfig / automake / autowhatever instead of hard-coding all those #ifdefs?" Answers:
When writing code for the system-indendent C-Kermit modules, please stick to the following coding conventions to ensure portability to the widest possible variety of C preprocessors, compilers, and linkers, as well as certain network and/or email transports. The same holds true for many of the "system dependent" modules too; particularly the Unix ones, since they must be buildable by a wide variety of compilers and linkers, new and old.
This list does not purport to be comprehensive, and although some items on it might seem far-fetched, they would not be listed unless I had encountered them somewhere, some time. I wish I had kept better records so I could cite specific platforms and compilers.
if (i > 0 && p[i-1] == blah)
can still dump core if i == 0 (hopefully this is not true of any modern compiler, but I would not have said this if it did not actually happen somewhere).
int                                /* Put character in server command buffer */
#ifdef CK_ANSIC
putsrv(char c)
#else
putsrv(c) char c;
#endif /* CK_ANSIC */
/* putsrv */ {
    *srvptr++ = c;
    *srvptr = '\0';                /* Make sure buffer is null-terminated */
    return(0);
}
putchar(BS); putchar(SP); putchar(BS)
This overflows the CPP output buffer of more than a few C preprocessors (this happened, for example, with SunOS 4.1 cc, which evidently has a 1K macro expansion buffer).
C-Kermit needs constant adjustment to new OS and compiler releases. Every new OS release shuffles header files or their contents, or prototypes, or data types, or levels of ANSI strictness, etc. Every time you make an adjustment to remove a new compilation error, BE VERY CAREFUL to #ifdef it on a symbol unique to the new configuration so that the previous configuration (and all other configurations on all other platforms) remain as before.
Assume nothing. Don't assume header files are where they are supposed to be, that they contain what you think they contain, that they define specific symbols to have certain values -- or define them at all! Don't assume system header files protect themselves against multiple inclusion. Don't assume that particular system or library calls are available, or that the arguments are what you think they are -- order, data type, passed by reference vs value, etc. Be conservative when attempting to write portable code. Avoid all advanced features.
If you see something that does not make sense, don't assume it's a mistake -- it might be there for a reason, and changing it or removing is likely to cause compilation, linking, or runtime failures sometime, somewhere. Some huge percentage of the code, especially in the platform-dependent modules, is workarounds for compiler, linker, or API bugs.
But finally... feel free to violate any or all of these rules in platform-specific modules for environments in which the rules are certain not to apply. For example, in VMS-specific code, it is OK to use #if, because VAX C, DEC C, and VMS GCC all support it.
This problem is partially addressed by the strn...() routines, which should always be used in preference to their str...() equivalents (except when the copy operation has already been prechecked, or there is a good reason for not using them, e.g. the sometimes undesirable side effect of strncpy() zeroing the remainder of the buffer). The most gaping hole, however, is sprintf(), which performs no length checking on its destination buffer and is not easy to replace. Although snprintf() routines are starting to appear, they are not yet widespread, and certainly not universal, nor are they especially portable or full-featured.
For these reasons, we have started to build up our own little library of C Library replacements, ckclib.[ch]. These are safe and highly portable primitives for memory management and string manipulation, such as:
More about library functions in Section 4.A.
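As an illustration of the kind of bounded-copy primitive described above, here is a sketch (this is not the actual ckclib code; the function name is hypothetical — see ckclib.c for the real routines):

```c
#include <stddef.h>

/* Sketch of a safe string copy: copies at most max-1 bytes from s2
 * to s1, always NUL-terminates s1, and returns the number of bytes
 * copied.  Unlike strncpy(), it never leaves the destination
 * unterminated and does not zero-fill the remainder of the buffer.
 */
int
xstrncpy(char *s1, const char *s2, int max) {
    int i = 0;
    if (!s1 || max < 1)            /* No destination or no room */
        return(0);
    if (!s2)                       /* No source: empty result */
        s2 = "";
    while (i < max - 1 && s2[i]) { /* Copy up to max-1 bytes */
        s1[i] = s2[i];
        i++;
    }
    s1[i] = '\0';                  /* Always NUL-terminate */
    return(i);
}
```

Returning the number of bytes copied lets the caller append further material without rescanning the destination, which is one reason such primitives outperform repeated strcat() calls.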
Some modern compilers (e.g. IBM, DEC, Microsoft) have options that say "make all chars be unsigned" (e.g. GCC "-funsigned-char") and we use them when they are available. Other compilers don't have this option, and at the same time, are becoming increasingly strict about type mismatches, and spew out torrents of warnings when we use a CHAR where a char is expected, or vice versa. We fix these one by one using casts, and the code becomes increasingly ugly. But there remains a serious problem, namely that certain library and kernel functions have arguments that are declared as signed chars (or pointers to them), whereas our character data is unsigned. Fine, we can can use casts here too -- but who knows what happens inside these routines.
On another axis, C-Kermit can be in any of several major states:
When in local mode, the console and communications device are distinct. During file transfer, Kermit may put up a file-transfer display on the console and sample the console for interruption signals.
When in remote mode, the console and communications device are the same, and therefore there can be no file-transfer display on the console or interruptions from it (except for "in-band" interruptions such as ^C^C^C).
[ Contents ] [ C-Kermit ] [ Kermit Home ]
(To be filled in... For now, see Section 3.1 and the comments in ckclib.c.)
Group B modules may call upon functions from Group E, but not from Group D modules (with the single exception that the main program invokes the user interface, which is in Group D). (This last assertion is really only a conjecture.)
Here's how to add a new file character set in the original (non-Unicode modules). Assuming it is based on the Roman (Latin) alphabet. Let's call it "Barbarian". First, in ck?xla.h, add a definition for FC_BARBA (8 chars maximum length) and increase MAXFCSETS by 1. Then, in ck?xla.c:
Other translations involving Barbarian (e.g. from Barbarian to Latin-Cyrillic) are performed through these tables and functions. See ckuxla.h and ckuxla.c for extensive examples.
To add a new Transfer Character Set, e.g. Latin Alphabet 9 (for the Euro symbol), again in the "old" character-set modules:
As of C-Kermit 7.0, character sets are also handled in parallel by the new (and very large) Unicode module, ckcuni.[ch]. Eventually we should phase out the old way, described just above, and operate entirely in (and through) Unicode. The advantages are many. The disadvantages are size and performance. To add a character to the Unicode modules:
If you plan to imbed the Group B, files into a program with a different user interface, your interface must supply an appropriate screen() function, plus a couple related ones like chkint() and intmsg() for handling keyboard (or mouse, etc) interruptions during file transfer. The best way to find out about this is to link all the C-Kermit modules together except the ckuu*.o and ckucon.o modules, and see which missing symbols turn up.
C-Kermit's character-oriented user interface (as opposed to the Macintosh version's graphical user interface) consists of the following modules. C-Kermit can be built with an interactive command parser, a command-line-option-only parser, a graphical user interface, or any combination, and it can even be built with no user interface at all (in which case it runs as a remote-mode Kermit server).
Note that none of the above files is actually Unix-specific. Over time they have proven to be portable among all platforms where C-Kermit is built: Unix, VMS, AOS/VS, Amiga, OS-9, VOS, etc etc. Thus the third letter should more properly be "c", but changing it would be too confusing.
For other implementations, the files may, and probably do, have different names. For example, the Macintosh graphical user interface filenames start with "ckm". Kermit 95 uses the ckucmd and ckuus* modules, but has its own CONNECT command modules. And so on.
Here is a brief description of C-Kermit's "user interface interface", from ckuusr.c. It is nowhere near complete; in particular, hundreds of global variables are shared among the many modules. These should, some day, be collected into classes or structures that can be passed around as needed; not only for purity's sake, but also to allow for multiple simultaneous communication sessions and or user interfaces. Our list of things to do is endless, and reorganizing the source is almost always at the bottom.
The ckuus*.c modules (like many of the ckc*.c modules) depend on the existence of C library features like fopen, fgets, feof, (f)printf, argv/argc, etc. Other functions that are likely to vary among operating systems -- like setting terminal modes or interrupts -- are invoked via calls to functions that are defined in the Group E platform-dependent modules, ck?[ft]io.c. The command line parser processes any arguments found on the command line, as passed to main() via argv/argc. The interactive parser uses the facilities of the cmd package (developed for this program, but, in theory, usable by any program). Any command parser may be substituted for this one. The only requirements for the Kermit command parser are these:
  sstate                             string data
  'x' (enter server mode)            (none)
  'r' (send a 'get' command)         cmarg, cmarg2
  'v' (enter receive mode)           cmarg2
  'g' (send a generic command)       cmarg
  's' (send files)                   nfils, cmarg & cmarg2 OR cmlist
  'c' (send a remote host command)   cmarg
cmlist is an array of pointers to strings.
cmarg, cmarg2 are pointers to strings.
nfils is an integer (hmmm, probably should be an unsigned long).
The screen() function is used to update the screen during file transfer. The tlog() function writes to a transaction log (if TLOG is defined). The debug() function writes to a debugging log (if DEBUG is defined). The intmsg() and chkint() functions provide the user i/o for interrupting file transfers.
For VMS, the files are ckvfio.c, ckvtio.c, and ckusig.c (VMS can use the same signal handling routines as Unix). It doesn't really matter what the files are called, except for Kermit distribution purposes (grouping related files together alphabetically), only that each function is provided with the name indicated, observes the same calling and return conventions, and has the same type.
The Group E modules contain both functions and global variables that are accessed by modules in the other groups. These are now described.
(By the way, I got this list by linking all the C-Kermit modules together except ckutio and ckufio. These are the symbols that ld reported as undefined. But that was a long time ago, probably circa Version 6.)
[ Contents ] [ C-Kermit ] [ Kermit Home ]
#define ZCTERM  0  /* Console terminal */
#define ZSTDIO  1  /* Standard input/output */
#define ZIFILE  2  /* Current input file for SEND command */
#define ZOFILE  3  /* Current output file for RECEIVE command */
#define ZDFILE  4  /* Current debugging log file */
#define ZTFILE  5  /* Current transaction log file */
#define ZPFILE  6  /* Current packet log file */
#define ZSFILE  7  /* Current session log file */
#define ZSYSFN  8  /* Input from a system function (pipe) */
#define ZRFILE  9  /* Local file for READ command */        (NEW)
#define ZWFILE 10  /* Local file for WRITE command */       (NEW)
#define ZMFILE 11  /* Auxilliary file for internal use */   (NEW)
#define ZNFILS 12  /* How many defined file numbers */
In the descriptions below, fn refers to a filename, and n refers to one of these file numbers. Functions are of type int unless otherwise noted, and are listed mostly alphabetically.
PATH_OFF: Pathname, if any, is to be stripped
PATH_REL: The relative pathname is to be included
PATH_ABS: The full pathname is to be included
After handling pathnames, conversion is done to the result as in the zltor() description if convert != 0; if relative or absolute pathnames are included, they are converted to UNIX format, i.e. with slash (/) as the directory separator. The max parameter specifies the maximum size of fn2. If convert > 0, the regular conversions are done; if convert < 0, minimal conversions are done (we skip uppercasing the letters, we allow more than one period, etc; this can be used when we know our partner is UNIX or similar).
Returns the number of files that match fn, with data structures set up so the first file (if any) will be returned by the next znext() call. If ZX_FILONLY and ZX_DIRONLY are both set, or neither one is set, files and directories are matched. Notes:
for (n = nzxpand(string,flags); n > 0; n--) {
    znext(buf);
    printf("%s\n", buf);
}
should print all the file names; no more, no less.
Existing functions must make "if (inserver && isguest)" checks for actions that would not be legal for guests: zdelete(), zrmdir(), zprint(), zmail(), etc.
In UNIX, the only need Kermit has for privileged status is access to the UUCP lockfile directory, in order to read, create, and destroy lockfiles, and to open communication devices that are normally protected against the user (see the Unix C-Kermit Installation Instructions for discussion). Therefore, privileges should only be enabled for these operations and disabled at all other times. This relieves the programmer of the responsibility of putting expensive and unreliable access checks around every file access and subprocess creation.
Strictly speaking, these functions are not required in all C-Kermit implementations, because their use (so far, at least) is internal to the Group E modules. However, they should be included in all C-Kermit implementations for operating systems that support the notion of a privileged program (UNIX, RSTS/E, what others?).
That is, a negative return from ttchk() should reliably indicate that there is no usable connection. Furthermore, ttchk() should be callable at any time to see if the connection is open. When the connection is open, every effort must be made to ensure that ttchk returns an accurate number of characters waiting to be read, rather than just 0 (no characters) or 1 (1 or more characters), as would be the case when we use select(). This aspect of ttchk's operation is critical to successful operation of sliding windows and streaming, but "nondestructive buffer peeking" is an obscure operating system feature, and so when it is not available, we have to do it ourselves by managing our own internal buffer at a level below ttinc(), ttinl(), etc, as in the UNIX version (non-FIONREAD case).
An external global variable, clsondisc, if nonzero, means that if a serial connection drops (carrier on-to-off transition detected by ttchk()), the device should be closed and released automatically.
It is HIGHLY RECOMMENDED that ttinc() be internally buffered so that calls to it are relatively inexpensive. If it is possible to to implement ttinc() as a macro, all the better, for example something like:
#define ttinc(t) ( (--txbufn >= 0) ? txbuf[ttbufp++] : txbufr(t) )
(see description of txbufr() below)
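The refill function behind such a macro could look roughly like the following stand-alone sketch. The names (txbuf, txbufn, txbufp, txbufr) follow the macro above, but the code is illustrative, not C-Kermit's actual implementation; a string stands in for the system-dependent device read:

```c
/* Sketch of the internal buffering behind a ttinc()-style macro.
 * A static buffer is refilled in bulk; the macro consumes one byte
 * per call until the buffer is empty and then calls the refill
 * function again.  A string stands in for the real device read.
 */
#define TXBUFSIZ 256

static char txbuf[TXBUFSIZ];            /* Internal input buffer */
static int  txbufn = 0;                 /* Bytes remaining in buffer */
static int  txbufp = 0;                 /* Current read index */

static const char *fakedev = "ABCDEF";  /* Stand-in for the device */
static int fakepos = 0;

/* Refill the buffer and return the first new byte, or -1 if nothing
 * is available (a real version would honor the timeout argument). */
int
txbufr(int timo) {
    int n = 0;
    (void)timo;
    while (n < TXBUFSIZ && fakedev[fakepos])
        txbuf[n++] = fakedev[fakepos++];
    if (n == 0)
        return(-1);                     /* Nothing available */
    txbufn = n - 1;                     /* One byte is consumed now */
    txbufp = 1;
    return((unsigned char)txbuf[0]);
}

/* Buffered single-character input, as in the macro shown above. */
#define myttinc(t) ( (--txbufn >= 0) ? (unsigned char)txbuf[txbufp++] : txbufr(t) )
```

The point of the macro is that the common case (a byte already buffered) costs only a decrement, a compare, and an array index — no function call and no kernel entry per character.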
If timo is greater than zero, ttinl() times out if the eol character is not encountered within the given number of seconds and returns -1.
The characters that were input are copied into "dest" with their parity bits stripped if parity is not none. The first character copied into dest should be the start character, and the last should be the final character of the packet (the last block check character). ttinl() should also absorb and discard the eol and turn characters, and any other characters that are waiting to be read, up until the next start character, so that subsequent calls to ttchk() will not succeed simply because there are some terminators still sitting in the buffer that ttinl() didn't read. This operation, if performed, MUST NOT BLOCK (so if it can't be performed in a guaranteed nonblocking way, don't do it).
On success, ttinl() returns the number of characters read.
Optionally, ttinl() can sense the parity of incoming packets. If it
does this, then it should set the global variable ttprty accordingly.
ttinl() should be coded to be as efficient as possible, since it is
at the "inner loop" of packet reception. ttinl() returns:
-1: Timeout or other possibly correctable error.
-2: Interrupted from keyboard.
-3: Uncorrectable i/o error -- connection lost, configuration problem, etc.
>=0: on success, the number of characters that were actually read and placed in the dest buffer, not counting the trailing null.
  NET_TCPB  1   TCP/IP Berkeley (socket)  (implemented in ckutio.c)
  NET_TCPA  2   TCP/IP AT&T (streams)     (not yet implemented)
  NET_DEC   3   DECnet                    (not yet implemented)

Zero or greater: ttname is a terminal device name. Zero means a direct connection (don't use modem signals). Positive means use modem signals depending on the current setting of ttcarr (see ttscarr()).
Kermit's DIAL command ignores the carrier setting, but ttopen(), ttvt(), and ttpkt() all honor the carrier option in effect at the time they are called. None of this applies to remote mode (the tty device is the job's controlling terminal) or to network host connections (modem type is negative).
ckcnet.h: Network-related symbol definitions.
ckcnet.c: Network i/o (TCP/IP, X.25, etc), shared by most platforms.
cklnet.c: Network i/o (TCP/IP, X.25, etc) specific to Stratus VOS.
The routines and variables in these modules fall into two categories:
Category (1) functions are analogs to the tt*() functions, and have names like netopen, netclos, nettinc, etc. Group A-D modules do not (and must not) know anything about these functions -- they continue to call the old Group E functions (ttopen, ttinc, etc). Category (2) functions are protocol specific and have names prefixed by a protocol identifier, like tn for telnet x25 for X.25.
ckcnet.h contains prototypes for all these functions, as well as symbol definitions for network types, protocols, and network- and protocol- specific symbols, as well as #includes for the header files necessary for each network and protocol.
The following functions are to be provided for networks that do not use normal system i/o (open, read, write, close):
Conceivably, some systems support network connections simply by letting you open a device of a certain name and letting you do i/o to it. Others (like the Berkeley sockets TCP/IP library on UNIX) require you to open the connection in a special way, but then do normal i/o (read, write). In such a case, you would use netopen(), but you would not use nettinc, nettoc, etc.
VMS TCP/IP products have their own set of functions for all network operations, so in that case the full range of netxxx() functions is used.
The technique is to put a test in each corresponding ttxxx() function to see if a network connection is active (or is being requested), test for which kind of network it is, and if necessary route the call to the corresponding netxxx() function. The netxxx() function must also contain code to test for the network type, which is available via the global variable ttnet.
[ Contents ] [ C-Kermit ] [ Kermit Home ]
As of edit 195, Telnet protocol is split out into its own files, since it can be implemented in remote mode, which does not have a network connection:
ckctel.h:
Telnet protocol symbol definitions.
ckctel.c: Telnet protocol.
The Telnet protocol is supported by the following variables and routines:
[ Contents ] [ C-Kermit ] [ Kermit Home ]
NET_SX25 is the network-type ID for SunLink X.25. NET_VX25 is the network-type ID for VOS X.25.
So first you should new symbols for the new network types, giving them the next numbers in the sequence, e.g.:
#define NET_HX25 11 /* Hewlett-Packard X.25 */ #define NET_IX25 12 /* IBM X.25 */
This is in ckcnet.h.
Then we need symbols to say that we are actually compiling in the code for these platforms. These would be defined on the cc command line:
-DIBMX25 (for IBM) -DHPX25 (for HP)
So we can build C-Kermit versions for AIX and HP-UX both with and without X.25 support (since not all AIX and IBM systems have the needed libraries, and so an executable that was linked with them might no load).
Then in ckcnet.h:
#ifdef IBMX25 #define ANYX25 #endif /* IBMX25 */
#ifdef HPX25 #define ANYX25 #endif /* HPX25 */
And then use ANYX25 for code that is common to all of them, and IBMX25 or HPX25 for code specific to IBM or HP.
It might also happen that some code can be shared between two or more of these, but not the others. Suppose, for example, that you write code that applies to both IBM and HP, but not Sun or VOS X.25. Then you add the following definition to ckcnet.h:
#ifndef HPORIBMX25 #ifdef HPX25 #define HPORIBMX25 #else #ifdef IBMX25 #define HPORIBMX25 #endif /* IBMX25 */ #endif /* HPX25 */ #endif /* HPORIBMX25 */
You can NOT use constructions like "#if defined (HPX25 || IBMX25)"; they are not portable.
[ Contents ] [ C-Kermit ] [ Kermit Home ]
In the UNIX version, we use the curses library, plus one call from the termcap library. In other versions (OS/2, VMS, etc) we insert dummy routines that have the same names as curses routines. So far, there are two methods for simulating curses routines:
Here are the stub routines:
In the MYCURSES case, code must be added to each of the last three routines to emit the appropriate escape sequences for a new terminal type.
[ Contents ] [ C-Kermit ] [ Kermit Home ]
[ Contents ] [ C-Kermit ] [ Kermit Home ]
The format of this field (the "," attribute) is interpreted according to the System ID ("." Attribute).
For UNIX (System ID = U1), it's the familiar 3-digit octal number, the low-order 9 bits of the filemode: Owner, Group, World, e.g. 660 = read/write access for owner and group, none for world, recorded as a 3-digit octal string. High-order UNIX permission bits are not transmitted.
For VMS (System ID = D7), it's a 4-digit hex string, representing the 16-bit file protection WGOS fields (World,Group,Owner,System), in that order (which is the reverse of how they're shown in a directory listing); in each field, Bit 0 = Read, 1 = Write, 2 = Execute, 3 = Delete. A bit value of 0 means permission is granted, 1 means permission is denied. Sample:
r-01-00-^A/!FWERMIT.EXE'" s-01-00-^AE!Y/amd/watsun/w/fdc/new/wermit.exe.DV r-02-01-^A]"A."D7""B8#119980101 18:14:05!#8531&872960,$A20B-!7(#512@ #.Y s-02-01-^A%"Y.5!
A VMS directory listing shows the file's protection as (E,RWED,RED,RE) which really means (S=E,O=RWED,G=RED,W=RE), which is reverse order from the internal storage, so (RE,RED,RWED,E). Now translate each letter to its corresponding bit:
RE=0101, RED=1101, RWED=1111, E=0010
Now reverse the bits:
RE=1010, RED=0010, RWED=0000, E=1101
This gives the 16-bit quantity:
1010001000001101
This is the internal representation of the VMS file permission; in hex:
A20B
as shown in the sample packet above.
The VMS format probably would also apply to RSX or any other FILES-11 system.
First of all, the book is wrong. This should not be the World protection, but the Owner protection. The other fields should be set according to system defaults (e.g. UNIX umask, VMS default protection, etc), except that no non-Owner field should give more permissions than the Owner field.
[ Top ] [ Contents ] [ C-Kermit Home ] [ Kermit Home ] | http://www.columbia.edu/kermit/ckcplm.html | crawl-002 | refinedweb | 4,502 | 61.36 |
Well, it?s more than that?
While starting with java as your core language, the only thing that should be on your mind is to understand every native feature that the language has to offer. As java is all about classes, it has some neat design patterns for developers to follow. Your duty as a responsible programmer is to question these design patterns quite often; after all the engineers who built java planned on designing it the way it is now for a reason. So without wasting much time on gossip, lets dive in?.
The why ?
Java being an object oriented language gives you the bliss to write your code in the form of reusable classes. Now as the word reusable has been used, it is there for a reason. Code re-usability doesn’t start by creating objects out of classes, it starts way before that; while you are creating classes itself.
So we have Interface, Abstract class and Concrete class.
PS: Interface is not a class.
1. Interface
Interface is a blueprint for your class that can be used to implement a class ( abstract or not); the point is interface cannot have any concrete methods. Concrete methods are those methods which have some code inside them; in one word – implemented. What your interface can have is static members and method signatures. The example below shall help you understand how to write an interface.
public interface Brain{ public static final int number = 1; public void talk( String name ); public abstract void doProgramming();}
The declaration is much like a class but inside the interface there are some strict rules you need to follow:
- All methods that you declare in an interface can have ? static ?, ? default ? or ? abstract ? modifiers ( Since Java 8 ). Implicitly they are ? public abstract ?.
- Since Java 8, methods can be implemented ( can have a code body ) in an interface if only if it is declared static or default. Abstract methods cannot have a body; all they can have is a method signature as shown in the example above.
- Variables are not allowed in interface. Hence any data declaration is ? public static final ?; hence only constants.
- Interfaces can extend other interfaces ( one or more ) but not classes ( abstract or not ).
- Interfaces cannot be instantiated as they are not concrete classes.
- Methods and constants cannot be declared ? private ?, methods cannot be declared ? final ?.
2. Abstract class
Abstract classes are a bit different from interfaces. These are also used to create blueprints for concrete classes but abstract classes may have implemented methods. Abstract classes can implement one or more interfaces and can extend one abstract class at most. There is a logical reason to this design which we will talk about later in this post. Here is an example of Abstract class creation.
public abstract class Car{ public static final int wheels = 4; String turn( String direction ){ System.out.println( “Turning” + direction ); } public abstract void startWithSound( String sound ); public abstract void shutdown( );}
The declaration rules are as follows:
- A class can be an abstract class without having any methods inside it. But if it has any methods inside it, it must have at least one abstract method. This rule does not apply to static methods.
- As abstract classes can have both abstract and non abstract methods, hence the abstract modifier is necessary here ( unlike in interface where only abstract methods are allowed ).
- Static members are allowed.
- Abstract classes can extend other at most one abstract or concrete class and implement several interfaces.
- Any class that does not implement all the abstract methods of it?s super class has to be an abstract class itself.
3. Concrete class
Concrete classes are the usual stuff that every java programmer has come across for sure. It is like the final implementation of a blueprint in case you are extending it some abstract super class. A concrete class is complete in itself and can extend and can be extended by any class.
public class Rocket{ public static final int astronauts = 4; String turn( String direction ){ System.out.println( “Turning” + direction ); } public abstract void startWithSound( String sound ){ System.out.println( “Engines on ” + sound + “!!”); } public abstract void shutdown( ){ System.out.println( “Ignitions off !!” ); }}
There are no unusual rules of declaration to talk about other that the fact that all the methods have to be concrete and it can extend abstract or concrete class as well as implement several interfaces. The only condition is that all the methods have to be implemented in order for it to qualify as a concrete class.
Extends and Implements
Now for the sake of understanding, lets consider an example of some interfaces, abstract classes and concrete classes that might help you to clarify your doubts about who can implement and extend what. I will put some comments where ever necessary. One thing to keep in mind is the fact that implements keyword is used when you can implement the inheritance, while extends is used to when we have some implemented methods that we can use from the inheritance.
interface Cognition{ void doProgramming();}interface Motor{ void write(); void bite();}interface Brain extends Cognition, Motor{ //the logic behind extends here is we cannot //implement anything in an interface int number = 1;}abstract class Body{ //Totally valid}abstract class clothes extends Body{ abstract void whatDoYouLike( String type );}//As in the next class I don’t plan to implement everything so I //have to leave it as abstractabstract class LivingBeing extends Clothes implements Brain{ void bite(){ System.out.println( “I know how to do that, ghaawww” ); } void whatDoYouLike( String type ){ System.out.println( “I like to wear a” + type ); }}//Now I am planning to implement all the abstract methods so time to //switch to a concrete classclass Human extends LivingBeing{ void write(){ System.out.println( “I will write a medium post !” ); } void doProgramming(){ System.out.println( “I would love to code !!” ); }}class Dog extends LivingBeing{ void write(){ System.out.println( “Woof Woof ??” ); }void doProgramming(){ System.out.println( “Woof Woof ??” ); }}
PS: Java does not allow multiple class inheritance because if two class had two different implementation of the same method then the compiler won?t know which one to use; while on the other hand you can inherit multiple interfaces because there is no implementation for the compiler to be confused about, and its up to you how you wish to implement it.
When to use what ?
The above example might have helped you in understanding the use cases and I am sure now you are convinced that it is not complicated design but simple ingenuity. So whenever you need multiple inheritance and a clean and clear blueprint that has only the design and not the implementation; you go for interface. If you don?t need multiple inheritance but you need a mix of design plan and pre-implementation then abstract class is your choice.
Knowledge not shared is wasted ? Clan Jacobs
? Like, Share and Follow?. ? | https://911weknow.com/interface-vs-abstract-class-vs-concrete-class | CC-MAIN-2021-31 | refinedweb | 1,149 | 64.41 |
hepabolu wrote:
> hepabolu wrote:
>
>> In short: after I've added the missing nodeIds and Ross changed the
>> site.xml file in Forrest, the only thing left are the links from the
>> samples into the docs.
>> The top-level can probably used as is with the exception of the live
>> sites, these are updated in Daisy.
>
>
> AFAICT I fixed all missing nodeIds.
Great stuff Helma.
This approach enabled me to easily modify the Forrest plugin to handle
the imports, so we are still mirroring the nav documents in Daisy
without any changes in Forrest to the site structure.
Currently, what happens is that if a Daisy navigation node contains an
import then it doesn't add anything to the URL-Space for that node. When
I have more time I'll make this more configurable, but hopefully it will
be sufficient for this.
WARNING
-------
I have *NOT* tested this very extensively, in fact I've just done a few
simple manual checks, I've not even done a full site build. I have a
very early start tomorrow and I'm going to bed now.
The next build of the ForrestBot will reveal all, if we get no message
to this list then the build went well, so it is time to see how close to
the required URL Space we are.
The next build is due in 1.5 hours (that is 2 am UTC).
I'm not able to look into this again until Friday AM (UTC + 1).
Ross | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200511.mbox/%3C437295D2.20909@apache.org%3E | CC-MAIN-2018-51 | refinedweb | 250 | 79.5 |
!
Let's assume we have a noisy, binary image, and we want to recover the original image. The code below will create our binary image coded as -1, 1 and the noisy version.
import numpy as np from scipy.signal import convolve2d from scipy import stats %pylab inline
Populating the interactive namespace from numpy and matplotlib
img = imread('lettera.bmp') arr = np.asarray(img, int) mn = np.mean(arr) arr[arr < mn] = -1 arr[arr >= mn] = 1 original_mean = np.mean(arr) print original_mean
-0.477294921875
sigma = 2 noisy_arr = arr + sigma*np.random.normal(size=arr.shape)
imshow(arr) title("Clean Image")
<matplotlib.text.Text at 0xa62090c>
noisy_mean = np.mean(noisy_arr) print noisy_mean imshow(noisy_arr > noisy_mean) title("Noisy Image")
-0.467782399643
<matplotlib.text.Text at 0xa7595cc>
# First, set up our averaging kernel w_conv = np.ones((3, 3)) w_conv[1, 1] = 0 # The simplest denoising: just average nearby pixels. less_noisy_arr = convolve2d(noisy_arr, w_conv, mode='same') # mode='same' to keep same size less_noisy_mean = mean(less_noisy_arr) imshow(less_noisy_arr > less_noisy_mean) title("Less Noisy Image by simple convolution") figure() num_iters = 20 for i in range(num_iters): less_noisy_arr = convolve2d(less_noisy_arr, w_conv, mode='same') # mode='same' to keep same size less_noisy_mean = mean(less_noisy_arr) imshow(less_noisy_arr > less_noisy_mean) title("Less Noisy Image by repeated convolution.") figure() num_iters = 280 for i in range(num_iters): less_noisy_arr = convolve2d(less_noisy_arr, w_conv, mode='same') # mode='same' to keep same size less_noisy_mean = mean(less_noisy_arr) imshow(less_noisy_arr > less_noisy_mean) title("Less Noisy Image by too many repeated convolutions.")
/home/guyrt/.virtualenvs/statistocat/local/lib/python2.7/site-packages/scipy/signal/signaltools.py:422: ComplexWarning: Casting complex values to real discards the imaginary part return sigtools._convolve2d(in1, in2, 1, val, bval, fillvalue)
<matplotlib.text.Text at 0xaf8c4acc>
While this averaging method might be easy to encode, it can be hard to tune. It also tends to "oversmooth". Eventually, it will oversmooth to a single, homogenous image!
How are we to recover the original image? The key insight is similar to our naive implementation: a pixel's value is usually correlated with its neighbors. Thus, we cast the problem as a Markov Random Field. The probability of an image y is$$ \log \tilde{p}(y) = - \sum_{s \neq t } y_s w_{st} y_t$$
There are a few observations to be made here. First, the weight $ w_{st} $ describes the amount that we care about whether pixels $y_s$ and $y_t$ are equal to each other. If $w_{st} > 1$ then the probability of the total image is higher if $y_t$ and $y_s$ share the same value. This follows because $-1 * 1 * -1 = 1 * 1* 1 = 1$ but $1 * 1 * -1 = -1 * 1 * 1 = -1.$ It is often informative to think about Ising models in terms of a sum over products of pixel values encoded as -1, 1. Computationally, it's easier to treat them as a convolution, which we will see.
The values w_{s,t} should enforce our key insight: we only care about comparing $y_s$ and $y_t$ if they are near each other in the image. If that isn't true, then the weight is 0. We can recast the equation as$$ \log \tilde{p}(y) = \frac{1}{2} y^T W y $$
where W is a sparse block Toeplitz matrix. In keeping with our insight, we set $w_{st} = 1$ iff $y_s$ and $y_t$ are adjacent pixels. In fact, if we create a convolution matrix to simulate W, we see that the noise is somewhat reduced. Repeating the process does a fair job of removing noise, but it leads to "clumping": once enough pixels near each other take on a common value, they maintain that value rather well.
However, the key insight above isn't enough to produce a solution.
Now, let's recover the original image using something a bit more powerful. To do this, we'll need to set up a bit of notation. Call the noisy image $y$ and the denoised image $x$, both of which are vectorized images (though our code uses convolve2, so we keep them as actual images). We want to find$$ argmax_x p(y, x) = p(x) p(y|x). $$
Theoretically, we want to find the image $x$ that is most likely to produce the noisy image $y$. If that feels backwards at first, that's okay. Identifying the best input given a know output is a very common machine learning technique.
The prior, $p(x)$ is constructed from the Ising model, so it encodes the idea that nearby pixels influence each other:$$ p(x) = \frac{1}{Z_0} exp(-E_0(x)) = \frac{1}{Z_0} exp(\sum_i\sum_{j \in nbhd(i)} W_{ij} x_i x_j). $$
Note: we don't need to know the value of our normalization constant $Z_0$. Below, we use the unnormalized probabilities $\tilde p = pZ$ for the constant terms in our equations.
from scipy.sparse import dia_matrix, eye def unnormalized_log_prior(x):).dot(x_flat) print unnormalized_log_prior(2 * (noisy_arr > noisy_mean) - 1) print unnormalized_log_prior(2 * (less_noisy_arr > less_noisy_mean) - 1) print unnormalized_log_prior(ones(noisy_arr.shape)) print unnormalized_log_prior(ones(noisy_arr.shape) * -1)
13550.0 123030.0 130302.0 130302.0
The extrema of $p(x)$ occur when $x$ is a constant image, but those are probably not suitable solutions to the joint probability.
Now that we have a prior, we can examine the likelihood $p(y|x)$ of noisy image $y.$ $$ p(y|x) = \prod_i p(y_i|x_i).$$ = \sum_i exp(-L_i(x_i)). $$
for log likelihood loss function $L_i.$ The loss function $L_i$ gives us the probability of observing $y_i$ given that the original image is $x_i$. Here, we assume that the distribution of $y_i$ is normal with a mean determined by $x_i$. In our case, this is obviously true: we designed the noise function!
That gives us a posterior:\begin{equation} p(x|y) = \frac{p(y|x) p(x)}{p(x, y)} = \frac{1}{Z} exp(\sum_i L_i(x_i) - E_0(x)) \end{equation}
where $E_0(x)$ is the Ising weight function defined above.
Right away, we have a problem. The prior energy $E_0$ is defined by the Ising model, which includes interconnections between adjacent pixels in an image, which makes for a function that is difficult to optimize. (For more information on reasoning about Ising models in 2-d in physics, see Wikipedia.) This is where we use variational inference. Rather than deal with the likelihood, which is hard to optimize, we'll deal with a factored approximation.
As a quick review, Variational Inference is an approximation technique for a probability distribution $p$. The idea is to create a class of distributions $q$ that are easier to work with, and then to parameterize $q$ to minimize the approximation error.
In this case, our simplification is to assume that the posterior fully factorizes to$$ q(x) = \prod_i q_i(x_i, \mu_i) $$
where parameter $\mu_i$ is the mean value at pixel $i.$ Note that (1) we can choose $\mu_i$ to minimize $q$: it is a variational parameter, and (2) those means are independent of the surrounding pixels. That means we can derive the mean field update for each pixel independently. We start with$$ \log \tilde{p} (x_i) = x_i \sum_{j \in nbhd(i)} W_{ij} x_j + L_i(x_i) + const .$$
For the mean field update, we need to compute (see Murphy section 21.3.1 for details)$$ \log q_j(x_j) = \mathbb E_{-q_j} [\log \tilde p(x)] + const $$
and since $E_{-q_j} (f) = \sum_{k \neq j} q(x_j, \mu_j | x_j) f(j) = \sum_{k \neq j} q(\mu_j) f(j), $ we have$$ q_i(x_i) \propto exp(x_i \sum_{j \in nbhd(i)} W_{ij} \mu_j + L_i(x_i)) .$$
That's the important theoretical step. Murphy derives an actual update using a great deal of exponential mathematical gymnastics. If you do read his derivation (page 738 in my copy), note that it uses $L_i^+ \equiv L_i(+1)$ and $L_i^- \equiv L_i(-1)$ which are the log likelihood functions centered at each of these two values. The variance in the likelihood controls the strength of the prior. This is the final update, which also incorporates a damping term:$$ \mu_i^t = (1 - \lambda)\mu_i + \lambda \tanh \big( x_i \sum_{j \in nbhd(i)} W_{ij} \mu_j + 0.5 (L_i^+ - L_i^-) \big) $$
Damping is required because the mean field Ising update alone can lead to clumping around local extrema.
It's actually pretty easy to compute this in python.
def unnormalized_log_prior2(x): """ compute the log prior using adjacent pairs. """) noisy_arr_copy = noisy_arr.copy() lmbda = 0.5 for i in range(15): logodds = log(stats.norm.pdf(noisy_arr_copy, loc=1, scale=2)) - log(stats.norm.pdf(noisy_arr_copy, loc=-1, scale=2)) noisy_arr_copy = (1 - lmbda) * noisy_arr_copy + lmbda * tanh(unnormalized_log_prior2(noisy_arr_copy).reshape(noisy_arr_copy.shape) + .5 * logodds)
imshow(noisy_arr_copy) print sum(np.abs(noisy_arr_copy - arr)) / arr.size
0.029547428632
denoised_mu = np.mean(noisy_arr_copy) noisy_arr_copy = 2 * (noisy_arr_copy > denoised_mu) - 1
imshow(noisy_arr_copy)
<matplotlib.image.AxesImage at 0xb0992ac>
imshow(arr)
<matplotlib.image.AxesImage at 0xb4873ec>
sum(np.abs(noisy_arr_copy - arr)) / arr.size
0
Here's where we experienced errors:
imshow(noisy_arr_copy - arr)
<matplotlib.image.AxesImage at 0xb71df4c> | https://nbviewer.org/url/guyrt.github.io/notebooks/Ising_model.ipynb | CC-MAIN-2022-40 | refinedweb | 1,489 | 57.77 |
Why BPEL is not the holy grail for BPM
Introduction
Looking at recent articles and various BPM solutions, it would be easy to assume that BPEL is now the de facto standard for implementing a workflow engine. From a technical perspective this may well be correct; however, few people will claim that BPEL can be easily understood by the end-user, a.k.a. the business analyst, who definitely prefers a graph-based notation such as BPMN. This article will provide guidance in understanding the discrepancy between the technical point of view (pro-BPEL) and the analyst's (pro-BPMN). Going further, even though most BPEL-based BPM solutions acknowledge the discrepancy (since they usually provide a BPMN to BPEL mapping), this article will explain why that mapping is not currently the solution to BPM problems. A real-world example will be used to illustrate our arguments.
Parallelism and Structuredness in Programming Languages
Developers and BPM users may believe that BPEL [BPEL07] is a structured language since it is basically based on blocks, much like traditional languages such as Java and C, amongst others. This comes in part from its origin: Microsoft's XLANG, which was block based. However, BPEL's origins also include IBM's WSFL, and this is of great importance for the following discussion since WSFL was graph based (hence unstructured). We find in BPEL a mix of structuredness (blocks) and unstructuredness (control links and events). These last constructs introduce a bit of unstructuredness into a world of structuredness... The conclusion is that BPEL is not a structured language, even if it looks like one.
On the other hand, BPMN is a flow-chart notation which is naturally unstructured. No doubt about this. In chapter 11 (page 137) of the BPMN specification [BPMN06], a direct mapping from BPMN to BPEL is provided. Some BPMN editors (and users) believe that BPMN is a simple GUI for the underlying BPEL language. This is not quite true as explained in the BPMN FAQ:
"By design there are some limitations on the process topologies that can be described in BPEL, so it is possible to represent processes in BPMN that cannot be mapped to BPEL".
This article will provide some fundamental details behind that very important statement. But let us first focus on structured versus unstructured languages. Why is this so important? The main point is that it is much harder to perform code analysis on an unstructured language than on a structured one (such as Java, C, and most --- if not all --- widely used programming languages). Code analysis has a wide range of applications, from error checking (e.g. compilers) to bug detection (e.g. FindBugs, deadlock detection, ...) and quality checking (e.g. Checkstyle).
An important theorem from Böhm and Jacopini [BOHM66] (and popularized on Wikipedia) states that every computable function can be implemented in a programming language that combines subprograms in only three specific ways. These three control structures are:
executing one subprogram, and then another subprogram (sequence);
executing one of two subprograms according to the value of a boolean variable (selection);
executing a subprogram until a boolean variable is true (iteration).
This basically means that any (unstructured) flow-chart can be transformed into a structured one. This formed the basis of Dijkstra's paper "Go To Statement Considered Harmful" [DIJKSTRA68].
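To make the theorem concrete, here is a small illustrative sketch (Python, not from the original article): the same computable function written first as a goto-style flowchart simulated with explicit state labels, then restructured using only the three constructs above.

```python
# Unstructured version: a goto-style flowchart simulated with explicit
# state labels -- control jumps around rather than nesting in blocks.
def collatz_steps_unstructured(n):
    steps, state = 0, "check"
    while True:                      # dispatcher loop emulating goto
        if state == "check":
            state = "done" if n == 1 else ("even" if n % 2 == 0 else "odd")
        elif state == "even":
            n //= 2; steps += 1; state = "check"
        elif state == "odd":
            n = 3 * n + 1; steps += 1; state = "check"
        else:                        # state == "done"
            return steps

# Structured version: the same function expressed with only the three
# Bohm-Jacopini constructs (sequence, selection, iteration).
def collatz_steps_structured(n):
    steps = 0
    while n != 1:                    # iteration
        if n % 2 == 0:               # selection
            n //= 2                  # sequence
        else:
            n = 3 * n + 1
        steps += 1
    return steps
```

The dispatcher loop in the first version is, incidentally, the very device used in constructive proofs of the theorem: any flowchart can be folded into a single while loop over a state variable.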
There is still a debate on whether we should allow unstructured programming languages or not. Nevertheless, the facts are:
most students in the world are taught structured programming;
the most widely used programming languages are (non-strict) structured programming languages;
most unstructured programming languages have introduced some structured constructions (BASIC, COBOL, FORTRAN).
So in general, most programmers do focus on structured programming but sometimes use unstructured constructions (goto, jump, break, exceptions) for various reasons (mainly readability, maintenance, sometimes performance).
Business Analysts Write Parallel and Unstructured Processes
A business analyst (BPM end-user) has to deal with the real world1, which is in essence not only unstructured but highly parallel. This has two implications:
BPM end-users are usually not computer engineers, neither computer scientists: they design business processes using flow-chart notation (this is the most natural) for them, hence, unstructured (and parallel, this is important);
in the face of parallelism, unstructuredness is more expressive than structuredness.
Point 2 is of great importance and has been formally proven by Kiepuszewski et al. [KIEPUSZEWSKI00]. The fact is that there are parallel unstructured, workflows that cannot be expressed into a parallel, structured one. Actually, those cases are quite simple to find. Consider this example using the BPMN notation (created with the Intalio BPMN designer):
In a tool that uses BPEL as its underlying format, a separate pool has to be created in order to validate that diagram. We will discuss this problem later, but the main point here is to focus on the pool called 'Employer' and the 6 activities it contains. In this process, a new employee can only be provided with a computer when both an office has been set up and an account created in the information system from the human resource database. When both the computer and the medical check have been performed, the employee is ready to work. Of course, you may want to model this simple process differently, but the point I want to make here is that you cannot define a structured parallel workflow that is equivalent 2 to this one; I mean, you will never be able to! We will use this simple example throughout this document.
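Since the figure itself cannot be reproduced here, the sketch below encodes one plausible reading of the 'Employer' pool as a plain dependency graph (illustrative Python; the activity names are taken from the BPEL listing further down, but the exact edges are an assumption). Each activity fires once all of its predecessors have completed:

```python
# Hypothetical encoding of the employer process as a dependency graph.
# The edges are assumed for illustration, not read from the real diagram.
DEPS = {
    "Fill HR Db":       {"Employee Arrival"},
    "Set up Office":    {"Employee Arrival"},
    "Create Account":   {"Fill HR Db"},
    "Medical Check":    {"Fill HR Db"},
    "Provide Computer": {"Create Account", "Set up Office"},
    "Ready to Work":    {"Provide Computer", "Medical Check"},
}

def run(deps):
    """Fire every activity whose predecessors have all fired (AND-joins)."""
    done = {"Employee Arrival"}          # the start event has occurred
    order = []
    while len(done) < len(deps) + 1:
        ready = [a for a, pre in deps.items()
                 if a not in done and pre <= done]
        if not ready:
            raise RuntimeError("deadlock: remaining joins can never fire")
        for a in sorted(ready):          # all enabled branches run in parallel
            order.append(a)
            done.add(a)
    return order

print(run(DEPS))
```

Under this reading, 'Provide Computer' synchronizes a branch of the fork at 'Fill HR Db' with a branch of the outer fork at 'Employee Arrival' — a crossing join that no nesting of parallel blocks (BPEL flow/sequence) can reproduce without duplicating nodes or adding variables.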
As a first conclusion from this study we have:
developers naturally write their programs using sequential structured constructions (blocks);
BPM users naturally design their processes using unstructured and parallel constructions (graph);
unstructured and parallel workflows are more expressive than structured parallel ones.
There are workflows that BPM users will design that you definitely cannot express equivalently as a structured parallel workflow. Worse, also in [KIEPUSZEWSKI00], the author shows that even when the transformation from a parallel unstructured workflow to a parallel structured one is possible, it requires the addition of several variables and/or nodes, so that the final result is almost unreadable to the end user. We will return to this readability problem shortly.
Transforming BPMN to BPEL
In the article [OUYANG06], the authors propose an automatic translation from BPMN to (readable) BPEL. They define a subset of BPMN because of unclear semantics in the specification. Hence, the OR Gateway and Error Intermediate Events are simply not considered in their paper. This is a problem if the process being designed contains multiple end events, since these are not handled by the transformation tool, and the standard way to convert a workflow with multiple end events into one with a single end event is by using OR Gateways. The algorithm basically works like this:
Try to find well known patterns that map directly to BPEL (sequence, flow, pick, while, ...) in the graph. For each of these, replace the component by a simple task containing the mapped BPEL code.
Then, try to find some quasi-structured component and transforms them in such a way that most of the component is mapped directly to a BPEL structure.
Then, search for acyclic BPMN sub-models (containing only sequence flows and parallel gateways). For these, use BPEL control-links.
Finally, BPEL events are used for the rest.
The result is an "as readable as possible" equivalent BPEL process. Note that using events in BPEL we can always transform BPMN to BPEL. The problem is that the generated code is not readable at all. The conclusion is that an algorithm for transforming any BPMN diagram into an equivalent 3 BPEL process that is "as readable as possible" 4 does exist.
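A rough illustration of step 1 (not the authors' implementation) is a reduction loop that repeatedly folds well-structured components into single nodes; the hypothetical Python fragment below handles only the sequence pattern:

```python
def collapse_sequences(nodes, edges):
    """Repeatedly fold edge a -> b into one node whenever b is a's only
    successor and a is b's only predecessor: such a component maps
    directly to a BPEL <sequence>."""
    # nodes: set of activity names; edges: set of (src, dst) pairs
    changed = True
    while changed:
        changed = False
        for a, b in sorted(edges):
            succs_a = {t for s, t in edges if s == a}
            preds_b = {s for s, t in edges if t == b}
            if succs_a == {b} and preds_b == {a}:
                merged = f"sequence({a}, {b})"
                nodes = (nodes - {a, b}) | {merged}
                # Rewire remaining edges onto the merged node.
                edges = {(merged if s in (a, b) else s,
                          merged if t in (a, b) else t)
                         for s, t in edges if (s, t) != (a, b)}
                changed = True
                break
    return nodes, edges
```

On a diamond (A→B, A→C, B→D, C→D) this rule fires nowhere; a separate flow rule would be needed, and components that match no structured rule are exactly those for which the algorithm falls back on control-links and, ultimately, events.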
The Intalio Use Case
Note that this is certainly not the algorithm used by the Intalio BPM v2.0 solution for their transformation from BPMN to BPEL as shown by the simple example below. Taking the previous parallel unstructured BPMN diagram, the Intalio solution transforms it into the BPEL process, some of which is shown below:
<?xml version="1.0" encoding="UTF-8"?>
<bpel:process xmlns:
  <bpel:import
  <bpel:import
  <bpel:partnerLinks>
    <bpel:partnerLink
  </bpel:partnerLinks>
  <bpel:variables>
    <bpel:variable
  </bpel:variables>
  <bpel:sequence>
    <bpel:receive
    </bpel:receive>
    <bpel:flow bpmn:
      <bpel:sequence>
        <bpel:empty bpmn:
        <bpel:flow bpmn:
          <bpel:sequence>
            <bpel:empty bpmn:
          </bpel:sequence>
          <bpel:sequence>
            <bpel:empty bpmn:
          </bpel:sequence>
        </bpel:flow>
      </bpel:sequence>
      <bpel:sequence>
        <bpel:empty bpmn:
        <bpel:empty bpmn:
      </bpel:sequence>
    </bpel:flow>
    <bpel:empty bpmn:
  </bpel:sequence>
</bpel:process>
To get a better picture of the transformation output, we opened the process into the Eclipse BPEL designer:
For some reason, labels for the activities 'Fill HR Db', 'Medical Check', and so on are missing, but in any case we can see from the BPEL source code that the BPMN activity 'Employee Arrival' has been transformed into a 'Receive' BPEL operation. To the business analyst, it is strange to now see 7 activities ('Receive' and 6 other 'Empty' activities) while our original process contained only 6. Looking at the BPEL source code the puzzle is solved: we can see that the activity 'Provide Computer' has been duplicated. In some ways, this is good for the employee: they will get two computers in their office!
Whether the Intalio BPMN2BPEL transformation algorithm produces readable BPEL or not is not the issue here: the problem is that the transformation is simply wrong. One can hardly imagine the result for diagrams produced by professional BPM designers focusing on real business processes, which are highly parallel and unstructured in essence.
The Readability problem
The claim in [AALST_UNKNOWNDATE] is that BPEL is not readable by the end-user, so there is a need for a high-level language in which processes are designed. Typically though, runtime information is required before the process can actually be executed. How will this be entered into the resulting BPEL code? One could argue this is the reason it is important to generate readable BPEL code. In the case where all the information is entered directly into the editor/designer, at least for debugging purposes the BPEL code should be as readable as possible, where readable means: using BPEL's direct, straightforward structures (sequence, flow, pick, wait, etc.) as much as possible.
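To make this notion of readability concrete, here is a minimal, hand-written sketch of what a structured BPEL rendering of the sample process could look like, using only direct constructs. This is an illustration, not output from any of the tools discussed: the process name, activity names and the omitted messaging details (partner links, variables, operations) are all assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical, hand-written sketch of the structured part of the
     sample process, using only direct BPEL constructs (sequence/flow).
     Activity names follow the BPMN diagram; partner links, variables
     and other messaging details are deliberately omitted. -->
<bpel:process name="EmployeeArrivalSketch"
    xmlns:bpel="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <bpel:sequence>
    <bpel:receive name="EmployeeArrival" createInstance="yes"/>
    <bpel:flow>
      <!-- the three parallel branches of the diagram -->
      <bpel:empty name="FillHRDb"/>
      <bpel:empty name="MedicalCheck"/>
      <bpel:empty name="ProvideComputer"/>
    </bpel:flow>
    <bpel:empty name="ReadyToWork"/>
  </bpel:sequence>
</bpel:process>
```

A process written in this style reads top to bottom, much like the BPMN diagram it came from.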
On this readability issue, let's introduce the Eclipse JWT sub-project, whose aim is to provide a toolkit for workflow management (designer, transformations, simulations and connections to engines). JWT currently uses the UML Activity Diagram notation (UML-AD for short) for designing workflows. UML-AD is strictly equivalent (in terms of expressive power) to BPMN (see [WHITE_UNKNOWN_DATE] for details). So, using the UML-AD notation, we can represent the previous BPMN diagram in JWT:
JWT has been developed to be extensible, and provides different transformation plugins. One of them is a UML-AD2BPEL transformation provided as a research project from the University of Augsburg. The BPEL transformation plugin outputs a BPEL-WS-1.1 document that is 518 lines long (provided at the end of this document). Note that neither the Eclipse BPEL Designer nor the Intalio Designer were able to properly display this BPEL file. We used NetBeans for that purpose. The diagram produced is too complex to be presented here (see the resources section for a partial rendering of the diagram). The JWT2BPEL transformation tool uses BPEL events extensively in order to get an equivalent BPEL representation. For the courageous readers who tried to read and understand the resulting BPEL file, it should be obvious that this is very hard.
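For contrast, a rough sketch of the event-based style such generators tend to emit is shown below. It is illustrative only: the scope name, partner link and operation are invented, and the messaging attributes a real onEvent handler requires are omitted. The point is that each BPMN edge becomes an internal event, so the control flow is scattered across handlers instead of being visible in a sequence/flow skeleton.

```xml
<!-- Hypothetical sketch of the event-heavy style produced by such
     generators: every BPMN edge becomes an internal message, and the
     control flow is reassembled from event handlers. All names here
     are illustrative, not taken from the generated file. -->
<bpel:scope name="UnstructuredPartSketch"
    xmlns:bpel="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <bpel:eventHandlers>
    <!-- fires when the 'Fill HR Db' branch completes -->
    <bpel:onEvent partnerLink="internalLink"
                  operation="edgeFillHRDbDone" variable="token">
      <bpel:scope>
        <bpel:empty name="MedicalCheck"/>
      </bpel:scope>
    </bpel:onEvent>
    <!-- ...plus one handler per remaining edge of the diagram... -->
  </bpel:eventHandlers>
  <bpel:empty name="mainBody"/>
</bpel:scope>
```

Reading such a process requires mentally chasing events across handlers, which is exactly the readability problem discussed above.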
So a simple parallel and unstructured process expressed in a pure workflow notation such as BPMN or UML-AD is hardly representable in a "readable" BPEL format. This is a general fact, not specific to the simple process we are discussing. The situation is worse when BPMN-to-BPEL round-trip engineering problems are taken into consideration: (from Wikipedia) "generating BPEL code from BPMN diagrams and maintaining the original BPMN model and the generated BPEL code synchronized, in the sense that any modification to one is propagated to the other". Definitely, using BPEL as the final format for executing business processes expressed in a natural workflow notation such as BPMN is asking for trouble.
Note that making a BPMN diagram out of a BPEL process seems to be much easier than the reverse: transforming structured elements to unstructured elements is straightforward. Note that such a transformation --- BPEL2BPMN --- is provided in Intalio in the Java class available here. It seems to be in the core of STP and is (currently) not fully BPEL compliant, judging by the comments in the class:
/*
 * Very basic sample that generates BPMN out of a BPEL file.
 * The BPEL parsed is a small subset of the bpel spec:
 * scope, assign, receive, reply, invoke, flow, sequence.
 * ...
 */
Still, importing the BPEL file generated by the Intalio Designer itself does not work for obscure reasons. Nevertheless, the round-trip problem is about synchronizing two quite different representations of the same process: one in BPMN and one in BPEL.
Conclusion
First, we have clarified a common misunderstanding: BPEL is not a structured language, but it is based on a structured language (block-based). In some ways, BPEL is much closer to a standard language such as Java than to a natural workflow notation such as BPMN (which is graph-based). Until now, programmers have dealt directly with their language. Integrated Development Environments are used to simplify several recurrent steps such as compiling, refactoring, testing and so on. But programmers “speak” their language directly. We claim that it should be the same with BPEL. An IDE can only simplify programming (note that we don't use the term “designing” here). But BPEL programmers will have to “speak” BPEL in order to use it and make something useful out of it. The question of whether BPEL is “speakable” or not by general technicians --- as Java can be --- is out of scope of this article, but is definitely an interesting question.
For the business analyst, however, it is clear that BPEL is not user-friendly. BPEL is hard to read, hard to learn, hard to implement, and most of all --- as this is the major end-user concern --- hard to hide. We have already noticed that when creating the "Employer" pool in the simple example used in this document, we were constrained to create another "non-executable" pool in order to generate the BPEL file. Much other BPEL-related baggage is present in the Intalio designer that is irrelevant from the point of view of the BPMN analyst: e.g. namespaces, web-service invocations, and XML data types, amongst others.
Therefore, we consider the BPMN notation the only currently viable solution for Business Analysts. Nevertheless, many execution details, absent from the BPMN specification --- and unknown to the analyst at design time --- will have to be specified before the process can actually be executed. This information is usually technical in nature and site-dependent (e.g. mail server address, task repository) or implementation-dependent (e.g. web service, J2EE service or .NET service). It is therefore of great importance that the process skeleton on which the technician will enter the environmental context is both equivalent to the original BPMN process in terms of execution semantics (bisimulation), and easy to read, to ensure that the modifications made do not change the process behaviour.
Transformation from BPMN to (readable) BPEL is quite hard to implement, and produces --- when correct --- hardly readable code. Moreover, the round-trip problem is even harder; unless it is resolved, BPEL remains a very difficult target for the output of a process designed by a real business analyst.
Therefore, we may wonder why BPMN is transformed to BPEL at all, since there exists a graph-based standard that maps directly to BPMN constructs --- namely XPDL v2.0. With this mapping, XPDL v2.0 becomes the natural BPMN persistence file format. Moreover, it specifies behaviours that were previously only available in BPEL, such as Web Service invocations and compensations. Of course, one may claim that XPDL 2.0 lacks some execution specifications that make it unsuitable for direct execution. We believe that using the BPEL semantics wherever XPDL is under-specified makes room for an engine that can be fully BPMN v2.0, XPDL v2.0 and BPEL compliant. This is how the Bonita and Orchestra teams will implement their next generation of BPM engine. But that is another story that requires an article of its own... Stay tuned!
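As an illustration of why the XPDL mapping is direct, a hypothetical XPDL 2.0 fragment for the sample process might look like the following; the ids and names are invented, and only the control-flow part is shown. Each BPMN node maps to an Activity and each sequence flow to a Transition, so no structure recovery is needed.

```xml
<!-- Hypothetical XPDL 2.0 sketch: activities plus explicit transitions
     map one-to-one onto BPMN nodes and edges. Ids, names and the
     omitted package/pool details are illustrative assumptions. -->
<WorkflowProcess Id="employeeArrival" Name="Employee Arrival Process">
  <Activities>
    <Activity Id="a1" Name="Employee Arrival"/>
    <Activity Id="a2" Name="Fill HR Db"/>
    <Activity Id="a3" Name="Provide Computer"/>
    <Activity Id="a4" Name="Ready to work"/>
  </Activities>
  <Transitions>
    <!-- each BPMN sequence flow becomes one Transition element -->
    <Transition Id="t1" From="a1" To="a2"/>
    <Transition Id="t2" From="a1" To="a3"/>
    <Transition Id="t3" From="a2" To="a4"/>
    <Transition Id="t4" From="a3" To="a4"/>
  </Transitions>
</WorkflowProcess>
```

Because the graph structure is preserved as-is, round-tripping between BPMN and such a format is a matter of serialization rather than structure transformation.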
I would like to thank the Bonita & Orchestra team for their help and support during the writing of this article, and especially Miguel Valdes-Faura for his review and suggestions. I would also like to thank Gavin Terrill for his help on corrections and final touches.
Pierre Vignéras
Bull, Architect of an Open World™
*BPM Team*, Bull R & D
Resources
1. The JWT2BPEL transformation outputs this BPEL file.
2. Using NetBeans to get a graphical representation of the BPEL process presented in the previous resource, we get this diagram. This diagram represents the whole process, with the central part collapsed (the node with a '+' symbol). If we expand this node, we get an expanded diagram. On that diagram, we can see two nodes with a '+' symbol inside: they are, respectively, the 'Employee Arrival' activity and the 'Ready to work' activity of the original BPMN diagram. From the 'Employee Arrival' node, we see a BPEL scope labelled C5, to which events are attached; they are shown on its right side. These events are used to implement in BPEL the parallel and unstructured flow specified in the original BPMN diagram.
Why BPEL is not the holy grail for BPM
by
Stephen Baudendistel
Re: Why BPEL is not the holy grail for BPM
by
Gavin Terrill
Nice try.
by
Alex Boisvert
Now think of BPMN as a visual language to define process logic.
Most would agree that VM bytecode is mostly relevant to language designers, virtual machine experts and programming enthusiasts trying to understand how high-level languages map to low-level instructions for correctness and performance purposes. Just like bytecode, BPEL was not designed to be read or written by people directly. The fact that BPEL is more readable than bytecode is simply a consequence of choosing XML as the interchange format; it does not imply it was designed to be read by business people.
The main intent behind BPEL is to standardize the "instruction set" and semantics of the process virtual machine, such that:
* process definitions can be exchanged between different organizations and run the same way on different VM implementations;
* different languages (incl. BPMN) can produce the same bytecode and run on the same virtual machine;
* most importantly, a larger ecosystem of knowledge, know-how, tools and solutions can develop around a standard process infrastructure.
This article simply misrepresents BPEL and its value to the software industry. If the authors' aim is to increase the mindshare of XPDL, they might want to consider discussing its relative merits instead of spreading misinformation about BPEL.
Alex Boisvert
Director of Product Development
Intalio inc.
PS: Thanks for reporting an important bug in our BPMN-to-BPEL translator. We'll be sure to fix it in future versions of our software.
Re: Nice try.
by
John Mettraux
John
Re: Nice try.
by
Tom Baeyens
If you look at BPEL from the perspective of scripting a new coarse-grained service out of finer-grained services, then it makes perfect sense, because in that scenario all the services you want to consume are already available.
BPEL has XML-based services as its building blocks. XML-based processing technologies (XPath, XQuery) are in kindergarten compared to what you can do in languages like Java. So for every tiny bit of processing you want to do in a BPEL process, you need to build this functionality in a programming language, expose it as a service, and then call that service from the BPEL process. This creates a huge overhead for the developer.
As explained in 'BPM as a discipline' (see my article Seven Forms of BPM), software development is done in silos (aka software projects). First, in this scenario, the business process diagram provides the structure, but typically a lot of logic in the business process needs to be coded and associated with the business process. That is where BPEL's foundations on WSDL are a PITA, because all of your business logic will have to be written in a programming language and then exposed and consumed as a service. Second, software projects are a combination of processes, domain models and usually some UI. So processes need to be integrated with the software development projects. I believe that is where the BPM bottleneck has been to this date. Some think that BPM-as-a-Service will bring light at the end of the tunnel, but I think that only makes the integration with a software project more difficult. To integrate process technology into an application or software project, WSDL (read: web services) is not the most appropriate way to bind these together.
That's why we believe that a Process Virtual Machine should be built on Java technology rather than WSDL (read: web service) technology. From Java you can easily call other Java code, but also web services when that is needed. We also believe that business processes are most often related and embedded into an application. That's why embedding those processes in a Java project usually makes more sense than extracting them and putting them into the Enterprise Service Bus (ESB).
And as a side note: vendors like yours now claim that BPMN-to-BPEL is the way to go. But wasn't BPEL intended to provide the business-level diagram in the first place? So now you translate graphical BPMN to another graphical BPEL process. I don't think the BPEL diagram makes sense. If you look at it as byte code, it should be transparent. In that perspective, it should not matter whether the "byte code" is BPEL, Java, byte code or assembler.
Our Java-based Process Virtual Machine is much more capable of providing that byte code layer. We already have BPEL, XPDL and jPDL running on it. And XPDL and jPDL are much closer to BPMN than BPEL is.
Tom Baeyens
JBoss jBPM
Re: Nice try.
by
Pierre Vignéras
Thanks for reporting an important bug in our BPMN-to-BPEL translator. We'll be sure to fix it in future versions of our software.
clearly shows that you misunderstood the post. Even if you fix that bug, it does not mean that the resulting BPEL file will be readable; the problem will basically remain. As shown by the JWT UML-AD2BPEL transformation tool --- which has the merit of being correct on this simple example, by the way --- the generated BPEL file is not readable. And the round-trip problem makes things even worse. Claiming that BPEL "was not designed to be read by business people" is nonsense: what does the 'L' stand for in BPEL? Language, isn't it? So BPEL would be the first language not designed to be readable?
Regards.
PS: There are other important bugs (such as this one) that we have identified in the Intalio BPMN2BPEL transformation tool (and in the Intalio solution as a whole). But we cannot really contribute, since the Intalio source code is not available (which is rather strange for a so-called open-source solution):
dev.eclipse.org/newslists/news.eclipse.stp/msg0...
jorgechollet.wordpress.com/2008/08/11/so-wheres...
Re: Nice try.
by
Robert Morschel
PVM seems to overlap conceptually with SCA, or have I missed the point?
Robert
soaprobe.blogspot.com
Re: Nice try.
by
Mickael Istria
Re: Nice try.
by
Tom Baeyens
Hi Robert,
+1
spot on.
so you can leave out the "but I may be wrong" part, Mickael :-)
Re: Nice try.
by
Tammo van Lessen
Anyway, it's actually neither Intalio's fault, nor BPEL's, nor BPMN's, nor a wrong translator. The actual problem is rooted in the problems that graph-based modeling imposes. I'm personally a big fan of graph-based modeling; however, it leads to specific problems like Lack of Synchronization and deadlocks when it comes to execution. So, to deal with this there are a few possibilities: a) ignore the problem (this is AFAIK the jBPM, PVM, ... approach), which is perfectly fine as long as people make their models right; b) rely on the considerable current research efforts to detect such problems in arbitrary graphs (a solution can be to combine a) with such soundness checks); and finally c) make sure that the language has a defined and foolproof execution semantics. The latter is the BPEL way.
I understand that business people don't want to deal with such problems. However, when they notice that their processes get stuck in a deadlock, they would be happy if there were something like DPE, be it by knowing it and directly modeling BPEL, or by having a tool that takes care of this and ensures that the BPMN model (with rather unclear semantics) is transformed into a foolproof and properly executable process model (e.g. BPEL). Who would care whether it's still "beautiful BPEL" as long as it works as expected? Compare it to C++ -> Assembler: I'm pretty sure there are Assembler pros who claim that such generated code also looks strange. If you take a closer look at reference process models you will find that most of them are not fully correct. In this case I definitely prefer the foolproof solution. I don't believe that both can be achieved without making compromises. Let's see what the BPMN 2.0 tradeoff brings.
Bashing around would be another option ;)
Regards.
Re: Nice try.
by
Alex Boisvert
@Tom:
1).
2) You are quite right with your comment about language integration. WSDL is both a strength and a weakness for BPEL. On one hand it forces process designers to think in terms of high-level service building blocks (so-called 'programming in the large'); on the other hand BPEL left out a fairly critical part of the integration story. I'm one of those people who believe this was a good choice. Specifying how BPEL 2.0 would integrate with programming languages as part of the same effort would have been premature. It was a better choice to leave this out of the scope of the specification and allow time for experimentation. IBM's own attempt at this was BPEL4J. I think it's fair to say now that it was a good shot at the problem but didn't really succeed. Our own experimentation, within Apache Ode, has led us to SimPEL and, in particular, the integration of JavaScript (incl. E4X extensions) with the BPEL language model into a friendly process scripting language. Time will tell if this is a winning combination, but so far the feedback has been very positive.
Re: Nice try.
by
Tammo van Lessen
I don't agree. I consider graphical modeling one of the most powerful features of BPEL, as it makes it less a (parallel) programming language and rather introduces means that business people can easily deal with (sticks and boxes). It's, for instance, pretty hard to refactor a block-structured model (because you'd need to change the nesting) but pretty easy if you can just bend links. In any case it's a matter of tooling, and unfortunately current modeling tools are not as good as needed. I believe that SimPEL, BPELscript, ... can surely help to make BPEL more accessible to more people, but it targets developers only. To reach business people, graphs _plus_ very good tooling is IMO required and capable of doing the job.
Regarding the WSDL issue: I also think that tools should try to hide the WSDL information from modelers as much as possible, either by providing high-level means of an SOA, allowing users to easily select registered services for partner links, by employing semantic web technologies (e.g. BPEL4SWS), or by removing the dependency on WSDL completely and moving the binding to deploy time (the "BPEL light" approach). When BPEL is just about defining data and control flow between message exchanges with well-defined execution semantics, I believe that BPEL provides the armamentarium for business modelers.
BPMN XML Serialization Format
by
Bernd Eckenfels
I personally think that BPMN is quite ambiguous, missing the simple executability of BPEL. However, UML serialization formats (which have the same lack of semantics) have proven to be useful. And it looks like the submissions to OMG try to define execution semantics as well (OMG, yet another XML language for execution?!)...
However, an interchange format for BPMN models is highly desirable.
Greetings
Bernd
Re: Nice try.
by
Stefan Tilkov
So, is this now the InfoQ "Exclusive Content" Intalio bashing? Who's next? Who's sponsoring?
I'm not entirely sure what you're suggesting, but vendor content is clearly marked as such on InfoQ. There is much more opinion than truth in all things SOA and BPM, but we're trying to make very sure that when someone expresses their views, they do so under their own name.
We'd be happy to consider articles for publication that make the case for the opposite view.
Re: Nice try.
by
Tom Baeyens
...leads to specific problems like Lack of Synchronization and Dead Locks when it comes to execution. So, to deal with this problem there are a few possibilities: a) ignore the problem (this is AFAIK the jBPM, PVM,... approach)
We don't ignore it. We see executable processes as similar to any other form of software, and hence they need a test suite. A test suite shows that the executable process actually does what you want it to do, whereas static analysis techniques can only raise a red flag and ask the modeler: "Are you sure?"
In BPM as a discipline, writing tests for executable processes is the only option. Dead Path Elimination (DPE) will not be sufficient. Non-technical people will not be able to produce software, but they are able to produce models on which the executable process models can be based. Once a diagram becomes an executable process, it becomes software and needs to be tested as such, preferably in combination with the rest of the application.
Only when a specific purpose process language (kinda like a process DSL) is very limited in scope, then it might be possible to make sure that non technical people can actually model executable processes. But in those cases you have to simplify the modelling capabilities to such a low level that you can only handle very specific things with it. Like e.g. specifying approvals in a document management system.
Re: Nice try.
by
Robert Morschel
Robert
soaprobe.blogspot.com
Not the only way to go
by
Alexis Brouard
You're right to say that BPEL is not a structured language and that BPEL programmers will have to speak BPEL (both are obvious from my point of view). I also agree that BPEL is not user-friendly (but was it designed to be?).
However, for a BPEL programmer, I think it is neither worse nor better than any other (programming) language: in every language you can do beautiful things (in your discussion, read "beautiful = readable") as well as very ugly ones, depending on the principles and the expertise you have in it.
You also mention the need to create a non-executable pool alongside the "Employer" one to generate a BPEL file; this is more a BPMN constraint than a BPEL one, so it's a bit unfair to take this as an example to make BPEL look dirty...
You also say that you consider BPMN the only currently viable solution for Business Analysts, even if the model should be reviewed by technical people to add specific details (which is more than normal as, unfortunately, IT is complex and needs specific skills to be successful).
However, I think BPMN (or UML) is too complex for non-IT people (I mean people without an "algorithm-structured mindset") such as business analysts.
When you don't think algorithmically, you just can't do BPMN (or UML or any programming language), as you won't be able to figure out the tokens that flow from gateways to branches and from (unpredictable) events to activities.
About bisimulation and round-tripping: you're absolutely right! There's no easy way to do them with BPMN and BPEL.
But, as far as I know, I can't easily check bisimulation nor easily do round-tripping with, let's say, UML and Java (and this even if I respect the MDA philosophy and standard!).
Does this mean this is a wrong way to go?
No, this just means we still have work to do to improve our industrialization tools and processes.
But we should also be careful not to hope for the myth of the almighty button which, when clicked, produces all the complex systems we've designed and optimizes the code (making it readable) at all steps of the process! If you want to reach efficiency, you need to tune (code, designs, hardware) and there is no way to do that for all cases.
There is a way to do it for a specific context because the context is... specific (i.e. it does not aim to cover all cases).
Your last part about XPDL is right, but where is the contradiction in switching from BPMN to BPEL without going through XPDL?
XPDL is a format to exchange process models between modeling tools. So storing in XPDL a process model drawn with BPMN is completely normal (for instance, Tibco Business Modeler, a free tool quite competitive with Intalio Designer, does it well). By doing this, you can open your process model in another modeler (and maybe even in a modeler that does not support BPMN, if the XPDL file does not abuse extension sets).
Where I agree with your claim is that there is no XPDL-to-BPEL transformer (or I don't know about it), and maybe this could be a way to go to generate BPEL files.
But this does not mean the other ways are wrong...
To conclude my too-long speech: thank you for saying, claiming and proving that BPEL is not alone in the BPM world (especially in modeling) and that BPMN + BPEL is not the only way to go.
But please, do not invalidate those ways, as they are valid (in specific contexts); they can be improved on several subjects (tools included, but not only the Intalio ones; also the IBM, Tibco, webMethods and SAP ones ;-) ).
Alexis
Re: Nice try.
by
Tammo van Lessen
BPEL was not intended to provide graphical modeling. [...]
I don't agree. [...]
Being able to read helps. Graphical modeling != graph-based modeling. So, we're in sync :) Sorry for the noise.
My BPM advice: Listen to Tom Baeyens / use jBPM!
by
Brett Gorres
If you're into modeling (as I am):
As an Eclipse user, I appreciate the GPD (graphical process designer) and the XML code it maintains (bi-directionally) which is very concise and human-readable. (I have tried many workflow engine / finite state machine designers / generators, and the jBPM GPD is a gem.)
BPEL vs jPDL: In terms of flexibility and extensibility:
I believe Java developers and architects will be frustrated if they commit too much to a SOAP / web-services-dependent strategy such as BPEL. The beauty (for me) of jBPM+jPDL is that it will interface with more heavyweight systems but does not require them. Also note jBPM's road map, which seems to thoughtfully embrace standards that help keep things improving in the right direction.
What a joy to have testable, flexible, lightweight solutions, even at the heart of enterprise systems. I can do test-driven development and code to interfaces instead of heavyweight specs.
Re: BA involvement
I know business analysts are able to work with jBPM diagrams, and (if you believe the book "Business Process Management with JBoss jBPM: A Practical Guide for Business Analysts") a BA can even "drive" jBPM, not that I am into that. (I'm an agile developer / architect sort of person, but maybe it makes sense to some of you out there.)
I am grateful that JBoss acquired jBPM and I hope Tom Baeyens keeps up the good work. (I attended his lecture at Java One in 2005 and occasionally read his blog. Tom knows his stuff!)
Looking forward to jBPM version 4.0 in 2009. As a consultant for a big bank and an entrepreneur, I'm thankful we already have jBPM today and I know it will continue to be a good option:
Not in spite of other specs, but in concert with them.
I like to advocate for good open source when I see it. jBPM, rock on!
-Brett
One caveat, it took me a while to gain the experience and confidence to be able to confidently promote jBPM at a large client. But that's because I had to do my homework to weed through examples, user guides, optional downloads, etc. in order to build real solutions with it and go through rigorous comparisons for my employer. I'm glad I did.
Recommended reading:
a few jBPM-related chapters in Open Source SOA (an early-access Manning title), which will also help explain how jBPM fits into the larger context of SOA.
People == Asynchronous Services
by
John Reynolds
The basis for BPM must reflect this reality... or (as we see with BPEL for People) it just feels like a kludge.
Re: Why BPEL is not the holy grail for BPM
by
Stephen Baudendistel
Re: Why BPEL is not the holy grail for BPM
by
Stephen Baudendistel
Re: People == Asynchronous Services
by
Tammo van Lessen
BPEL was designed to choreograph synchronous autonomous services. BPM is generally concerned with asynchronous (usually human-powered) services...
What exactly do you mean by "synchronous autonomous services"? You can of course render and/or invoke synchronous services; the main idea, however, is to orchestrate message exchanges, which are inherently asynchronous. Using synchronous request/reply is just one option in BPEL.
Your example done without BPEL
by
K Swenson
kswenson.wordpress.com/2008/10/29/directly-exec...
-Keith Swenson
Re: Nice try.
by
Bernd Ruecker
I want to add some thoughts from my working/consulting experience.
I see BPEL as a very good thing in that it brought up a lot of cool concepts (e.g. the sophisticated correlation mechanism) and started necessary discussions (e.g. about compensation actions). BUT: its use-case value for real-life projects is still limited. Basically, it makes sense if you want to build a composed service out of a couple of other web services. So I see it as a scripting language for web services. But for business processes?
First of all, as shown nicely in this article, the BPMN-to-BPEL mapping is hard and the resulting code is not readable. Readability of BPEL is not important, I heard here? So why use BPEL at all? If the graphical representation or the readability doesn't matter, hey, then I would propose to generate Java or .NET code out of BPMN. There I have hordes of developers who know how to deal with it. Where is the added value of BPEL?
The biggest problem in real life is that BPEL and the whole WS-stack are too complicated. It is hard to understand and it is hard to find the right people for your project. If you don't have the right people, your project will fail. That should be good news for us consultants, but I prefer to enable companies to do their stuff on their own rather than making myself irreplaceable.
This brings me to a last statement: BPEL is too tool-centric! Good for vendors, bad for users. Without good tooling you cannot succeed with BPEL, because it is simply too complex. Reminds me somehow of EJB 1 & 2.1: good ideas, conceptually right, but not really usable. Ah yeah: and SimPEL, isn't that a move away from the standard? At least I see it as proof that BPEL is too complex!
Indeed, I have seen many more successful jBPM projects, even very big ones, even company-wide SOAs. There you deal with Java a lot (you're not limited to it; you can still access your ESB or web services easily), so you have people who know their job. And it is not that hard to understand; no expensive tooling necessary.
We will see what time brings, but I could imagine this direction: one BPMN model for business, a second technical BPMN model, and "code" generation out of that. Maybe directly to Java or the like, or better to a generic state machine, which is Tom's idea of the Process Virtual Machine (jBPM PVM).
Cheers
Bernd
Re: Nice try.
by
Oliver Kopp
If the graphical representation or the readability doesn't matter, hey, then I would propose to generate Java or .NET code out of BPMN. [...] Where is the added value of BPEL?
SimPEL and BPELscript
by
Oliver Kopp
Ah yeah: and SimPEL, isn't that a move away from the standard? At least I see it as proof that BPEL is too complex!
SimPEL is definitely a move away from the standard. One reason is that the correlation mechanism is different from BPEL's.
If you want to have both, a more readable syntax and a bi-directional mapping to BPEL, you should have a look at BPELscript.
Re: Nice try.
by
Miguel Valdes Faura.
This is not a benefit of the BPEL standard but one related to the way the BPM engine is implemented to run a BPM process (written in BPEL or any other language).
Any well-coded BPM engine in Java should give control back after each step in a process...
Miguel Valdes
Bonita & Orchestra teams
JWT2BPEL BPEL file
by
Florian Lautenbacher
All of these modeled artifacts need to be considered when BPEL code is to be generated from the model. Therefore, for each action a new bpel:Scope is generated, with some initial web services and final web services that are always invoked. These web services are part of an integration framework that has been developed as part of the AgilPro project. The Workflow Code Generation Framework is currently adapted to this framework, but since it is template-based, it can easily be changed to fit other needs and other process engines (without an integration framework), too.
The provided User Manual simply describes how the templates can be changed. Nevertheless, I agree with Pierre that transforming a process model either from BPMN or JWT to BPEL is a non-trivial task.
Regards,
Florian
Project co-lead of the Workflow Code Generation Framework of the
University of Augsburg, Germany | https://www.infoq.com/articles/bpelbpm/ | CC-MAIN-2017-17 | refinedweb | 7,205 | 62.07 |
is it bad practice to use _ in template naming
such as template_name vs. templateName vs. template-name
Question regarding template naming convention
is it bad practice to use _ in template naming
You are already not allowed to use dash
- in your template names, so it leaves you with few options. I personally prefer
templateName or if I want to pseudo-namespace them, I do something like
MNtemplateName where MN is Module Name.
from MDG’s Meteor Style Guide
Use camelCase for identifiers
Functions should be named like doAThing, not do_a_thing.
Other variable names get the same treatment: isConnected, not is_connected.
Originally Meteor used a different convention, in which underscores were sometimes encouraged. But now we’ve moved everything over to camelCase to match common JavaScript convention. If you see old inchworm_style identifiers you’re encouraged to convert them. There are a few places in public APIs where inchworm_style aliases are provided for backward compatibility.
it’s just a pseudo namespace.
for example a user list template would be part of my accounts module so I
would name it like ACuserList
It’s just a shorthand to help me organize/recognize code and help the ide
help me with better code completion | https://forums.meteor.com/t/question-regarding-template-naming-convention/3476 | CC-MAIN-2018-51 | refinedweb | 203 | 53.51 |
Sometimes.
Defining a settings.json file
Now, in this case
settings.json is not quite the same as in ASP.NET apps, and the same API you use there for loading configurations will not work here. That wasn't really my goal, as I mostly just need a simple key/value settings file approach.
My
settings.json file looks something like this:
{ "apiUrlBase": "", "apiKey": "123456" }
For local development I'd like to test against the locally running web API instance so I define a
local.settings.json file:
{ "apiUrlBase": "", "apiKey": "654321" }
Include the settings files in your project
Now the magic of having a
local.settings.json for debug and a
settings.json for release is that the build should pick the correct file for the configuration you are building.
This can be done very easily by using some MSBuild condition logic in your .csproj file:
<ItemGroup> <EmbeddedResource Include="settings.json" Condition="'$(Configuration)' != 'Debug' or !Exists('local.settings.json')" /> <EmbeddedResource Include="local.settings.json" Link="settings.json" Condition="'$(Configuration)' == 'Debug' and Exists('local.settings.json')" /> </ItemGroup>
Notice I'm including both files as
EmbeddedResourceitems. More on this in the next section.
For each file that I've included, I've added conditions for the item. The
settings.json file will be included if my configuration is not set to
Debug or if no
local.settings.json file exists. The
local.settings.json file will be included if the configuration is
Debug and the file actually exists.
The other thing to note is that
local.settings.json has
Link="settings.json" specified which means it will actually be embedded with the filename of
settings.json. This means no matter which file is used at build, the resource will be named the same in the output assembly, so we don't have to guess which filename to load at runtime.
You can make these conditions whatever you want. If you have a white label app, you could set a custom MSBuild property to specify the path to the settings file to use:
<ItemGroup> <EmbeddedResource Include="$(CustomerId).settings.json" Link="settings.json" Condition="'$(Configuration)' != 'Debug' and Exists('$(CustomerId).settings.json')" /> </ItemGroup>
You could then build with something like
-p:CustomerId=customer1 which would cause the build to use
cusomer1.settings.json.
Accessing the settings from your app's shared code
This part is rather easy. Since we used
EmbeddedResource as the item group name in our .csproj, the json file will be embedded into the output assembly as a resource. We can access it with a short bit of code at runtime:
// Get the assembly this code is executing in var assembly = Assembly.GetExecutingAssembly(); // Look up the resource names and find the one that ends with settings.json // Your resource names will generally be prefixed with the assembly's default namespace // so you can short circuit this with the known full name if you wish var resName = assembly.GetManifestResourceNames() ?.FirstOrDefault(r => r.EndsWith("settings.json", StringComparison.OrdinalIgnoreCase)); // Load the resource file using var file = assembly.GetManifestResourceStream(resName); // Stream reader to read the whole file using var sr = new StreamReader(file); // Read the json from the file var json = sr.ReadToEnd(); // Parse out the JSON var j = JObject.Parse(json); var apiUrlBase = j.Value<string>("apiUrlBase"); var apiKey = j.Value<string>("apiKey");
This simply parses the JSON and manually fetches the key/value pairs. You could of course create a C# class to use for deserialization to support more complex configuration hierarchies.
There you have it! A rather simple, yet elegant way to pivot your Xamarin app's configuration settings using a simple JSON file and some MSBuild conditions. | https://redth.codes/settings-json-files-in-xamarin-apps/ | CC-MAIN-2021-21 | refinedweb | 605 | 50.33 |
Redstone
Redstone is a server-side, metadata driven micro-framework for Dart.
How does it work?
Redstone.dart allows you to easily publish your functions and classes through a web interface, by just adding some annotations to them.
import 'package:redstone/server.dart' as app; @app.Route("/") helloWorld() => "Hello, World!"; main() { app.setupConsoleLog(); app.start(); }
Want to know more?
History
Redstone.dart was created by Luiz Henrique Farcic Mineo. On April 11th 2015, it was announced that Luiz would no longer be able to maintain this project. The community soon took to the issue tracker to plan a way to keep development up. Along with Luiz, decisions were made to put the entire project into the hands of the community. | https://www.dartdocs.org/documentation/redstone/0.5.21%2B1/ | CC-MAIN-2017-34 | refinedweb | 120 | 68.77 |
Hello, I have a list of documents that were uploaded in a previous step, in the next step I can update previously uploaded documents, I need a script idea of how to replace an item in a document list with a new version, someone could help me?
Hello,();
How do I find out if a service task has been terminated? Is it possible to figure this out through groovy script?
Hello All,
How can I get the current user id from within a script in a Operations assignment?
I want to store the user id in a BDM for future reference and auditing.
Thank you.
Hi.
I'm trying to insert values into a table in postgreSQL, I need to save the values of the variables that the user enters into the application.
I test with static values in this case the test is correct but when I put the variables the script does not work.
The script is the following:
`import groovy.sql.Sql;
import org.bonitasoft.engine.bpm.document.Document;
import org.bonitasoft.engine.bpm.document.DocumentValue;.
The "in-transaction" script connector is deprecated.
What logic replaces it's functionality? Viz: Creating and updating BDM instances other than those declared in the business data for the process?
Thanks
Chris | https://community.bonitasoft.com/tags/script-0 | CC-MAIN-2020-16 | refinedweb | 211 | 56.15 |
disable/enable a button depending on textbox in xaml
wpf button isenabled=(binding)
wpf disable button programmatically
wpf button enabled binding
wpf enable button when textbox is not empty
wpf disable button on validation error
xamarin disable button after click
wpf enable button based on multiple textbox
I have a TextBox
<TextBox x:
and two Buttons
<Button x: <Image Source="/Assets/images/left_arrow.png"/> </Button> <Button x: <Image Source="/Assets/images/right_arrow.png"/> </Button>
Is there a simple solution to enable/disable the Buttons trough the TextBox?
Like for example if the TextBox is empty the Buttons are disabled. And if the TextBox is not empty the Buttons are enabled.
You could apply a text changed event that checks the input every time it changes.
<TextBox x:
If the text is the way you need it, you can turn the button enabled/disabled.
public void TextBox_TextChanged(object sender, TextChangedEventArgs e) { if (searchTextBox.Text == result) next.IsEnabled = false; }
EDIT: Other than code behind like this approach you could learn about the MVVM design pattern. The other answers partly used practices common in MVVM.
There are various good tutorials for this all around the web.
How to make Button enabled/disabled depending on the TextBox , Keep button inactive until a TextBox has a value, using WPF. Sometimes we need to enable/disable controls based on a property of another Sometimes we need to enable/disable controls based on a property of another control. Like if a textbox has some value, only then enable the button, else disable. In the example below, I have a textbox txtName and once the user enters something (as soon as) into the textbox, then enable the “Add Name” button. Way 1. Way 2.
How about using binding + converter? I guess this concept is valid for UWP as well.
E.g. you have a view model with property
SearchText which is bound to the text in the TextBox. Then you can do the following:
<Window.Resources> <local:StringToBoolConverter x: </Window.Resources>
...
<TextBox x: <Button x:
And the converter code would be quite simple:
public class StringToBoolConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return !string.IsNullOrEmpty(value?.ToString()); } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } }
Another way to go is using Command pattern for the buttons. The
ICommand interface has
CanExecute method that will cantually disable or enable your buttons depending on the return value. See examples in the internet or here.
disable/enable a button depending on textbox in xaml, disable/enable a button depending on textbox in xaml. Multi tool Is there a simple solution to enable/disable the Buttons trough the TextBox? I was looking for code through google that enables and disables a button based on the text property from a textbox. If the textbox has some text then the button should be enabled or otherwise button should be disabled. I got some code but code in xaml but no c#. I am not being able to understand how it works.
Use binding for IsEnabled.
<Button x:</Button>
You can also use Data Triggers but the above is simplest. Converters are not required.
WPF IsEnabled Property (Button Example), So: The Button becomes enabled when the TextBox has text. It remains disabled whenever the TextBox is empty. Example markup: XAML <Window:
Control.Enabled Property (System.Windows.Forms), The example creates a TextBox and sets its Location within the group box. C# Copy. // Add a GroupBox to a form and set some of its common properties. private void With the Enabled property, you can enable or disable controls at run time. For example, a button can be disabled to prevent the user from clicking it. Enable/disable textbox based on checkbox selection in WPF using MVVM. I have a WPF form with as many as 40 textboxes, with a checkbox for each.
Control.IsEnabled Property (Windows.UI.Xaml.Controls), For example, if a control that contains a button has IsEnabled set to false, the However, a disabled control can still source the input events, and input routed Enable or Disable a Control with a CheckBox Using Data Binding By Michael Detras There are some times when we enable or disable some controls depending on whether a CheckBox in a form is checked or not.
WPF Commanding: Enable Disable Button with Command Property , Walkthrough for using WPF commanding for enabling / disabling Button The command property is available for action based elements for e.g. Button, text in the Search textbox, the Search button should become enabled. Depending on the initial “status” of the user I’d like to change the text on the button to show the action to perform e.g.: If the searched user is disabled, the text on the button has to change to ‘enable’ – and vice versa. As I’m very new to building a GUI with XAML, I have no idea how to do this in PowerShell. Example of the code below.
- Usage of event handlers should be discouraged in WPF
- I can't comply with that unless the question stated a design pattern like MVVM was used. I would agree with you otherwise as I don't like code behind that much myself. A simple approach was asked and I believe mine was pretty simple.
- When you code WPF, MVVM should be the default thought, Particularly styling and UI related stuff should be handled within XAML as long as sustainable. You are answer is perfectly alright, its just about encouraging the right behaviour for new programmers.
- This is an ambiguous topic to me. A programmer should always try to keep it simple and try things out for himself. So if something is ugly, but it works, it's fine for the programmer's solution. This is at least a very good way to learn how a program works. I can edit my answer though to do as you say, encourage newer developers to learn a widely accepted design pattern for WPF. | https://thetopsites.net/article/51740336.shtml | CC-MAIN-2021-25 | refinedweb | 1,002 | 64.3 |
Net::Google::PicasaWeb - use Google's Picasa Web API
version 0.11
use Net::Google::PicasaWeb; my $service = Net::Google::PicasaWeb->new; # Login via one of these $service->login('jondoe@gmail.com', 'north23AZ'); # Working with albums (see Net::Google::PicasaWeb::Album) my @albums = $service->list_albums( user_id => 'jondoe'); $album->title('Quick Trip To Italy'); # Listing photos (see Net::Google::PicasaWeb::MediaEntry) my @photos = $album->list_media_entries; my @recent = $album->list_media_entries( max_results => 10 ); my @puppies = $album->list_media_entries( q => 'puppies' ); my @all_puppies = $service->list_media_entries( q => 'puppies' ); # Updating/Deleting photos (or video) $photo->title('Plz to love RealCat'); # Listing tags my @user_tags = $service->list_tags( user_id => 'jondoe' ); my @album_tags = $album->list_tags; my @photo_tags = $photo->list_tags; # Listing comments (see Net::Google::PicasaWeb::Comment) my @recent = $service->list_comments( user_id => 'jondoe', max_results => 10 ); my @photo_comments = $photo->list_comments;
This module uses Moose to handle attributes and such. These attributes are readable, writable, and may be passed to the constructor unless otherwise noted.
This is an Net::Google::AuthSub object used to handle authentication. The default is an instance set to use a service of "lh2" and a source of "Net::Google::PicasaWeb-VERSION".
This is an LWP::UserAgent object used to handle web communication.
This is the base URL of the API to contact. This should probably always be unless Google starts providing alternate URLs or someone has a service providing the same API elsewhere..
When parsing the Google Data API response, these are the namespaces that will be used. By default, this is defined as:
{ '' => 'media', '' => 'gphoto', '' => 'georss', '' => 'gml', }
You may add more namespaces to this list, if needed.
my $service = Net::Google::PicasaWeb->new(%params);
See the "ATTRIBUTES" section for a list of possible parameters.
my $success = $service->login($username, $password, %options);
This is a shortcut for performing:
$service->authenticator->login($username, $password, %options);
It has some additional error handling. This method will return a true value on success or die on error.
See Net::Google::AuthSub.
my @albums = $service->list_albums(%params);
This will list a set of albums available from Picasa Web Albums. If no
%params are set, then this will list the albums belonging to the authenticated user. If the user is not authenticated, this will probably not return anything. Further control is gained by specifying one or more of the following parameters:
This is the user ID to request a list of albums from. The defaults to "default", which lists those belonging to the current authenticated user.
This method also takes the "STANDARD LIST OPTIONS".
my $album = $service->get_album( user_id => 'hanenkamp', album_id => '5143195220258642177', );
This will fetch a single album from the Picasa Web Albums using the given
user_id and
album_id. If
user_id is omitted, then "default" will be used instead.
This method returns
undef if no such album exists.
Returns a list of tags that have been used by the logged user or the user named in the
user_id parameter.
This method accepts this parameters:
The ID of the user to find tags for. Defaults to the current user.
This method also takes all the "STANDARD LIST OPTIONS".
Returns comments on photos for the current account or the account given by the
user_id parameter.
It accepts the following parameters:
This is the ID of the user to search for comments within. The comments returned will be commons on photos owned by this user. The default is to search the comments of the authenticated user.
This method also accepts the "STANDARD LIST OPTIONS".
my $comment = $service->get_comment( user_id => $user_id, album_id => $album_id, photo_id => $photo_id, comment_id => $comment_id, );
Retrieves a single comment from Picasa Web via the given
user_id,
album_id,
photo_id, and
comment_id. If
user_id is not given, "default" will be used.
Returns
undef if no matching comment is found.
Returns photos and videos based on the query options given. If a
user_id option is set, the photos returned will be those related to the named user ID. Without a user ID, the photos will be pulled from the general community feed.
It accepts the following parameters:
If given, the photos will be limited to those owned by this user. If it is set to "default", then the authenticated user will be used. If no
user_id is set, then the community feed will be used rather than a specific user. This option may not be combined with
featured.
This can be set to a true value to fetch the current featured photos on PicasaWeb. This option is not compatible with
user_id.
This method also accepts the "STANDARD LIST OPTIONS".
The "list_photos" and "list_videos" methods are synonyms for "list_media_entries".
my $media_entry = $service->get_media_entry( user_id => $user_id, album_id => $album_id, photo_id => $photo_id, );
Returns a specific photo or video entry when given a
user_id,
album_id, and
photo_id. If
user_id is not given, "default" will be used.
If no such photo or video can be found,
undef will be returned.
These helper methods are used to do some of the work.
my $response = $service->request($method, $path, $query, $content);
This handles the details of making a request to the Google Picasa Web API.
my $entry = $service->get_entry($class, $path, %params);
This is used by the
get_* methods to pull and initialize a single object from Picasa Web.
my @entries = $service->list_entries($class, $path, %params);
This is used by the
list_* methods to pull and initialize lists of objects from feeds.
Several of the listing methods return entries that can be modified by setting the following options.
This is the to limit the returned results to.
This option is only used when listing albums and photos or videos.
By passing a single scalar or an array reference of scalars, e.g.,
thumbsize => '72c', thumbsize => [ qw( 104c 640u d ) ], thumbsize => '1440u,1280u',
You may select the size or sizes of thumbnails attached to the items returned. Please see the documentation for a description of valid values.
This option is only used when listing albums and photos or videos.
This is a single scalar selecting the size of the main image to return with the items found. Please see the documentation for a description of valid values.
This option is only used when listing albums and photos or videos.
This is a tag name to use to filter the items returned.
This is a full-text query string to filter the items returned.
This is the maximum number of results to be returned.
This is the 1-based index of the first result to be returned.
This option is only used when listing albums and photos or videos.
This is the bounding box of geo coordinates to search for items within. The coordinates are given as an array reference of exactly 4 values given in the following order: west, south, east, north.
This option is only used when listing albums and photos or videos.
This may be set to the name of a geo location to search for items within. For example, "London".
Please report any bugs or feature requests to
bug-Net-Google-Picas Net::Google::PicasaWeb
You can also look for information at:
Thanks to:
Net::Google::PicasaWebnamespace and providing some sample code to examine.. | http://search.cpan.org/~hanenkamp/Net-Google-PicasaWeb-0.11/lib/Net/Google/PicasaWeb.pm | CC-MAIN-2016-30 | refinedweb | 1,169 | 56.45 |
x86 Disassembly/Disassembly Examples
From Wikibooks, open books for an open world
Example: Hello World Listing[edit]
Write a simple "Hello World" program using C or C++ and your favorite compiler. Generate a listing file from the compiler. Does the code look the way you expect it to? Do you understand what the assembly code means?
Here are examples of C and C++ "Hello World!" programs.
#include <stdio.h> int main() { printf("Hello World!\n"); return 0; }
#include <iostream> int main() { std::cout << "Hello World!\n"; return 0; }
Example: Basic Disassembly[edit]
Write a basic "Hello World!" program (see the example above). Compile the program into an executable with your favorite compiler, then disassemble it. How big is the disassembled code file? How does it compare to the code from the listing file you generated? Can you explain why the file is this size? | https://en.wikibooks.org/wiki/X86_Disassembly/Disassembly_Examples | CC-MAIN-2016-36 | refinedweb | 144 | 60.82 |
signature SEARCH functor MkSearch (Problem : PROBLEM where type space = Space.space) :> SEARCH where type solution = Problem.solution where type space = Space.space
The MkSearch functor expects a description of the constraint problem to solve, given as a structure PROBLEM. It returns a structure that can be used for searching one or many solutions.
Have a look at the examples.
See also: PROBLEM, PATH, Space.
import signature SEARCH from "x-alice:/lib/gecode/search-factory/SEARCH-sig" import structure MkSearch from "x-alice:/lib/gecode/search-factory/MkSearch"
signature SEARCH = sig type solution type space exception NotAssigned val init : solution Path.t -> unit val nextSolved : unit -> (space * solution Path.t) option val isFinished : unit -> bool val getOneSolution : unit -> (solution * solution Path.t) option val getAllSolutions : unit -> solution list val getUnexploredPath : unit -> solution Path.t option val stopSearch : unit -> unit val betterThan : solution -> unit end
The type of solutions. The MkSearch functor returns a type solution equal to Problem.solution.
The type of constraint spaces. The MkSearch functor returns a type space equal to Space.space.
Raised when the constraint problem to solve is under-specified, that is, a space is solved, but the variables necessary to read the solutions are not all determined. See the same exception in FD.
Optional. Sets the top node of the search (by default it is the root node of the search tree). Raise Fail if the search has already begun and is not finished yet.
Returns NONE if no more solution can be found. Otherwise, returns a pair (space, path) of the solved space and the path of this solution in the search tree.
Indicates if the search is finished.
Returns NONE if no more solution can be found. Otherwise, returns a pair (sol, path) of one new solution and the path of this solution in the search tree. Raises NotAssigned if the variables are not assigned in the solved space.
Returns a list of all the (remaining) solutions. In the case of Branch & Bound, the first solution of the list (head of the list) is the best solution. Raises NotAssigned if the variables are not assigned in the solved space.
Returns the path corresponding to some unexplored node, usually the highest available in the tree. Returns NONE if one or less such nodes are available. Returns SOME path otherwise. The returned path is removed from the list of unexplored nodes so that it will not be explored in the future. Thread-safe.
Stops the search. Thread-safe.
Optional. In case of Branch & Bound search, constrain the search tree by inserting a new solution found in another place (like in distributed search). In general, you need not call this function when doing Branch & Bound search. Thread-safe. | http://www.ps.uni-saarland.de/alice/manual/library/sf-search.html | CC-MAIN-2018-47 | refinedweb | 452 | 67.55 |
Important: Please read the Qt Code of Conduct -
[SOLVED] ToolBar DropShadow - mellow shadow with colored borders
I'm trying to create a material design toolbar with a blurred drop shadow.
The challenge is that I can either create an extremely harsh shadow that looks terrible, or I have to make the borders transparent, which leaves an undesirable white ring around the toolbar.
I mocked up a quick Qt App to demonstrate the issue. If "transparentborder" is removed, the shadow loses it's blur.
import QtQuick 2.4
import QtQuick.Controls 1.3
import QtQuick.Window 2.2
import QtQuick.Dialogs 1.2
import QtQuick.Layouts 1.1
import QtQuick.Controls.Styles 1.2
import QtGraphicalEffects 1.0
ApplicationWindow {
title: qsTr("Hello World")
width: 640
height: 480
visible: true
toolBar: ToolBar { id: mainToolBar width: parent.width height: 48 layer.enabled: true layer.effect: DropShadow{ radius: 4 samples: radius *2 verticalOffset: 3 source: mainToolBar color: "grey" transparentBorder: true } style: ToolBarStyle { background: Rectangle { id: toolBarRect color: "black" anchors.fill: parent border.color: "black" } } }
}
I've tried putting the drop shadow against the toolbar rectangle that creates the background color, and I've tried creating another rectangle just for the drop shadow, but nothing seems to work.
EDIT:
Figured it out. I anchored the background rectangle to each side individually, and then added a margin of -1 to compensate for the lack of border, with the exception of the bottom, which requires a margin of 0. If the bottom is set to -1 as well, then the harsh shadow returns. It doesn't matter what the shadow looks like on the other 3 sides, since they meet the edge of the window.
style: ToolBarStyle { background: Rectangle { id: toolBarRect color: "black" anchors { top: parent.top topMargin: -1 left: parent.left leftMargin: -1 right: parent.right rightMargin: -1 bottom: parent.bottom bottomMargin: 0 } } } | https://forum.qt.io/topic/52178/solved-toolbar-dropshadow-mellow-shadow-with-colored-borders/1 | CC-MAIN-2021-49 | refinedweb | 307 | 60.61 |
2.1 Sensors

2.1.1.1 Temperature sensor
The LM35 is an integrated circuit sensor that can be used to measure temperature, with an electrical output proportional to the temperature (in °C). The LM35 thus has an advantage over linear temperature sensors calibrated in kelvin, as the user is not required to subtract a large constant voltage from the output to obtain convenient Centigrade scaling. The LM35's low output impedance, linear output, and precise inherent calibration make interfacing to readout or control circuitry especially easy. The sensor circuitry is sealed and not subject to oxidation. The LM35 generates a higher output voltage than thermocouples and may not require that the output voltage be amplified.
Fig 2.1: LM35 connection circuit
Features
• Calibrated directly in °Celsius (Centigrade)
• Linear +10.0 mV/°C scale factor
• 0.5°C accuracy guaranteeable (at +25°C)
• Rated for full −55°C to +150°C range
• Suitable for remote applications
• Low cost due to wafer-level trimming
• Operates from 4 to 30 volts
• Less than 60 µA current drain
• Low self-heating, 0.08°C in still air
• Nonlinearity only ±¼°C typical
• Low impedance output, 0.1 Ohm for 1 mA load
2.1.1.2 LDR (Light Dependent Resistor)
A Light Dependent Resistor (LDR), also known as a photoresistor, is an electronic component whose resistance changes with the incident light intensity. The resistance of an LDR may be about 5000 Ohm in daylight and 20,000,000 Ohm in darkness.
Fig 2.2: LDR

Fig 2.3: Characteristics curve of LDR

Fig 2.4: Typical application circuit
2.1.2 Microcontroller (PIC18F4620)

We could have used a different microcontroller such as the 8051 or AVR, but we opted for the PIC family for the following advantages:
1. It has a large memory space.
2. It has a built-in ADC.
We specifically chose the PIC18F4620 because:
1. It has a large number of I/O pins.
2. It has 13 ten-bit ADC channels, which makes it easy for us to interface the sensors.
3. It is developed using nanoWatt technology, which reduces power consumption during operation.
Fig 2.5: Pin Configuration of PIC18F4620

The PIC18F4620 has 5 ports: A, B, C, D and E. Ports A, B, C and D are 8 bits wide, but port E is only 4 bits wide. All of these port pins are digital I/O except A6
and A7, to which the oscillator is connected. The analog input pins are A0, A1, A2, A3, A5, E0, E1, E2, B0, B1, B2, B3 and B4. Serial data to the computer is transmitted from C6 and received on C7. Serial data to the Ethernet chip is transmitted from C5 and received on C4. If pin no. 1 (MCLR) is pulled low, the microcontroller is reset.
Features                         PIC18F4620
Operating frequency              DC up to 40 MHz
Program memory                   64 KB
Temporary data memory (RAM)      approx. 4 KB
Permanent data memory (EEPROM)   1 KB
I/O ports                        Ports A, B, C, D, E
Serial communication             MSSP, USART
10-bit A/D module                13 channels
Instruction set                  75 instructions
Table 2.1: Features of PIC18F4620

There are three types of memory in PIC18 Enhanced microcontroller devices:
• Program Memory
• Data RAM
• Data EEPROM
i. Program Memory

PIC18 microcontrollers implement a 21-bit program counter, which is capable of addressing a 2-Mbyte program memory space. Accessing a location between the upper boundary of the physically implemented memory and the 2-Mbyte address will return all '0's (a NOP instruction). The PIC18F4620 has 64 Kbytes of Flash memory and can store up to 32,768 single-word instructions. PIC18 devices have two interrupt vectors. The Reset vector address is at 0000h and the interrupt vector addresses are at 0008h and 0018h. Writing or erasing program memory will cease instruction fetches until the operation is complete. The program memory cannot be accessed during the write or erase; therefore, code cannot execute. An internal programming timer terminates program memory writes and erases.

Fig 2.6: Memory Organisation

ii. Data Memory

The data memory in PIC18 devices is implemented as static RAM. Each register in the data memory has a 12-bit address, allowing up to 4096 bytes of data memory. The memory space is divided into as many as 16 banks that contain 256 bytes each; PIC18F4620 devices implement all 16 banks. The data memory contains Special Function Registers (SFRs) and General Purpose Registers (GPRs). The Special Function Registers (SFRs) are registers used by the CPU and peripheral modules for controlling the desired operation of the device. These registers are implemented as static RAM. SFRs start at the top of data memory (FFFh) and extend downward to occupy the top half of Bank 15 (F80h to FFFh). GPRs are used for data storage and scratchpad operations in the user's application. The entire data memory may be accessed by Direct, Indirect or Indexed Addressing modes.
iii. Data EEPROM

The data EEPROM is a nonvolatile memory array, separate from the data RAM and program memory, that is used for long-term storage of program data. It is not directly mapped into either the register file or program memory space but is indirectly addressed through the Special Function Registers (SFRs). The EEPROM is readable and writable during normal operation. The EEPROM data memory is rated for high erase/write cycle endurance. A byte write automatically erases the location and writes the new data (erase-before-write). The write time is controlled by an on-chip timer; it will vary with voltage and temperature as well as from chip to chip.
2.1.3 Ethernet chip (ENC28J60)
There are many Ethernet chips on the market, but we chose the ENC28J60 because it is produced by the same manufacturer as the PIC, making it easier for us to interface the two Microchip devices. Two dedicated pins are used for LED link and network activity indication. With the ENC28J60, two pulse transformers and a few passive components are all that is required to connect a microcontroller to a 10Mbps Ethernet network. The ENC28J60 is designed to operate at 25MHz with a crystal connected to the OSC1 and OSC2 pins. The ENC28J60 does not support automatic duplex negotiation. If it is connected to an automatic duplex negotiation enabled network switch or Ethernet controller, the
ENC28J60 will be detected as a half-duplex device. To communicate in Full-Duplex mode, the ENC28J60 and the remote node (switch, router or Ethernet controller) must be manually configured for full-duplex operation.

2.1.3.1 Ethernet Controller Features:

1. IEEE 802.3 compatible Ethernet controller
2. Receiver and collision squelch circuit
3. Supports one 10BASE-T port with automatic polarity detection and correction
4. Supports Full and Half-Duplex modes
5. Programmable automatic retransmit on collision
6. Programmable padding and CRC generation
7. Programmable automatic rejection of erroneous packets
8. SPI Interface with speeds up to 10Mb/s

2.1.3.2 Ethernet Controller Block Diagram
Fig2.7: Detailed Overview of ENC28J60
The ENC28J60 consists of seven major functional blocks:

1. An SPI interface that serves as a communication channel between the host controller and the ENC28J60.
2. Control Registers which are used to control and monitor the ENC28J60.
3. A dual port RAM buffer for received and transmitted data packets.
4. An arbiter to control access to the RAM buffer when requests are made from the DMA, transmit and receive blocks.
5. The bus interface that interprets data and commands received via the SPI interface.
6. The MAC (Medium Access Control) module that implements IEEE 802.3 compliant MAC logic.
7. The PHY (Physical Layer) module that encodes and decodes the analog data that is present on the twisted pair interface.

The device also contains other support blocks, such as the oscillator, on-chip voltage regulator, level translators to provide 5V tolerant I/Os and system control logic.
Fig2.8: Pin Configuration of ENC28J60
Fig2.9: Typical Application Circuit of Ethernet Chip

All memory in the ENC28J60 is implemented as static RAM. There are three types of memory in the ENC28J60:

• Control Registers
• Ethernet Buffer
• PHY Registers

The control registers' memory contains Control Registers (CRs). These are used for configuration, control and status retrieval of the ENC28J60. The Control Registers are directly read and written to by the SPI interface. The Ethernet buffer contains transmit and receive memory used by the Ethernet controller in a single memory space. The sizes of the memory areas are programmable by the host controller using the SPI interface. The Ethernet buffer memory can only be accessed via the read buffer memory and write buffer memory commands. The PHY registers are used for configuration, control and status retrieval of the PHY module. The registers are not directly accessible through the SPI interface; they can only be accessed through the Media Independent Interface (MII) implemented in the MAC.
Fig 2.10: Memory Organization of ENC28J60
i. Control Registers

The Control Registers provide the main interface between the host controller and the on-chip Ethernet controller logic. Writing to these registers controls the operation of the interface, while reading the registers allows the host controller to monitor operations. The Control Register memory is partitioned into four banks. Each bank is 32 bytes long and addressed by a 5-bit address value. The last five locations (1Bh to 1Fh) of all banks point to a common set of registers: EIE, EIR, ESTAT, ECON2 and ECON1. These are key registers used in controlling and monitoring the operation of the device.
ii. Ethernet Buffers

The Ethernet buffer contains transmit and receive memory used by the Ethernet controller. The entire buffer is 8 Kbytes, divided into separate receive and transmit buffer spaces. The sizes and locations of transmit and receive memory are fully programmable by the host controller using the SPI interface.

iii. PHY Registers

The PHY registers provide configuration and control of the PHY module, as well as status information about its operation. All PHY registers are 16 bits in width. There are a total of 32 PHY addresses; however, only 9 locations are implemented. Writes to unimplemented locations are ignored and any attempts to read these locations will return '0'. All reserved locations should be written as '0'; their contents should be ignored when read. Unlike the control registers or the buffer memory, the PHY registers are not directly accessible through the SPI control interface. Instead, access is accomplished through a special set of MAC control registers that implement a Media Independent Interface for Management (MIIM).
2.1.4 Magnetic Ethernet Jack
The magnetic Ethernet jack is a single-port shielded RJ45 connector with integrated 10/100 magnetics.
Fig 2.11: Magnetic Ethernet Jack
Fig2.12: Schematics of Magnetic Ethernet Jack
2.1.5 Serial port interfacing
A serial port is a serial communication physical interface through which information transfers in or out one bit at a time (in contrast to a parallel port). Throughout most of the history of personal computers, data transfer through serial ports connected the computer to devices such as terminals and various peripherals. Although interfaces such as Ethernet, FireWire, and USB all send data as a serial stream, the term "serial port" usually identifies hardware more or less compliant with the RS-232 standard, intended to interface with a modem or a similar communication device.
2.1.5.1 Serial (RS232) Port Interface Pin Out and Signals
RS232 DB9 pinout
Fig 2.13: Serial (RS232) Port Interface Pin Out and Signal
Table 2.2: RS232 pin configuration and signals
2.1.5.2 RS-232 Level Converters

Almost all digital devices which we use require either TTL or CMOS logic levels. Therefore, the first step in connecting a device to the RS-232 port is to transform the RS-232 levels back into 0 and 5 Volts. As we have already covered, this is done by RS-232 level converters. The MAX232 is an RS-232 level converter. It includes a charge pump, which generates +10V and -10V from a single 5V supply. This IC also includes two receivers and two transmitters in the same package, which is handy in many cases when we only want to use the transmit and receive data lines. Two separate chips, one for the receive line and one for the transmit line, are therefore not required.
Fig 2.14: Pin configuration of MAX232
Fig 2.15: Typical MAX-232 circuit
All this convenience comes at a price, but compared with the cost of designing a new power supply it is very cheap. There are also many variations of these devices. The large-value capacitors are not only bulky, but also expensive. Therefore, other devices are available which use smaller capacitors, and even some with built-in capacitors.
2.1.6 LCD
A liquid crystal display (LCD) is a thin, flat display device made up of any number of color or monochrome pixels arrayed in front of a light source or reflector.
Table 2.3: Pin connections of LCD
2.1.7 LAN connectivity
Our device can be connected to a LAN either through a wired Ethernet cable or through a wireless Wi-Fi access point/transmitter.
2.2 SOFTWARE COMPONENT OVERVIEW
Our project comprises the following software components:

• TCP/IP Stack
• C language
• Web/Database server
• PHP
• Proteus simulation software
2.2.1 TCP/IP STACK
Many TCP/IP implementations follow a software architecture referred to as the "TCP/IP Reference Model". Software based on this model is divided into multiple layers, where layers are stacked on top of each other (thus the name "TCP/IP Stack") and each layer accesses services from one or more layers directly below it.
Fig2.16: Comparing our TCP/IP Model with the TCP/IP Reference Model
Like the TCP/IP reference model, our TCP/IP Stack is divided into multiple layers. The code implementing each layer resides in a separate source file, while the services and APIs (Application Programming Interfaces) are defined through header/include files. Unlike the TCP/IP reference model, many of the layers in our TCP/IP Stack directly access one or more layers which are not directly below them. A decision as to when a layer bypasses its adjacent module for the services it needs is made primarily on the amount of overhead and whether a given service needs intelligent processing before it can be passed to the next layer. An additional major departure from traditional TCP/IP Stack implementations is the addition of two new modules: "StackTask" and "ARPTask". StackTask manages the operations of the stack and all of its modules, while ARPTask manages the services of the Address Resolution Protocol (ARP) layer.

To identify an individual computer on the Internet, it must have a unique address. The current version of the Internet Protocol (IPv4) uses a four-byte number, expressed in dotted decimal notation (e.g., 123.45.67.8). This address consists of three parts:

1. A network address, which uniquely identifies an organization.
2. A subnet address, which identifies a subnet within that organization.
3. A system address, which identifies a single node on that subnet.

The size of these fields varies, depending on the size of the organization, but they must occupy a total of four bytes.

2.2.1.1 Stack Modules

Following are some of the stack modules and APIs used.
a. MAC (Media Access Control Layer):
The Ethernet hardware doesn't understand IP addresses; it has its own addressing scheme based on a unique six-byte address for each network adaptor manufactured; this is generally called the media access control (MAC) address.
b. ARP & ARPTASK (Address Resolution Protocol)
The IP-to-hardware address translation protocol is called address resolution protocol (ARP). A node sends a subnet broadcast containing the IP address that is to be resolved, and the node that matches that IP address sends a response with its hardware address.
c. IP (Internet Protocol)
The first software layer above the network drivers is IP and its partner ICMP. Above these, there is a split: connection-oriented applications use the transmission control protocol (TCP), whereas connectionless applications use the user datagram protocol (UDP). An IP packet is known as a "datagram".
d. ICMP (Internet Control Message Protocol)
ICMP is an adjunct to IP that gives all nodes on the network the ability to perform simple diagnostics and return error messages. For example, if you ask a router to forward a datagram to an address it can't reach, it will return an ICMP "destination unreachable" message. ICMP messages are contained within the data field of an IP datagram, identified by the IP protocol number (1 for ICMP).
e. TCP (Transmission Control Protocol)
TCP does several jobs at once. It initiates a connection between two nodes and sends data bidirectionally between them. It handles network failures and datagram loss, and it closes the connection between the two nodes as well.
f. HTTP (Hypertext Transfer Protocol)

Hypertext Transfer Protocol (HTTP) simply involves an exchange of text messages followed by the transfer of Web data down a TCP connection. To fetch a Web document, the browser opens a TCP connection to server port 80, and then uses HTTP to send a request. Compared to TCP, HTTP is refreshingly simple: the request and response are one or more lines of text, each terminated by the newline (carriage return, line feed) characters. If the request is successful, the information (document text, graphical data) is then sent down the same connection, which is closed on completion. HTTP commands are called methods; the one used to fetch documents is the GET method.
Table 2.4: Request For Comments

The complete list of Internet RFCs and the associated documents is available on many Internet web sites, which interested readers can use as starting points.
2.2.2 “C” language with MCC18 compiler
The C language compiler/linker used by us was Microchip's MCC18, and the IDE was Microchip's MPLAB. Components of the C language such as comment definition, constant definition, variable definition, function declaration, operator usage, program control statements, arrays, strings, pointers, structures, and unions are similar to the ANSI C standards. Some of the keywords used in the MCC18 language are shown in the table below.

_asm, _endasm, auto, break, case, char, const, continue, default, do, double, return, extern, far, float, for, goto, if, int, long, near, ram, volatile, while, short, signed, sizeof, static, struct, switch, typedef, union, unsigned, void, else, enum
Table 2.5: Keywords of MCC18

The processor-specific library files contain definitions that may vary across individual members of the PIC18 family. This includes all of the peripheral routines and the Special Function Register (SFR) definitions. The peripheral routines that are provided include both those designed to use the hardware peripherals and those that implement a peripheral interface using general purpose I/O lines. The functions included in the processor-specific libraries comprise the Hardware Peripheral Functions and the Software Peripheral Library.
Advantages of C language in embedded systems:
1. It is easier to code and requires less effort.
2. It has many built-in functions.

Disadvantages of C language in embedded systems:
1. It occupies more memory space.
2. It is inconvenient for time-critical applications.
2.2.3. Web/Database server
MySQL is a multithreaded, multi-user SQL database management system (DBMS). The basic program runs as a server providing multi-user access to a number of databases. MySQL is popular for web applications and acts as the database component of the LAMP, MAMP, and WAMP platforms (Linux/Mac/Windows-Apache-MySQL-PHP/Perl/Python), and for open-source bug tracking tools like Bugzilla. Its popularity for use with web applications is closely tied to the popularity of PHP and Ruby on Rails, with which it is often combined.
2.2.4 PHP (PHP: Hypertext Preprocessor)
PHP is a widely-used general-purpose server-side scripting language that is especially suited for dynamic web development. The PHP Group also provides the complete source code for users to build, customize and extend for their own use. PHP primarily acts as a filter: the PHP program takes input from a file or stream containing text and special PHP instructions and outputs another stream of data for display. The eight data types in PHP available to the programmer are: Integer, Double, Boolean, String, Object, Array, Null and Resource. PHP has a formal development manual that is maintained by the free software community. In addition, answers to many questions can often be found by doing a simple internet search.
The simplest ways for the server to serve a request via CGI are the following: if the request identifies a file stored on disk, return the contents of that file; if the request identifies an executable command and possibly arguments, run the command and return its output. Whenever a request to a matching URL is received, the corresponding program is called, with any data that the client sent as input. Output from the program is collected by the Web server, augmented with appropriate headers as defined by the CGI spec, and sent back to the client. The information regarding physical parameters is sensed from the sensor. This is received by the microcontroller and stored in CGI variables. These CGI variables can be accessed by CGI scripts running on the web/database server. The data are stored in the MySQL database. Finally, the required graphical representation of the data can be displayed interactively on the website.
2.2.5 Proteus: Simulation Software
Proteus is circuit simulation software; its demo version can be downloaded for free. Proteus consists of two major software parts: ISIS and ARES. ISIS is a simulation package whereas ARES is a PCB making package. VSM for PIC18 contains everything we need to develop, test and virtually prototype our embedded system designs based around the Microchip Technologies PIC18. The unique nature of schematic-based microcontroller simulation with Proteus facilitates rapid, flexible and parallel development of both the system hardware and the system firmware. In this software we simulated most of our project circuits and code, including the ENC28J60 and PIC18F4620 interconnection and the C language code.
CHAPTER 3 METHODOLOGY
3.1 HARDWARE DESIGN IMPLEMENTATION
3.1.1 PIC Programmer Circuit
Fig3.1: Complete Circuit Diagram of the PIC18 Programmer
For loading the .HEX file in the PIC we used the provided P18 software.
We searched for various programmer circuits and built them on a breadboard, but all of those circuits failed. Finally we found the above circuit, which worked fine. It was really great to see the programmer circuit working. We then started to deal with the PIC18F4620, writing some basic programs such as LED blinking, LCD display and ADC port programming. Then we stepped toward sensor interfacing and Ethernet interfacing.
3.1.2 Sensor interfacing
Fig3.2: Sensor Interfacing with PIC18F4620

The LM35 is connected to an analog port of the PIC18F4620 microcontroller. The PIC18F4620 has 13 analog pins which can also function as digital I/O. Before the sensor is connected to one of these pins, the PIC18F4620 must first be configured to accept analog data on the specified pins; only then is the data from the sensor considered valid. The PIC18F4620 then converts the analog data to a 10-bit value and stores it in its ADRESH:ADRESL registers, which contain the value of the last completed conversion.
3.1.3 LCD interfacing
Fig3.3: LCD Interfacing with PIC18F4620

The MCC18 functions are designed to allow control of a Hitachi HD44780 LCD controller using I/O pins of a PIC18 microcontroller. The LCD we used in this project was a JHD162A series from ETC corporation. The LCD can be used in 4-bit or 8-bit mode. The 4-bit mode saves pins on the microcontroller but requires more processing time and makes the PIC18F4620 slower for other functions; thus we used the 8-bit mode. We used the LCD to display the IP address of the microcontroller and the current sensor value.
3.1.4 RS-232 interfacing
Fig3.4: Serial Interfacing with PIC18F4620

The PIC18F4620 has two serial port modules: the USART (Universal Synchronous and Asynchronous Receiver and Transmitter) and the MSSP (Master Synchronous Serial Port) module. We used the USART module for interfacing with the RS-232 port of the computer. The computer is used only for debugging purposes and is not strictly necessary for operation of the weather box.
#include <p18f4620.h>
#include <usart.h>

void main(void)
{
    // configure USART
    OpenUSART( USART_TX_INT_OFF &
               USART_RX_INT_OFF &
               USART_ASYNCH_MODE &
               USART_EIGHT_BIT &
               USART_CONT_RX &
               USART_BRGH_HIGH, 25 );

    while(1)
    {
        while( !PORTAbits.RA0 );   // wait for RA0 high
        WriteUSART( PORTD );       // write value of PORTD
        if(PORTD == 0x80)          // check for termination value
            break;
    }
    CloseUSART();
}
3.1.5 Ethernet chip interfacing
Fig3.5: Interfacing the Ethernet Chip

Communication with the host controller is implemented via two interrupt pins and the SPI, with data rates of up to 10Mb/s. The INT pin of the Ethernet chip is connected to the INT0 pin of the PIC18F4620 and the WOL pin is connected to the INT1 pin.
We performed level conversion because the microcontroller operates at 5V TTL levels while the Ethernet chip operates at 3.3V CMOS levels. The level conversion was done using a 74HCT08 (quad AND gate).
3.1.6 Complete Circuit Design
Fig3.6: Schematic Diagram of PIC Interfacing (Designed using Proteus v.7.1)

The two LEDs are added to the ENC28J60 for debugging purposes: the green LED signifies that a network connection is available and the red LED indicates that the network is ready to accept and transmit messages and packets. The LCD and the computer connected through the RS-232 are not needed for the operation of the device. The Ethernet chip is connected to the Ethernet cable by an RJ45 jack with magnetics included and a ferrite bead to reduce high-frequency EM interference. The sensors are connected to the AN0 and AN1 pins.
3.2 SOFTWARE DESIGN IMPLEMENTATION
3.2.1 TCP/IP Implementation

Our stack is a collection of different modules. Some modules (such as IP, TCP, UDP and ICMP) must be called when a corresponding packet is received. Any application utilizing our stack must perform certain steps to ensure that modules are called at the appropriate times. This task of managing stack modules remains the same, regardless of the main application logic. In order to relieve the main application from the burden of managing the individual modules, our TCP/IP Stack uses a special application layer module known as "StackTask", or the Stack Manager. This module is implemented by the source file "StackTsk.c". StackTask is implemented as a cooperative task; when given processing time, it polls the MAC layer for valid data packets. When one is received, it decodes it and routes it to the appropriate module for further processing. It is important to note that the Stack Manager is not an integral part of our TCP/IP Stack. It is written so that the main application does not have to manage stack modules in addition to its own work, such as RS-232, LCD, and sensor interfacing. Before the Stack Manager task can be put to work, it must be initialized by calling the StackInit( ) function. This function initializes the Stack Manager variables and individual modules in the correct order. Once StackInit( ) is called, the main application must call the StackTask( ) function periodically, to ensure that all incoming packets are handled on time, along with any time-out and error condition handling.
The modules implemented in our stack include ARP, IP, TCP, UDP, ICMP, and HTTP. FTP, DHCP and other protocols such as SLIP are not supported. The absence of a DHCP implementation in our project means that the IP address of the microcontroller is fixed and cannot be assigned by the network server.
For example, the stack manager operations concerning ICMP packet transmission and reception are shown and described below.
SM = Stack Manager, FT = Frame Type

Fig 3.7: Flow chart of TCP/IP Implementation (For ICMP only)

First of all, SM is set to idle. The SM then enters the MAC module, where its status changes to either ARP or IP depending on whether the frame type is ARP or IP. If the frame is IP and its frame type is found to be ICMP, the status is changed to ICMP and the SM enters the ICMP module to handle the ICMP task.
3.2.2 Web/Database Server
Our module is an IP-enabled device, so if a public IP is provided to it, it can certainly be accessed through the internet. But being based on an eight-bit microcontroller, it would not be able to serve the thousands of possible requests from the public domain; moreover, any hacker wanting to crash our system could generate a large number of requests that our module cannot handle. In order to address this problem and to manage the readings of different weather variables, a central web/database server is indispensable. For our demonstration we assigned the IP of our module to be 192.168.11.5 and the IP of the central web/database server to be 192.168.11.2 (192.168.11.X, where X is any value between 1 and 255 but not equal to 5), both connected to the same router. Connecting the two via a lab link cable or a crossover cable, with the data transfer rate set to 10Mbps in half-duplex mode, worked equally well. The challenging tasks were then to send a request to our module at a constant interval, retrieve the data, store it in a managed way, and show a graphical representation of the data at the time of a request. The first problem, of repeated queries to our module at a constant interval, was solved using the META tag in the PHP page which sends a request to our module. There was code in that very same page which would read the CGI page, "status.cgi", stored in our module, and the META tag recursively called the page itself. For our demonstration we called our page at an interval of one minute. The second problem, retrieving the data, was solved using string operations. The page "status.cgi" in our module was written in such a way that we kept certain tokens that would help to extract the required readings from the total contents of the file. First we read the whole contents of the file and stored them in a string.
Then, using a few string operations and taking into consideration the tokens we had kept, the readings were retrieved and stored in variables for further processing.
The third problem was to store the retrieved values in a manageable way. For that, as mentioned, we used a MySQL database. Before storing we also needed to record the time at which the reading was taken, using the built-in PHP function date( ), which had to be used with care. We stored the retrieved weather variables in such a way that 'year', 'month', 'day', 'hour' and 'minute' were also fields of the table; from the date( ) function we extracted all these fields, stored them in different variables, and saved them in the table along with the weather readings. Last but not least, we needed to represent the data in graphical format. The main challenge was to represent the data according to the user's request, since users can ask to view the plot on an hourly basis or on a daily basis as well. If we were to average the data at request time, the task would be prohibitively expensive. Say a user requests the average temperature on a yearly basis in order to get an idea of the impact of global warming at the place where our module is kept for monitoring: for each reading to be plotted we would need to average 525,600 (= 365 × 24 × 60) readings, because we were taking readings at an interval of one minute. To address this problem we created five tables. As data was being written into the main table, we checked the values of the date parameters. A minute value of 00 indicates that the hour has incremented by one, which triggers averaging of the past hour's readings into the 'hour' table. Similarly, an hour value of 00 indicates the change of day, which in turn leads to calculating the average reading of that day from the 'hour' table and storing it in the 'day' table, whose fields are only 'year', 'month' and 'day' along with the weather variables. In a similar manner, two more tables for month and year were also maintained.
This finally gave us proper data to show the requested graph in the public domain. For plotting the graph we used the gd_2 library of PHP. A sample of the user interface and a graph plotted using random temperature data are shown in 'fig e' and 'fig f' respectively.
CHAPTER 4 EPILOGUE
4.1 PROJECT TIMELINE
The project passed through all the phases: Research, Analysis, Design (Simulation and Hardware), Testing and Maintenance, and Documentation. The Gantt chart of our project is shown below.

4.2 PROJECT COST ANALYSIS
s/n  Hardware requirements     Specifications             Quantity  Rate ($)  Total ($)
1    Sensors:
     a. Temperature Sensor     LM35                       1         1.50      1.50
     b. LDR                                               1         0.25      0.25
2    Ethernet Controller       ENC28J60                   1         6.25      6.25
3    RJ45 Jack                 J1006FOIT                  1         8.00      8.00
4    Microcontroller           PIC18F4620                 1         7.75      7.75
5    LCD                       HD7740 family              1         4.47      4.47
6    Miscellaneous             Resistors, buffers, etc.                       10.00
     TOTAL MODULE COST                                                        38.22
7    Wi-Fi transceiver         From ICIMOD                1
8    PV panels                 From CES, IOE              1

Table 4.1: Financial Details of Hardware Requirements for a single module (1 $ = 63 NRs)
Thus the total weather module cost is approximately NRs 2,400.
4.3 FUTURE ENHANCEMENTS
This project is flexible and efficient for the proper remote monitoring of remote areas, and thus can be extended into an advanced remote data logger. With the addition of more weather sensors, it can provide data on various parameters. A solar panel can be added to our module so that it behaves as an independent unit and can be mounted in a remote area without any external power supply.
4.4 APPLICATIONS OF THE PROJECT
With the advancement of today's technology, merely logging data is not sufficient; the availability of data in usable form for real-time applications is most important. This is what is incorporated in our project.
With the feature of real-time availability, our aim is to provide an unmanned weather station, where sensing and recording of data are done automatically and the data are tabulated properly in a modern database system, which further enhances the usability of the recorded data. In botanical studies, for example in a greenhouse, the regular monitoring of data is an important and tedious task, and with the aid of our project this can be done quite easily and efficiently. It also provides references for further experiments, as the data are kept in a managed way in a modern database. Temperature and humidity monitoring at airports is also an important and critical task where our project can be utilized. In industrial plants as well, our project can be used to monitor different physical parameters, which in turn helps to analyse the performance of different sophisticated machines; predefined preventive measures can then be carried out to prevent disasters. Knowing the different weather parameters of snowy mountains is an essential factor for the development of mountain tourism in a country like Nepal. Small variations of weather parameters over small periods can contain vital information, and implementing our project helps to address those cases as well.
4.5 PROBLEMS FACED
Where there is a gravel road, it is easy to pave it; but where there is no road at all, constructing a paved road is even more difficult. This was the problem we faced when we chose to implement a web server in a microcontroller. For days we searched in the library for a similar project undertaken by our seniors, but could not find a single one. The next step in our project was to select the adequate hardware, and here we encountered a host of problems. Some major components of our project were not available in the local market, so we needed to get them from manufacturers abroad. The cost of obtaining each component was also far beyond our budget, as we needed to get more than one complete set of components, because in the course of our work the probability of components getting damaged could not be ignored; nor could we complain to the manufacturer if we received a damaged piece. Time was another important factor: we needed to order components with an adequate time margin so that we had enough time for the successful completion of our project. As a solution we decided to search for places where our project could be implemented for mutual benefit. ICIMOD was the organisation toward whom we should be grateful for relieving us from this mess by providing support and encouragement. Since the PIC18F series was unfamiliar to our friends and seniors as well, we also needed to work hard on getting appropriate programmers. With dedicated hard work and
searching we were able to accumulate a couple of programmer circuit schematics, but could not get a working one on our table; eventually we got one. The code for the microcontroller was to be written in C, but the working environment of MPLAB was quite different and we had to work hard to get familiarized with it. Directly testing our code on the hardware module could have had devastating effects, so we needed proper simulation software; however, getting relevant and reliable simulation software which provided the facility of simulating our components, and that too for free, was a tough job. A newer release of Proteus solved our problem. Though simulation helps in giving direction to a project, it is never an exact replica of the overall project, and for days we were frustrated as tasks would work in simulation but not on the hardware. On successful completion of the design in Proteus simulation, we implemented the design in hardware, where we faced various further problems. Since our Ethernet chip worked at 3.3V and the PIC18F4620 at 5V, we had to take care of voltage level conversion in our design, which was handled by using an AND gate as a 3.3V-to-5V logic shifter. Since we needed to work at a higher frequency (20 MHz), we also had to consider high-frequency design: for this we used a ferrite bead, twisted pair cable, decoupling capacitors, and proper alignment of active and passive components. Thus we can say that gradual debugging and reconnecting of components, taking into account the knowledge gained in the Instrumentation II course, really became fruitful.
49
4.6 CONCLUSION
The project entitled “IP based weather station: MAUSAM PARISUCHAK” undertaken by us consists of 3 major portions: the sensors, the microcontroller, and the Ethernet chip which is used to connect to a LAN. It includes the 3 concepts: embedded systems, TCP/IP communication, and a weather station. It provides real time data of weather in remote/inaccessible locations through a wireless /wired connection. The module built in our project acts as a web server and displays the sensor data in the form of web pages. This module finds its practical implementation in the remote mountainous area of Nepal which can provide data through the internet either by a series of microwave links or satellite links. We would like to thank all those who have supported us. We would also like to thank the teachers and our supervisors for providing us the needed support. Finally the support of ICIMOD is also highly appreciated.
50
REFERENCES
1. 2. 3. TCP/IP lean- Web servers for Embedded Systems by Jeremy Bentham 4. Microprocessors and Interfacing by Douglas V Hall and many more blogs
51 | https://www.scribd.com/doc/13991231/Real-Time-Embedded-Project | CC-MAIN-2018-05 | refinedweb | 6,736 | 55.24 |
This article was originally published in my blog.
TL;DR
When testing Redux, here are a few guidelines:
Vanilla Redux
- The smallest standalone unit in Redux is the entire state slice. Unit tests should interact with it as a whole.
- There is no point in testing reducers, action creators and selectors in isolation. As they are tightly coupled with each other, isolation gives us little to no value.
- Tests should interact with your redux slice same way your application will. Use action creators and selectors, without having to write tests targeting them in isolation.
- Avoid assertions like
toEqual/
toDeepEqualagainst the state object, as they create a coupling between your tests and the state structure.
- Using selectors gives you the granularity you need to run simple assertions.
- Selectors and action creators should be boring, so they won't require testing.
- Your slice is somewhat equivalent to a pure function, which means you don't need any mocking facilities in order to test it.
Redux +
redux-thunk
- Dispatching thunks doesn't have any direct effect. Only after the thunk is called is when we will have the side-effects we need to make our application work.
- Here you can use stubs, spies and sometimes mocks (but don't abuse mocks).
- Because of the way thunks are structured, the only way to test them is by testing their implementation details.
- The strategy when testing thunks is to setup the store, dispatch the thunk and then asserting whether it dispatched the actions you expected in the order you expected or not.
I have created a repo implementing the ideas above.
Intro
As a Software Engineer, I am always finding ways to get better at my craft. It is not easy. Not at all. Coding is hard enough. Writing good code is even harder.
Then there are tests. I think every single time I start a new project — professionally or just for fun — my ideas on how I should test my code change. Every. Single. Time. This is not necessarily a bad thing as different problems require different solutions, but this still intrigues me a little.
The Problem with Tests
As a ~most of the time~ TDD practitioner, I have learned that the main reason we write tests it not to assert the correctness of our code — this is just a cool side effect. The biggest win when writing tests first is that it guides you through the design of the code you will write next. If something is hard to test, there is probably a better way to implement it.
However, after if you have done this for some time, you realize that writing good tests are as hard as writing production code. Sometimes is even harder. Writing tests takes time. And extra time is something that your clients or the business people in your company will not give you so easily.
Ain't nobody got time for that! (Photo by Aron Visuals on Unsplash)
And it gets worse. Even if you are able to write proper tests, throughout the lifespan of the product/project you are working on, requirements will change, new scenarios will appear. Write too many tests, make them very entangled and any minor change in your application will take a lot of effort to make all tests pass again. Flaky tests are yet another problem. When it fails, you have no idea were to start fixing it. You will probably just re-run the test suite and if it passes, you are good to go.
Schrödinger's tests: sometimes they fail, sometimes they pass, but you cannot know for sure (Picture by Jie Qi on Flickr)
But how do you know if you are writing good tests? What the hell is a good test in the first place?
Schools of Testing
There is an long debate between two different currents of thoughts known as London School and Detroit School of Testing.
Summarizing their differences, while Detroit defends that software should be built bottom-up, with emphasis on design patterns and the tests should have as little knowledge as possible about the implementation and have little to no stubbing/mocking at all, London advocates that the design should be top-down, using external constraints as starting point, ensuring maximum isolation between test suites through extensive use of stubs/mocks, which has a side effect of having to know how the subject under test is implemented.
This is a very brief summary — even risking being wrong because of terseness — but you can find more good references about this two decades old conundrum here, here and here
Testing in the Real World
So which one is right, Londoners or Detrotians? Both of them and neither of them at the same time . As I learnt throughout the almost five years I have been a professional Software Engineer, dogmatism will not take you very far in the real world, where projects should be delivered, product expectations are to be matched and you have bills to pay.
What you really need is to be able to take the best of both worlds and use it in your favor. Use it wisely.
We live in a world where everybody seems obsessed with ~almost~ perfect code coverage, while the problem of Redundant Coverage is rarely mentioned — it is not very easy to find online references discussing this. If you abuse tests, you may end up having a hard time when your requirements suddenly change.
In the end we are not paid to write tests, we are paid to solve other people's problems through code. Writing tests is expensive and does not add perceivable value to the clients/users. One can argue that there is value added by tests, but in my personal experience it is very hard to make non-technical people to buy that.
What we as Software Engineers should strive for is to write the minimum amount of tests that yields enough confidence in code quality and correctness — and "enough" is highly dependent on context.
Redux Testing According to the Docs
Redux is known to have an outstandingly good documentation. In fact this is true. There is not only API docs and some quick examples, as there are also some valuable best practices advice and even links to more in depth discussions regarding Redux and its ecosystem.
However, I believe that the "Writing Tests" section leaves something to be desired.
Testing Action Creators
That section in the docs start with action creators.
export function addTodo(text) { return { type: 'ADD_TODO', text } }
Then we can test it like:) }) })
While the test is correct and passes just fine, the fundamental problem here is that it does not add much value. Your regular action creators should be very boring, almost declarative code. You do not need tests for that.
Furthermore, if you use helper libraries like
redux-act or Redux's own
@reduxjs/toolkit — which you should — then there is absolutely no reason at all to write tests for them, as your tests would be testing the helper libs themselves, which are already tested and, more important, are not even owned by you.
And since action creators can be very prolific in a real app, the amount of test they would require is huge.
But how can we know for sure our plain-old action creators do not contain silly errors like typos on them?
Bear with me. More on that later.
Testing reducers
In Redux, a reducer is a function which given a state and an action, should produce an entirely new state, without mutating the original one. Reducers are pure functions. Pure functions are like heaven to testers. It should be pretty straightforward, right?
The docs gives us the following } }
Then the test: } ]) }) })
Let's just ignore the fact that the suggested test case "should handle ADD_TODO" is actually two tests bundled together — with might freakout some testing zealots. Even though in this case I believe it would be best to have different test cases — one for an empty list and the other for a list with some initial values — sometimes this is just fine.
The real issue with those tests is that they are tightly coupled with the internal structure of the reducer. More precisely, the tests above are coupled to the state object structure through those
.toEqual() assertions.
While this example is rather simple, it is very common for the state of a given slice in Redux to change over time, as new requirements arrive and some unforeseen interactions need to occur. If we write tests like the ones above, they will soon become a maintenance nightmare. Any minimal change in the state structure would demand updating several test cases.
So how exactly are we supposed to write those tests?
Testing Redux the right way
Disclaimer: I am not saying this is the best or the only way of testing your Redux application, however I recently came to the conclusion that doing it the way I suggest bellow yields the best cost-benefit that I know of. If you happen to know a better way, please reach out to me through the comments, Twitter, e-mail or smoke signs.
Here is a popular folder structure for Redux applications that is very similar to the ones that can be found in many tutorials and even the official docs:
src └── store ├── auth │ ├── actions.js │ ├── actionTypes.js │ └── reducer.js └── documents ├── actions.js ├── actionTypes.js └── reducer.js
If you are like me and like to have test files colocated with the source code, this structure encourages you to have the following:
src └── store ├── auth │ ├── actions.js │ ├── actions.test.js │ ├── actionTypes.js │ ├── reducer.js │ └── reducer.test.js └── documents ├── actions.js ├── actions.test.js ├── actionTypes.js ├── reducer.js └── reducer.test.js
I have already left
actionTypes tests out as those files are purely declarative. However, I already explained why action creators should be purely declarative, and therefore should not be tested as well. That leaves us with testing the only reducer itself, but that does not seem quite right.
The problem here is what we understand as being a "unit" in Redux. Most people tend to consider each of the individual files above as being themselves a unit. I believe this is a misconception. Actions, action types and reducers must be tightly coupled to each other in order to function properly. To me, it does not make sense to test those "components" in isolation. They all need to come together to form a slice (e.g.:
auth and
documents above), which I consider to be the smallest standalone piece in Redux architecture.
For that reason, I am found of the Ducks pattern, even though it has some caveats. Ducks authors advocates everything regarding a single slice (which they call a "duck") should be placed in a single file and follow a well-defined export structure.
I usually have a structure that looks like more this:
src └── modules ├── auth │ ├── authSlice.js │ └── authSlice.test.js └── documents ├── documentsSlice.js └── documentsSlice.test.js
The idea now is to write the least amount of test possible, while having a good degree of confidence that a particular slice works as expected. The reason why Redux exists in the first place is to help us manipulate state, providing a single place for our application state to lie in.
In other words, the value Redux provides us is the ability to write and read state from a centralized place, called the store. Since Redux is based on the Flux Architecture, its regular flow is more or less like this:
The Flux Architecture by Eric Eliott on Medium
Redux Testing Strategy
In the end of the day, what we want to test is that we are correctly writing to — through dispatching actions — and reading from the store. The way we do that is by given an initial state, we dispatch some action to the store, let the reducer to its work and then after that we check the state to see if the changes we expect were made.
However, how can we do that while avoiding the pitfall of having the tests coupled with the state object structure? Simple. Always use selectors. Even those that would seem dumb.
Selectors are you slice public API for reading data. They can encapsulate your state internal structure and expose only the data your application needs, at the granularity it needs. You can also have computed data and optimize it through memoization.
Similarly, action creators are its public API for writing data.
Still confused? Let's try with some code using
@reduxjs/toolkit:
Here is my auth slice:
import { createSlice, createSelector } from '@reduxjs/toolkit'; export const initialState = { userName: '', token: '', }; const authSlice = createSlice({ name: 'auth', initialState, reducers: { signIn(state, action) { const { token, userName } = action.payload; state.token = token; state.userName = userName; }, }, }); export const { signIn } = authSlice.actions; export default authSlice.reducer; export const selectToken = state => state.auth.token; export const selectUserName = state => state.auth.userName; export const selectIsAuthenticated = createSelector([selectToken], token => token !== '');
Nothing really special about this file. I am using the
createSlice helper, which saves me a lot of boilerplate code. The exports structure follows more or less the Ducks pattern, the main difference being that I don't explicitly export the action types, as they are defined in the
type property of the action creators (e.g.:
'auth/signIn').
Now the test suite implemented using
jest:
import reducer, { initialState, signIn, selectToken, selectName, selectIsAuthenticated } from './authSlice'; describe('auth slice', () => { describe('reducer, actions and selectors', () => { it('should return the initial state on first run', () => { // Arrange const nextState = initialState; // Act const result = reducer(undefined, {}); // Assert expect(result).toEqual(nextState); }); it('should properly set the state when sign in is made', () => { // Arrange const data = { userName: 'John Doe', token: 'This is a valid token. Trust me!', }; // Act const nextState = reducer(initialState, signIn(data)); // Assert const rootState = { auth: nextState }; expect(selectIsAuthenticated(rootState)).toEqual(true); expect(selectUserName(rootState)).toEqual(data.userName); expect(selectToken(rootState)).toEqual(data.token); }); }); });
The first test case (
'should return the initial state on first run') is only there to ensure there is no problem in the definition of the slice file. Notice that I am using the
.toEqual() assertion I said you should not. However, in this case, since the assertion is against the constant
initialState and there are no mutations, whenever the state shape changes,
initialState changes together, so this test would automatically be "fixed".
The second test case is what we are interested in here. From the initial state, we "dispatch" a
signIn action with the expected payload. Then we check if the produced state is what we expected. However we do that exclusively using selectors. This way our test is more decoupled from the implementation
If your slice grows bigger, by using selectors when testing state transitions, you gain yet another advantage: you could use only those selectors that are affected by the action you dispatched and can ignore everything else. Were you asserting against the full slice state tree, you would still need to declare those unrelated state properties in the assertion.
An observant reader might have noticed that this style of testing resembles more the one derived from Detroit School. There are no mocks, stubs, spies or whatever. Since reducers are simply pure functions, there is no point in using those.
However, this slice is rather too simple. Authentication is usually tied to some backend service, which means we have to manage the communication between the latter and our application, that is, we have do handle side-effects as well as the loading state. Things start to get more complicated.
Testing a More Realistic Slice
The first step is to split our
signInFailure. The names should be self-explanatory. After that, our state needs to handle the loading state and an eventual error.
Here is some code with those changes:
import { createSlice, createSelector } from '@reduxjs/toolkit'; export const initialState = { isLoading: false, user: { userName: '', token: '', }, error: null, }; const authSlice = createSlice({ name: 'auth', initialState, reducers: { signInStart(state, action) { state.isLoading = true; state.error = null; }, signInSuccess(state, action) { const { token, userName } = action.payload; state.user = { token, userName }; state.isLoading = false; state.error = null; }, signInFailure(state, action) { const { error } = action.payload; state.error = error; state.user = { userName: '', token: '', }; state.isLoading = false; }, }, }); export const { signInStart, signInSuccess, signInFailure } = authSlice.actions; export default authSlice.reducer; export const selectToken = state => state.auth.user.token; export const selectUserName = state => state.auth.user.userName; export const selectError = state => state.auth.error; export const selectIsLoading = state => state.auth.isLoading; export const selectIsAuthenticated = createSelector([selectToken], token => token !== '');
The first thing you might notice is that our state shape changed. We nested
userName and
token in a
user property. Had we not created selectors, this would break all the tests and code that depends on this slice. However, since we did have the selectors, the only changes we need to do are in the
selectToken and
selectUserName.
Notice that our test suite is completely broken now, but that is because we fundamentally changed the slice. It is not hard to get it fixed though:
describe('auth slice', () => { describe('reducer, actions and selectors', () => { it('should return the initial state on first run', () => { // Arrange const nextState = initialState; // Act const result = reducer(undefined, {}); // Assert expect(result).toEqual(nextState); }); it('should properly set loading and error state when a sign in request is made', () => { // Arrange // Act const nextState = reducer(initialState, signInStart()); // Assert const rootState = { auth: nextState }; expect(selectIsAuthenticated(rootState)).toEqual(false); expect(selectIsLoading(rootState)).toEqual(true); expect(selectError(rootState)).toEqual(null); }); it('should properly set loading, error and user information when a sign in request succeeds', () => { // Arrange const payload = { token: 'this is a token', userName: 'John Doe' }; // Act const nextState = reducer(initialState, signInSuccess(payload)); // Assert const rootState = { auth: nextState }; expect(selectIsAuthenticated(rootState)).toEqual(true); expect(selectToken(rootState)).toEqual(payload.token); expect(selectUserName(rootState)).toEqual(payload.userName); expect(selectIsLoading(rootState)).toEqual(false); expect(selectError(rootState)).toEqual(null); }); it('should properly set loading, error and remove user information when sign in request fails', () => { // Arrange const error = new Error('Incorrect password'); // Act const nextState = reducer(initialState, signInFailure({ error: error.message })); // Assert const rootState = { auth: nextState }; expect(selectIsAuthenticated(rootState)).toEqual(false); expect(selectToken(rootState)).toEqual(''); expect(selectUserName(rootState)).toEqual(''); expect(selectIsLoading(rootState)).toEqual(false); expect(selectError(rootState)).toEqual(error.message); }); }); });
Notice that
userName and
token do not matter to it. Everything else is much in line with what we have discussed so far.
There is another subtlety that might go unnoticed. Even though the main focus of the tests is the reducer, they end up testing the action creators as well. Those silly errors like typos will get caught here, so we do not need to write a separate suite of tests to prevent them from happening.
The same thing goes for selectors too. Plain selectors are purely declarative code. Memoized selectors for derived data created with
createSelector from reselect should not be tested as well. Errors will get caught in the reducer test.
For example, if we had forgotten to change
selectUserName and
selectToken after refactoring the state shape and left them like this:
// should be state.auth.user.token export const selectToken = state => state.auth.token; // should be state.auth.user.userName export const selectUserName = state => state.auth.userName;
In that case, all test cases above would fail.
Testing Side-Effects
We are getting there, but our slice is not complete yet. It lacks the part that orchestrates the sign in flow and communicates with the backend service API.
Redux itself deliberately does not handle side-effects. In order to be able to do that, you need a Redux Middleware that will handle that for you. While you can pick your own poison,
@reduxjs/toolkit already ships with
redux-thunk, so that is what we are going to use.
In this case, the Redux docs actually has a really good example, so I basically took it and adapted to our use case.
In our
authSlice.js, we simply add:
// ... import api from '../../api'; // ... export const signIn = ({ email, password }) => async dispatch => { try { dispatch(signInStart()); const { token, userName } = await api.signIn({ email, password, }); dispatch(signInSuccess({ token, userName })); } catch (error) { dispatch(signInFailure({ error })); } };
Notice that the
signIn function is almost like an action creator, however, instead of returning the action object, it returns a function which receives the dispatch function as parameter. This is the "action" that will be triggered when the user clicks the "Sign In" button in our application.
This means that functions like
signIn are very important to the application, therefore, they should be tested. However, how can we test this in isolation from the
api module? Enter Mocks and Stubs.
Since this is basically an orchestration component, we are not interested in the visible effets it has. Instead, we are interested in the actions that were dispatched from within the thunk according to the response from the API.
So we can change the test file like this:
import configureMockStore from 'redux-mock-store'; import thunk from 'redux-thunk'; // ... import api from '../../api'; jest.mock('../../api'); const mockStore = configureMockStore([thunk]); describe('thunks', () => { it('creates both signInStart and signInSuccess when sign in succeeds', async () => { // Arrange const requestPayload = { email: 'john.doe@example.com', password: 'very secret', }; const responsePayload = { token: 'this is a token', userName: 'John Doe', }; const store = mockStore(initialState); api.signIn.mockResolvedValueOnce(responsePayload); // Act await store.dispatch(signIn(requestPayload)); // Assert const expectedActions = [signInStart(), signInSuccess(responsePayload)]; expect(store.getActions()).toEqual(expectedActions); }); it('creates both signInStart and signInFailure when sign in fails', async () => { // Arrange const requestPayload = { email: 'john.doe@example.com', password: 'wrong passoword', }; const responseError = new Error('Invalid credentials'); const store = mockStore(initialState); api.signIn.mockRejectedValueOnce(responseError); // Act await store.dispatch(signIn(requestPayload)); // Assert const expectedActions = [signInStart(), signInFailure({ error: responseError })]; expect(store.getActions()).toEqual(expectedActions); }); });
So unlike reducers, which are easier to test with Detroit School methodology, we leverage London School style to test our thunks, because that is what makes sense.
Because we are testing implementation details, whenever code changes, our tests must reflect that. In a real world app, after a sucessful signin, you probably want to redirect the user somewhere. If we were using something like connected-react-router, we would end up with a code like this:
+import { push } from 'connected-react-router'; // ... import api from '../../api'; // ... const { token, userName } = await api.signIn({ email, password, }); dispatch(signInSuccess({ token, userName })); + dispatch(push('/')); } catch (error) { dispatch(signInFailure({ error })); } // ...
Then we update the assert part of our test case:
+import { push } from 'connected-react-router'; // ... // Assert const expectedActions = [ signInStart(), signInSuccess(responsePayload), + push('/') ]; expect(store.getActions()).toEqual(expectedActions); // ...
This is often a criticism against
redux-thunk, but if you even so decided to use it, that is a trade-off you have to deal with.
Conclusion
When it comes to the real world, there is no single best approach for writing tests. We can and should leverage both Detroit and London styles to effectively test your applications.
For components which behave like pure functions, that is, given some input, produce some deterministic output, Detroit style shines. Our tests can be a little bit more coarse-grained, as having perfect isolation does not add much value to them. Where exactly we should draw the line? Like most good questions, the answer is "It depends".
In Redux, I have come to the conclusion that a slice is the smallest standalone unit that exists. It makes little to no sense writing isolated tests for their sub-components, like reducers, action creators and selectors. We test them together. If any of them is broken, the tests will show us and it will be easy to find out which one.
On the other hand, when our components exists solely for orchestration purposes, then London style tests are the way to go. Since we are testing implementation details, tests should be as fine-grained as they get, leveraging mocks, stubs, spies and whatever else we need. However, this comes with a burden of harder maintainability.
When using
redux-thunk, what we should test is that our thunk is dispatching the appropriate actions in the same sequence we would expect. Helpers like
redux-mock-store make the task easier for us, as it expose more of the internal state of the store than Redux native store.
T-th-tha-that's a-all f-fo-fo-folks!
Discussion
Same here. Thanks a lot. I had such a big confusion trying to test react app with redux-toolkit. The main reason for the confusion was that I was trying to mix my slice testing with UI testing and I was getting weird errors with useDispatch and useSelector not being properly handled.
Hey Henrique, I just want to deeply thank you for this amazing article.
This was exactly what I was looking for - how to do redux testing, but not the by-the-book way or someone else's specific way. Someone who's been there, done that, and tossed the stuff that's unnecessary. We're using immutable (hate) and saga (eh) so I'm not like gonna copy and paste your examples or anything. But this gives me a starting mentality. I may be the first one to do Redux unit tests in the company; so this is a good foundation. And I can point to this article if coworkers question what i'm doing. Tx!
Henrique....I have spent nearly a day looking around on web for best practices on writing tests for React + redux-toolkit + asyn thunks. This article has been IMMENSELY helpful.
Only things it was missing was typescript 😉.
Thank you so much for this article!!!!!!!
You're a legend ...
It would be good to see the updated version with createAsyncThunk but this has been so helpful, not only for Redux but testing in general: i.e. what to test ?
I tend to write too many tests, in doubt..
thanks a lot man ! @hbarcelos , finally figuring out the real case test when using RTK, since it doesnt have any documentation on their website, literally confusing.... | https://dev.to/hbarcelos/a-better-approach-for-testing-your-redux-code-2ec9 | CC-MAIN-2020-50 | refinedweb | 4,308 | 55.64 |
Hide/Unhide Non-Bookmarked Lines
Hello,
Is there any way to hide and unhide all non-bookmarked lines?
Thank you
- Ekopalypse last edited by
afaik, not as builtin feature.
A possible solution might be to use a scripting language plugin
and some code similar to the pythonscript below.
from Npp import editor bookmark_mask = 1<<24 line = 0 while True: line = editor.markerNext(line, bookmark_mask) if line == -1: break editor.hideLines(line,line) line+=1
This is just some demo code demonstrating the feature.
- Alan Kilborn last edited by Alan Kilborn
The hide lines feature in Notepad++ is “underdeveloped”; I’d stay away from it unless and until it is made better by the developers. BUT…I can see how what you want to do is valuable.
@Ekopalypse OP wanted to hide NON bookmarked lines but AFAICT at a quick look, your P.S. will hide the bookmarked lines instead?
- Ekopalypse last edited by
@Alan-Kilborn
you are correct and I thought is on bookmarked lines really correct?
Well, non-bookmarked makes sense :-)
Let’s see if OP wants to go that way.
@Ekopalypse, @Alan-Kilborn Thank you both for your input.
I do indeed wish to hide non-bookmarked lines. Although the code above seems to hide bookmarked lines only - and I don’t know how to unhide them either.
The reasoning is that I have a large database which contains over 24,000 download links - one per line, and I need to go through the painful task of editing each one of them. (I can’t see any other way to modify filenames on one server & modifying the respective link on another server at the same time!)
So to assist with the work, I can highlight all the download links by bookmarking them, and then hiding all other information, which would allow me to sift through the links easier.
- Alan Kilborn last edited by
@Mike-Smith said in Hide/Unhide Non-Bookmarked Lines:
I need to go through the painful task of editing each one of them
24000 things to examine and edit is a huge manual task. Perhaps if you elaborate a bit more and/or show some data, someone here might have some automation hints for you? Maybe it isn’t possible…but hopefully something could be done.
and I don’t know how to unhide them either.
As far as I know, Notepad++'s menus only offers a “Hide Lines”. After you’ve done that one or more times, you’ll see some arrows in the margin, example:
So the way to “unhide” these lines is to click on one of the green arrows. If you have a script that hides a lot of lines, showing them all again when desired is problematic because you’d have to click on a lot of green arrows. At that point the better way would be to simply restart Notepad++ (which doesn’t remember the status of hidden lines when exited and re-run).
I do indeed wish to hide non-bookmarked lines
It seems like you could run your bookmarking operation, then do a “Inverse Bookmark” command, and then run the script @Ekopalypse provided…to get what you want?
It seems like you could run your bookmarking operation, then do a “Inverse Bookmark” command, and then run the script @Ekopalypse provided…to get what you want?
Yes, that’s a good idea! I did just that, and it hid everything I didn’t need to edit. Thank you.
24000 things to examine and edit is a huge manual task.
I don’t thing it’s something that can be automated though. The issue is that I have all the download links in one database (which I’m editing through Notepad++), and the files are stored on a seperate server. The task I am currently processing, is to randomize all filenames:
So not only do I need to complete the task of randomizing the filenames (A task I’m achieving using Bulk Rename Utility), I have to then ensure the download links are changed to represent the relevant filename. | https://community.notepad-plus-plus.org/topic/19019/hide-unhide-non-bookmarked-lines | CC-MAIN-2021-31 | refinedweb | 680 | 69.41 |
COM Interoperability and .NET
Strong Naming
It is important to understand strong naming because it plays a role in COM Interop. A strong name is one that is globally unique to the particular assembly. This helps to avoid common DLL conflicts, a.k.a. DLL hell, such as naming conflicts and versioning issues. They also provide a security check that the contents of the assembly have not changed since it was last compiled. If the .NET object calling a COM component is strong named, the COM component needs to be strong named as well; otherwise, the advantages of strong naming are lost. If it is a COM client calling a .NET object, the .NET object needs to be strong named so that the CLR can resolve the name to the appropriate assembly.
How to Make an Assembly Strong Named
- Generate a key file at a command prompt: sn.exe -k key.snk.
- Add an attribute to the AssemblyInfo.cs file that references the generated key file: <Assembly: AssemblyKeyFile("..\..\key.snk")>. The path in the attribute is relative to the project output directory. A further explanation can be found in the default AssemblyInfo.cs file itself.
Use an Unmanaged COM Object in a Managed .NET Application
If you are doing new development using .NET and you need to utilize existing COM investments, this is the section for you. In my opinion, this is the most likely case in which you will need to use COM Interop. The process is different if you are planning to use strong naming for your assemblies or not. Both scenarios are outlined below, followed by an example scenario with some supporting code.
Option 1: .NET Client Object is not Strong Named
- Create a runtime callable wrapper (RCW) for the COM component so that the CLR can interact with the object as a managed type. There are a number of ways to generate the RCW. A couple of the more common ways are listed below.
- Use the Type Library Importer utility (tlbimp.exe). It is a manual command line driven utility that accepts different arguments. It converts a COM-specific type definition from a COM type library into equivalent definitions for .NET using a wrapper assembly. Example: tlbimp <TLB name>.tlb.
- An easier way is to let Visual Studio .NET do the work for you by clicking the Project menu, Add Reference menu item, COM tab, and then double-click the desired COM component from the list of registered components. Click OK to close the dialog, and the wrapper is now generated.
Option 2: .NET Client Object is Strong Named
- Generate a key file at the command line. Example: sn.exe -k <Key File name>.snk.
- Use the Type Library Importer (tlbimp.exe) to generate a strong named wrapper assembly. Example: tlbimp /keyfile:<Key File name>.snk <TLB name>.tlb.
- Reference the wrapper assembly DLL in the .NET project.
Sample COM Object
For the sake of this example, we have a COM object that contains a function that will pad the left side of a string with a specified string until it reaches a desired length. We'll pretend this is the greatest version of a pad function ever written and that we are compelled to reuse it in its current form. This function has been compiled in a Visual Basic 6.0 class called clsCommon that is part of an ActiveX dll called SampleUtil.dll when compiled. The code for the COM component is located below, and the client .NET code is located in the trailing section.
'****************************************************************' Description: Left pad the given string with the given string' until it reaches a string of the desired length.'' Parameters: v_strInput - string to pad' v_strPad - string to pad with' v_intLength - desired string length'' Return Val: String - string left padded with given char'****************************************************************Function leftPad(ByVal v_strInput As String, _ ByVal v_strPad As String, _ ByVal v_intLength As Integer) As StringOn Error GoTo ErrorCode Dim intCount As Integer ' Loop controlDim intLenPad As Integer ' Length of the pad stringDim strOutput As String ' Output string intCount = Len(v_strInput) intLenPad = Len(v_strPad) strOutput = v_strInput While (intCount < v_intLength) strOutput = v_strPad & strOutput intCount = intCount + intLenPad Wend leftPad = strOutput ErrorCode: If (Err.Number <> 0) Then leftPad = v_strInput End IfEnd Function
Sample .NET Application
I did not use a strong name for this example, so the only thing required to reference and use the COM component was to simply go through the Visual Studio .NET menus and add a reference to the SampleUtil COM component. The sample .NET client is below.
/// <remarks>/// Sample client to use the SampleUtil COM component./// </remarks>public class SampleUtilClient{ public SampleUtilClient() { SampleUtil.clsCommon utility = new SampleUtil.clsCommon(); string test = utility.leftPad("testing", "0", 50); }}
Page 2 of 3
| http://www.developer.com/net/net/article.php/11087_1730971_2/COM-Interoperability-and-NET.htm | CC-MAIN-2014-42 | refinedweb | 786 | 66.94 |
Creating Calendar Based Timers in Java EE 6
Creating Calendar Based Timers in Java EE 6
Join the DZone community and get the full member experience.Join For Free
Get the Edge with a Professional Java IDE. 30-day free trial.
Java EE 6 allows developers to create application timers that are initialized when either a Stateless Session Bean, a Singleton Bean or a Message Driven Bean are deployed to the application server.
To indicate that a method on any of these beans is to be invoked on a timed basis, the method must be annotated with either the @Schedule annotation (for single timer schedules), or the @Schedules annotation (for multiple timer schedules).
The code below shows a very simple Stateless Session Bean configured with 2 scheduled timers. The first timer is configured with one schedule whereas the second is configured with 2 schedules.
package com.acme.timer; import javax.ejb.Schedule; import javax.ejb.Schedules; import javax.ejb.Stateless; import javax.ejb.Timer; @Stateless public class CalendarTimer { @SuppressWarnings("unused") @Schedule(second = "*/10", minute = "*", hour = "8-17", dayOfWeek = "Mon-Fri", dayOfMonth = "*", month = "*", year = "*", info = "Scheduled Timer") private void scheduledTimeout(final Timer t) { System.out.println(t.getInfo().toString() + " called at: " + new java.util.Date()); } @SuppressWarnings("unused") @Schedules({ @Schedule(second = "15", minute = "*", hour = "8-17", dayOfWeek = "Mon-Fri", dayOfMonth = "*", month = "*", year = "*", info = "2nd Scheduled Timer"), @Schedule(second = "45", minute = "*", hour = "8-17", dayOfWeek = "Mon-Fri", dayOfMonth = "*", month = "*", year = "*", info = "2nd Scheduled Timer") }) private void scheduledTimeout2(final Timer t) { System.out.println(t.getInfo().toString() + " called at: " + new java.util.Date()); System.out.println(); } }As can be seen, the first timer is annotated with the @Schedule annotation. This annotation takes several parameters that define the timer schedule:
The table above shows the allowable values that can be used for each expression used to build up a schedule. These values can also be expanded into expressions to make more complex schedules.
Wildcard: A wildcard character (*) is used to indicate that the schedule will fire for every valid value of the specific operand. For example, setting the value second="0", minute="*" would cause a timer to be invoked every minute at 0 seconds.
Lists: Comma separated lists of values allow timers to occur at every value in the list rather than at all valid values as specified by the wildcard character. For example second="0", minute="0, 15, 30, 45" would cause a timer to be invoked every quarter of an hour.
Ranges: Hypen separated ranges allow timers to occur within the specified range. For example dayOfMonth="1-5" would cause a timer to be invoked every day for the first 5 days of each month.
Intervals: Intervals are defined in the format start/interval and are valid only for hours, minutes and seconds. An interval is defined as the start value for a timer and then the interval at which a timer will be invoked. For example hour="12/1" would cause a timer to be invoked on the hour, every hour in an afternoon. It's possible to combine the wildcard and interval expressions to cause a timer to be invoked every x hours, minutes or seconds. For example minute="*/10" would cause a timer to be invoked every 10 minutes.
The second method in the example above shows how 2 different schedules can be applied to a timer. In this instance, the method is annotated with the @Schedules annotation rather than the @Schedule annotation. }} | https://dzone.com/articles/creating-calendar-based-timers | CC-MAIN-2018-22 | refinedweb | 573 | 54.42 |
principle
Depth first search (DFS) follows the principle that it always goes along one side of the node, all the way to black, then returns to the starting node, and then continues to the next side. If it finds the target node, it returns, if it cannot find it, it will traverse all the nodes. Because there are only two sides of a binary tree, DFS, for a binary tree, first traverses the left subtree, then traverses the right subtree, which is equivalent to traversing in order.
The following animation demonstrates the process of finding 5:
The principle is simple and clear, but as the saying goes:
Details are the devil
Let's see how to implement DFS with Swfit, and give some problems that need to be noticed in programming.
Recursive implementation of DFS
Since recursion is very consistent with algorithm logic, let's first look at the implementation of recursion. First, define the nodes of binary tree:
public class TreeNode { public var val: Int //Node value public var left: TreeNode? //Left node public var right: TreeNode? //Right node public init(_ val: Int) { self.val = val self.left = nil self.right = nil } }
Complete recursive implementation:
func dfsTree(_ root: TreeNode?, _ dst: Int) -> TreeNode? { if (root == nil) { return nil } if (root?.val == dst) { return root } var dstNode = self.dfsTree(root?.left, dst) if (dstNode == nil) { dstNode = self.dfsTree(root?.right, dst) } return dstNode }
Because binary tree itself is defined recursively, it is very natural to use recursive call, as long as we deal with left first, then right.
Non recursive implementation of DFS
Due to the low efficiency of most recursion, we try to expand recursion into a loop. At this time, there are two problems to consider:
- After the left branch access is completed, to get the right branch back to the node, you need to record the visited nodes, here Array is used as the stack for storage.
- There are three states of a node: not accessed, accessed, and branch accessed.
For example, as shown in the animation, blue corresponds to no access, that is, the initial state of the tree; yellow represents that the node has been accessed and put into the stack; gray corresponds to that both left and right branches have been accessed, so how to store it? To save the access state of the left and right branches, it is used together with the node here tuple Save in stack:
(node:TreeNode, isCheckLeft:Bool, isCheckRight:Bool)
The complete non recursive implementation is as follows:
func loopDfsTree(_ root: TreeNode?, _ dst: Int) -> TreeNode? { if (root == nil) { return nil } var checkNodes = Array<(node:TreeNode, isCheckLeft:Bool, isCheckRight:Bool)>() checkNodes.append((root!, false, false)) while checkNodes.last != nil { var nodeInfo = (checkNodes.popLast())! let dstNode = nodeInfo.node if (dstNode.val == dst) { return dstNode } if (dstNode.left != nil && nodeInfo.isCheckLeft == false) { nodeInfo.isCheckLeft = true checkNodes.append(nodeInfo) checkNodes.append(((dstNode.left)!, false, false)) } else if (dstNode.right != nil && nodeInfo.isCheckRight == false) { nodeInfo.isCheckRight = true checkNodes.append(nodeInfo) checkNodes.append(((dstNode.right)!, false, false)) } else { } } return nil }
Thinking questions
Compared with the recursive and non recursive implementation of DFS, why does the recursive implementation not need to save the state information of nodes? You are welcome to leave a message. I believe that knowledge after deep thinking will last forever. | https://programmer.group/animation-demonstration-binary-tree-de-depth-first-search-dfs.html | CC-MAIN-2020-40 | refinedweb | 553 | 57.16 |
Opened on Jun 21, 2014 at 12:10:51 AM
Closed on Mar 4, 2015 at 8:39:14 PM
Last modified on Oct 10, 2017 at 6:33:12 AM
#2182 closed defect (fixed)
broken RTEMS CLOCKS_PER_SEC interface, when including only time.h
Description (last modified by Sebastian Huber)
I had to patch Lua (a very portable code) as follows for RTEMS 4.10. I had a look around in the source and maybe its not possible for _SC_CLK_TCK to be properly defined when expecting to include only the standard C lib clock interface via time.h. Maybe its fixed in 4.11, but nevertheless I should probably file a bug report while I see the issue, in case it isnt.
/*
- repair broken RTEMS CLOCKS_PER_SEC interface */
#ifdef rtems
# include <sys/unistd.h>
#endif
Attachments (4)
Change History (11)
Changed on Jun 23, 2014 at 5:53:41 AM by Sebastian Huber
comment:1 Changed on Jun 24, 2014 at 6:39:06 AM by Sebastian Huber
As the subject says, this bug is about CLOCKS_PER_SEC not sysconf().
A test case is:
#include <time.h>
int main()
{
clock_t x = CLOCKS_PER_SEC;
(void) x;
return 0;
}
Error:
In file included from /opt/rtems-4.11/arm-rtems4.11/include/time.h:18:0,
from test.c:1:
test.c: In function 'main':
test.c:5:16: error: '_SC_CLK_TCK' undeclared (first use in this function)
clock_t x = CLOCKS_PER_SEC;
test.c:5:16: note: each undeclared identifier is reported only once for each function it appears in
comment:2 Changed on Dec 18, 2014 at 12:38:35 PM by Sebastian Huber
POSIX says that CLOCKS_PER_SEC is one million. So it seems this mapping to _SC_CLK_TCK is wrong.
comment:3 Changed on Feb 24, 2015 at 3:15:25 PM by Gedare Bloom
XSI requires CLOCKS_PER_SEC to be one million. If we don't care about XSI conformance we can let it be anything. However we could define CLOCKS_PER_SEC to 1000000 and scale ticks to microseconds in _times().
comment:4 Changed on Feb 25, 2015 at 8:35:08 AM by Sebastian Huber
We should definitely fix this so that we are in line with Linux and BSD.
Changed on Feb 25, 2015 at 7:52:45 PM by Gedare Bloom
Different proposed fix for newlib.
Changed on Feb 25, 2015 at 7:53:06 PM by Gedare Bloom
Corresponding fix in RTEMS for different proposed fix in newlib.
comment:5 Changed on Feb 25, 2015 at 7:55:49 PM by Gedare Bloom
I added two patches (untested) that may fix the bug. I'm a little bit confused about the original logic in the #ifndef __RTEMS_USE_TICKS_FOR_STATISTICS__ part though. Why does it divide ticks by 100 when assigning to tms_utime? I got rid of that division but maybe it belongs?
Changed on Mar 3, 2015 at 4:12:51 PM by Gedare Bloom
Test case.
Proposed fix for Newlib. | https://devel.rtems.org/ticket/2182 | CC-MAIN-2019-30 | refinedweb | 486 | 71.65 |
In IBM Container Service, I have a few Docker images under my private registry , can I share them to other users of bluemix ?
In addition, can we create an organization like Docker Hub does to share an image with all users of my organization ?
thanks very much.
Answer by Marisa Lopez de Silanes (153) | Jun 29, 2015 at 05:34 PM
Hi Leo,
An organization has a single private Bluemix repository for Docker images. By default, these images are shared by all users within an organization.
You can push your docker images stored in your private registry to the Bluemix private repository.
You can pull down your images from the IBM Containers registry to your local host docker repository, then push them to any docker repository you have access to.
Before you can access the private Bluemix repository to push or pull an image, you need to log in to Bluemix with your credentials.
Hope this helps!
Marisa
Hi Marisa you wrote: "You can pull down your images from the IBM Containers registry to your local host docker repository,". How I can do that? docker pull is not supported and cf ic doesn't have this command :(
Thanks Andrea
Answer by timdp (16) | Jun 16, 2017 at 09:28 AM
You can now issue
read-only or
read-write tokens for
IBM Bluemix Container Registry using the
container-registry plugin for the
bx command.
Tokens can either be non-expiring (unless revoked) or expire after 24 hours and can be used by anyone in possession of them.
Images in the registry are visible to all users in the account, and each account can create multiple namespaces in which to store images.
You can read up on tokens and namespaces
39 people are following this question.
Logging into Container Registry using tokens is not working (2) 1 Answer
Is it possible to set custom(Nexus) Docker Image registry for IBM Delivery Pipeline? 1 Answer
ibmliberty container crashing locally 3 Answers
container volumes not writable by non-root user 2 Answers
Container shutsdown as soon as it is created. 1 Answer | https://developer.ibm.com/answers/questions/199314/can-i-share-images-in-my-private-registry-to-other.html | CC-MAIN-2019-30 | refinedweb | 349 | 59.64 |
Hi everyone! I'm a new member here.
First of all I would like to address my problem on C++ programming in serial communication. My program is expected to receive float number from the serial port COM7 and show it on exe window. But while I'm trying to compile it, there are quite a lot of errors.
Below is the code that I've written:
Is this program written in correct way to receive data from serial port? Any help and advice is highly appreciated. Thanks!Is this program written in correct way to receive data from serial port? Any help and advice is highly appreciated. Thanks!Code:#include <iostream> using namespace std; #include "serial.h" void main[] { CSerial serial; if (serial.Open(2, 9600)) { char* lpBuffer = new char[500]; float NumberRead = serial.ReadData(lpBuffer, 500); cout<<"Temperature = "<<NumberRead<<endl; delete []lpBuffer; } else AfxMessageBox("Failed to open port!"); } | http://cboard.cprogramming.com/cplusplus-programming/126129-how-receive-data-serial-port.html | CC-MAIN-2015-32 | refinedweb | 149 | 60.11 |
- Windows 10 Step by Step Tutorial – Hello UWP App
- How to Capture the Windows Mobile 10 Network Traffic with Fiddler?
- UWP Tips & Tricks #4 – DEP0001 : Unexpected Error: A Prerequisite for an install could not be satisfied.
- Visual Studio 2015 – The name “MapControl” does not exist in the namespace “using:Windows.UI.Xaml.Controls.Maps”
- UWP Tips & Tricks # 3 – Creating a Hosted Web App in Visual Studio 2015 ?
- UWP Tips & Tricks #2 – How to Enable Windows Runtime access from JavaScript in the Windows 10 Hosted Web App ?
- UWP Tips &Tricks #1 – AppBar and Default Behavior
- Windows Phone 10 – Unable to login to phone error something went wrong error code 0x801901f4
- How to Enable Developer Mode for Windows 10 using Group Policy Editor?
- Add Newline or line break in the Text attribute of TextBlock in Xaml
- Narrator in Your Windows Phone App
- Q&A #46 – What is App.xaml file in Universal App ?
- Q&A #45 – Which Capability is enabled by Default in Universal App ?
- Windows Store Error – The package identity associated with this update doesn’t match the uploaded appx:
- Windows Phone 8.1 and Windows Runtime Apps How to #19 – Specify the Minimized mode for BottomAppBar. | http://developerpublish.com/windowsphone-windows10/ | CC-MAIN-2018-13 | refinedweb | 196 | 54.32 |
ClassCastException: java.util.Date cannot be cast to java.sql.Date To fix this, you need to either change the type of Date object in your Affiliate class to java.sql.Date or do this ps.setDate(6, new: Let me check this. IN operator must be used with an iterable expression How much time would it take for a planet scale Miller-Urey experiment to generate intelligent life Manual update fails on server India
What do I do with my leftover cash? If you're storing it as a String, then you can format it using SimpleDateFormat in the format you want and then store it. –R.J Feb 5 '14 at 11:12 I believe you can use new java.sql.Date(dtToday.getTime()); –MadProgrammer Feb 4 '15 at 6:50 sqlDate = new Date(javaDate.getTime()); I think dtToday is the date you want to insert. share|improve this answer answered Mar 4 '13 at 17:23 Gilbert Le Blanc 34.3k53272 add a comment| up vote 0 down vote public class Time extends java.util.Date It is not possible to
Possible outcomes of fight between coworkers outside the office The cost of switching to electric cars? do you get an error message, or an incorrect answer? PowerShell vs Python How to be Recommended to be a Sitecore MVP more hot questions question feed default about us tour help blog chat data legal privacy policy work here advertising
Please go through and help me i want the db date in the fromat MM/dd/yyyy –user3222718 Feb 5 '14 at 11:15 @user3222718 - I say you post this as Check this Out Similar queries ClassCastException: java.util.Date cannot be cast to java.sql.Date - Stack Overflow ClassCastException: java.util.Date cannot be cast to java.sql.Date - Stack Overflow ClassCastException: java.util.Date cannot be cast to View More at... Browse other questions tagged java date jasper-reports or ask your own question.
How much does a CLW potion heal? Browse other questions tagged java oracle servlets or ask your own question. import java.sql.Date import java.util.Date ClassCastException: java.util.Date cannot be cast to java.sql.Date - S... Can I cite email communication in my thesis/paper?
To convert from java.util.Date to java.sql.Date, you can use: java.util.Date date = new java.util.Date(); java.sql.Date sqlDate = new java.sql.Date(date.getTime()); share|improve this answer edited Aug 26 '12 at 15:16 answered Aug 26 insert one record and change the value to a real date => The bug is triggered. String next_dt = req.getParameter("NextDate"); DateFormat dtFmt = null; dtFmt = new SimpleDateFormat("yyyy-MM-dd"); dtToday = (Date) dtFmt.parse(next_dt); java oracle servlets share|improve this question asked Feb 4 '15 at 6:49 Kimaya 1526 Thank you.
I'm sorry, this question seems completely unrelated to the bug. The converse it not true; you need to create a java.sql.Date from the java.util.Date. –Boris the Spider Feb 5 '14 at 11:00 add a comment| 4 Answers 4 active oldest votes What do you call a relay that self-opens on power loss? What do I do with my leftover cash?
If using JDBC directly, then for example java.sql.PreparedStatement methods only accept java.sql.Date, so you will have to construct that yourself. Is adding the ‘tbl’ prefix to table names really a problem? Why is innovation spelt with 2 n's while renovation is spelt with 1? You should rather ask in a Java focused mailing list or forum (and preferably attach a simple test case for the problem, e.g.
The java.time classes and the old classes have some convenience methods for converting back and forth -- useful while we wait for JDBC drivers to be updated to directly utilize the You can't just cast different types like that. To convert, use new methods added to the old classes. check my blog Why there are no approximation algorithms for SAT and other decision problems?
Yuck. */ switch (RowSetMD.getColumnType(columnIndex)) { case java.sql.Types.DATE: { long sec = ((java.sql.Date)value).getTime(); return new java.sql.Date(sec); } DB is Oracle 11g. I just started my first real job, and have been asked to organize the office party. The code you pasted formats a java.util.Date as yyyy-MM-dd.
Instant instant = myUtilDate.toInstant(); To determine a date, we need the context of a time zone. In a world with time travel, could one change the present by changing the future? As is java.sql.Date - that strips the time part. java.sql.Timestamp is a direct subclass of it.
more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed LocalDate todayLocalDate = LocalDate.now( ZoneId.of( "America/Montreal" ) ); // Use proper "continent/region" time zone names; never use 3-4 letter codes like "EST" or "IST". OCJP6, OCWCD5 mark reusen Greenhorn Posts: 22 posted 5 years ago The result I want is a Date object, so I can use it for updating the Date in the news share|improve this answer edited Jun 27 at 20:00 answered Jul 6 '15 at 4:55 Basil Bourque 40.2k8131185 add a comment| up vote 1 down vote Method for comparing 2 dates (util.date
Is privacy compromised when sharing SHA-1 hashed URLs? If those answers do not fully address your question, please ask a new question. Does f:x↦2x+3 mean the same thing as f(x)=2x+3? Apart from this, String , double etc are using, which are fine.
Join them; it only takes a minute: Sign up How to solve ClassCastException when passing date parameters to jasper report? How much does a CLW potion heal? Then use the formatted date to get the date in java.sql.Date java.util.Date utilDate = "Your date" SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd"); final String stringDate= dateFormat.format(utilDate); final java.sql.Date sqlDate= java.sql.Date.valueOf(stringDate); share|improve this View More at...
Not the answer you're looking for? Bug219011 - ClassCastException: java.lang.String cannot be cast to java.util.Date Summary: ClassCastException: java.lang.String cannot be cast to java.util.Date Status: RESOLVED FIXED Product: db Classification: Unclassified Component: Code Version: 7.3 Hardware: All All Example param1.put("fdate",new SimpleDateFormat("yourPattern").parse(jTfdate.getText())); Where the pattern, "yourPattern" should correspond to the text parsed see SimpleDateFormat pattern's. | http://systemajo.com/how-to/how-to-solve-java-lang-classcastexception-java-util-date-cannot-be-cast-to-java-sql-date.php | CC-MAIN-2018-34 | refinedweb | 1,099 | 68.67 |
Go Deeper!

This tutorial has been upgraded! Check out the newer Symfony 3 Controllers course.
3 steps. That's all that's behind rendering a page:

1. Symfony finds the route that matches the URL;
2. it reads that route's `_controller` key;
3. and it executes that function.
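To make the `_controller` key concrete, here's what a route might look like in YAML. This exact route isn't shown in this chapter - the route name and path are assumptions for illustration - but the placeholders line up with the `$count` and `$firstName` arguments our controller receives:

```yaml
# src/Yoda/EventBundle/Resources/config/routing.yml (hypothetical example)
event_index:
    path:     /hello/{firstName}/{count}
    defaults: { _controller: EventBundle:Default:index }
```

When a request matches `path`, Symfony executes `DefaultController::indexAction()` and passes the `{firstName}` and `{count}` values in as arguments.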
The controller is all about us: it's where we shine. Whether the page is HTML, JSON or a redirect, we make that happen in this function. We might also query the database, send an email or process a form submission here.
Tip
Some people use the word "controller" to refer both to the class (like
`DefaultController`) and to the action method inside that class.
Controller functions are dead-simple, and there's just one big rule: each one must return a Symfony `Symfony\Component\HttpFoundation\Response` object.
To create a new Response, add its namespace to the top of the controller class. I know, the namespace is horribly long, so this is where having a smart IDE like PHPStorm will make you smile:
```php
// src/Yoda/EventBundle/Controller/DefaultController.php
namespace Yoda\EventBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class DefaultController extends Controller
{
    // ...
}
```
Tip
If you're new to PHP 5.3 namespaces, check out our free screencast on the topic.
Now create the new Response object and quote Admiral Ackbar:
```php
public function indexAction($count, $firstName)
{
    return new Response('It\'s a traaaaaaaap!');
}
```
Now, our page has the text and nothing else.
Again, controllers are simple. No matter how complex things seem, the goal is always the same: generate your content, put it into a Response, and return it.
How would we return a JSON response? Let's create an array that includes the `$firstName` and `$count` variables and turn it into a string with `json_encode`. Now, it's exactly the same as before: pass that to a `Response` object and return it:

```php
public function indexAction($count, $firstName)
{
    $arr = array(
        'firstName' => $firstName,
        'count'     => $count,
        'status'    => 'It\'s a traaaaaaaap!',
    );

    return new Response(json_encode($arr));
}
```
Now our browser displays the JSON string.
Tip
There is also a `JsonResponse` object that makes this even easier.
Wait. There is one problem. By using my browser's developer tools, I can see that the app is telling my browser that the response has a `text/html` content type.
That's ok - we can fix it easily. Just set the `Content-Type` header on the `Response` object to `application/json`:

```php
public function indexAction($count, $firstName)
{
    // ...

    $response = new Response(json_encode($arr));
    $response->headers->set('Content-Type', 'application/json');

    return $response;
}
```
Now when I refresh, the response has the right `Content-Type` header.
I know I'm repeating myself, but this is important, I promise! Every controller returns a Response object and you have full control over each part of it.
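It helps to remember what a Response object actually wraps. An HTTP response is just three parts - a status line, headers, and a body - and "full control" means control over each of them. Here's a framework-agnostic sketch (not Symfony code) of how those parts assemble into the raw text a server sends:

```python
# Framework-agnostic sketch: the three parts of an HTTP response,
# assembled into the raw text that travels to the browser.
def build_raw_response(status, headers, body):
    lines = ['HTTP/1.1 %d' % status]
    lines += ['%s: %s' % (name, value) for name, value in headers.items()]
    return '\r\n'.join(lines) + '\r\n\r\n' + body

raw = build_raw_response(200, {'Content-Type': 'application/json'}, '{"status": "ok"}')
print(raw.splitlines()[0])  # HTTP/1.1 200
```

Symfony's `Response` is essentially a nicer API over these same three pieces, which is why setting a header, like we just did, is a one-liner.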
Time to celebrate: you've just learned the core of Symfony. Seriously, by understanding the routing-controller-Response flow, we could do anything.
But as much as I love printing Admiral Ackbar quotes, life isn't always this simple. Unless we're making an API, we usually build HTML pages. We could put the HTML right in the controller, but that would be a Trap!
Instead, Symfony offers you an optional tool that renders template files.
Before that, we should take on another buzzword: services. These are even trendier than bundles!
Symfony is basically a wrapper around a big bag of objects that do helpful things. These objects are called "services": a techy name for an object that performs a task. Seriously: when you hear service, just think "PHP object".
Symfony has a ton of these services - one sends emails, another queries the database and others translate text and tie your shoelaces. Symfony puts the services into a big bag, called the "mystical service container". Ok, I added the word mystical: it's just a PHP object and if you have access to it, you can fetch any service and start using it.
And here's the dirty secret: everything that you think "Symfony" does, is actually done by some service that lives in the container. You can even tweak or replace core services, like the router. That's really powerful.
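The pattern itself isn't Symfony-specific. Stripped down, a service container is a map from string ids to factories, handing back one shared instance per id. This toy Python sketch (names and API invented for illustration - not Symfony's actual container) shows the core idea:

```python
class Container:
    """A toy service container: a bag mapping service ids to factories."""

    def __init__(self):
        self._factories = {}
        self._services = {}

    def register(self, service_id, factory):
        self._factories[service_id] = factory

    def get(self, service_id):
        # Build each service lazily, the first time someone asks for it...
        if service_id not in self._services:
            self._services[service_id] = self._factories[service_id]()
        # ...then hand back the same shared instance every time.
        return self._services[service_id]


container = Container()
container.register('templating', lambda: object())

# The same instance comes back on every get() call - services are shared.
print(container.get('templating') is container.get('templating'))  # True
```

"Tweaking or replacing a core service" then amounts to registering a different factory under an id the framework already uses.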
In any controller, this is great news because, surprise, we have access to the mystical container via `$this->container`:

```php
public function indexAction($count, $firstName)
{
    // not doing anything yet...
    $this->container;

    // ...
}
```
Tip
This only works because we're in a controller and because we're extending the base `Symfony\Bundle\FrameworkBundle\Controller\Controller` class.
One of the services in the container is called `templating`. I'll show you how I knew that in a bit:

```php
public function indexAction($count, $firstName)
{
    $templating = $this->container->get('templating');

    // ...
}
```
This templating object has a `render` method on it. The first argument is the name of the template file to use and the second argument holds the variables we want to pass to the template:

```php
// src/Yoda/EventBundle/Controller/DefaultController.php
// ...

public function indexAction($count, $firstName)
{
    $templating = $this->container->get('templating');

    $content = $templating->render(
        'EventBundle:Default:index.html.twig',
        array('name' => $firstName)
    );

    // ...
}
```
The template name looks funny because it's another top secret syntax with three parts:
```
EventBundle:Default:index.html.twig
 => src/Yoda/EventBundle/Resources/views/Default/index.html.twig
```

This looks like the `_controller` syntax we saw in routes, but don't mix them up. Seriously, one points to a controller class & method. This one points to a template file. Open up the template.
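As an aside, that three-part-name-to-path resolution is purely mechanical, so it can be sketched in a few lines. Everything below is an illustrative Python sketch - in particular, the bundle-to-directory lookup table is an assumption, not how Symfony really locates bundles:

```python
# Illustrative only: mapping a "Bundle:Controller:template" logical name
# to a file path. The lookup table below is an assumption for this sketch.
BUNDLE_DIRS = {'EventBundle': 'src/Yoda/EventBundle'}

def resolve_template(logical_name):
    bundle, controller, template = logical_name.split(':')
    return '%s/Resources/views/%s/%s' % (BUNDLE_DIRS[bundle], controller, template)

print(resolve_template('EventBundle:Default:index.html.twig'))
# src/Yoda/EventBundle/Resources/views/Default/index.html.twig
```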
```twig
{# src/Yoda/EventBundle/Resources/views/Default/index.html.twig #}

Hello {{ name }}
```
Welcome to Twig! A curly-little templating language that you're going to fall in love with. Right now, just get fancy by adding a strong tag:
```twig
Hello <strong>{{ name }}</strong>
```
Back in the controller, the `render` method returns a string. So just like before, we need to put that into a new `Response` object and return it:
```php
public function indexAction($count, $firstName)
{
    $templating = $this->container->get('templating');

    $content = $templating->render(
        'EventBundle:Default:index.html.twig',
        array('name' => $firstName)
    );

    return new Response($content);
}
```
Refresh. There's our rendered template. We still don't have a fancy layout - just relax, I can only go so fast!

## Make this Shorter

Since rendering a template is pretty darn common, we can use some shortcuts. First, the `templating` service has a `renderResponse` method. Instead of returning a string, it puts it into a new `Response` object for us. Now we can remove the `new Response` line and its `use` statement:
```php
// src/Yoda/EventBundle/Controller/DefaultController.php
namespace Yoda\EventBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;

class DefaultController extends Controller
{
    public function indexAction($count, $firstName)
    {
        $templating = $this->container->get('templating');

        return $templating->renderResponse(
            'EventBundle:Default:index.html.twig',
            array('name' => $firstName)
        );
    }
}
```
### And even Shorter

Better. Now let's do less. Our controller class extends Symfony's own base controller. That's optional, but it gives us shortcuts. Open up the base class - I'm using a "go to file" shortcut in my editor to search for the `Controller.php` file. One of its shortcuts is the `render` method. Wait, this does exactly what we're already doing! It grabs the `templating` service and calls `renderResponse` on it:
```php
// vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Controller/Controller.php
// ...

public function render($view, array $parameters = array(), Response $response = null)
{
    return $this->container->get('templating')->renderResponse(
        $view,
        $parameters,
        $response
    );
}
```
Let's just kick back, call this method and return the result:
```php
public function indexAction($count, $firstName)
{
    return $this->render(
        'EventBundle:Default:index.html.twig',
        array('name' => $firstName)
    );
}
```
I'm sorry I made you go the long route, but now you know about the container and how services work behind the scenes. And as you use more shortcut methods in Symfony's base controller, I'd be so proud if you looked to see what each method *actually* does.

Controllers are easy: put some code here and return a `Response` object. And since we have the container object, you've got access to every service in your app. Oh right, I haven't told you what services there are! For this, go back to our friend console and run the `container:debug` command:
```bash
php app/console container:debug
```
It lists every single service available, as well as what type of object it returns. Color you dangerous. Ok, onto the curly world of Twig!